California is advancing first-in-the-nation legislation to establish safety measures for the largest artificial intelligence systems, with the aim of reducing potential risks created by AI.
The bill would require companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons — scenarios experts say could become possible given the industry’s rapid advancement. The legislation is supported by Anthropic, an AI startup backed by Amazon and Google.
Read more: California Advances Landmark Legislation to Regulate Large AI Models