A new edition of the AI Safety Index, published by the Future of Life Institute, concludes that major artificial intelligence developers, including OpenAI, Anthropic, xAI, and Meta, fall short of emerging global safety standards. An independent panel of experts evaluated the companies' practices and found that none has a sufficiently robust strategy for controlling the superintelligent systems it is developing.

The assessment revealed a clear divide in the companies' approaches to safety: Anthropic, OpenAI, and Google DeepMind lead the field, yet even the best performer earned only a 'C+'. The index scored 35 safety indicators, including risk assessment practices and whistleblower protections. The report recommends that companies adopt measures such as greater transparency, the use of independent safety evaluators, and a reduction in lobbying against regulation.