Existential Safety Is AI Industry’s Core Weakness, Study Warns
Category: Global Government & Alliances

Summary:
A study by the Future of Life Institute (FLI) published on December 3 evaluated eight major AI developers: U.S. companies Anthropic, OpenAI, Google DeepMind, Meta, and xAI, and Chinese firms Z.ai, DeepSeek, and Alibaba Cloud. The Winter 2025 AI Safety Index assessed these companies across six themes, including current harms, safety frameworks, and existential safety. The expert panel conducting the review found that all companies showed a significant weakness in planning for existential safety, even the highest-scoring developers. FLI said that increasing transparency into company practices is intended to encourage improvements in managing extreme risks from future AI models that could match or surpass human capabilities.

Tags: Future of Life Institute, AI developers, U.S. companies, Chinese firms, AI Safety Index, existential safety, transparency

Source Excerpt:

Eight major artificial intelligence (AI) developers are failing to plan for how they would manage extreme risks posed by future AI models that match or surpass human capabilities, according to a study by the Future of Life Institute (FLI) published Dec. 3. FLI’s Winter 2025 AI Safety Index assessed U.S. companies Anthropic, OpenAI, Google DeepMind, Meta, and xAI, and Chinese companies Z.ai, DeepSeek, and Alibaba Cloud across six themes, which included current harms, safety frameworks, and existential safety. The independent panel of experts who conducted the review found that even with the highest-scoring developers, “existential safety remains the industry’s core structural weakness.” …

Source: Algemeiner

Posted on 12-05-2025 14:06