According to the Future of Life Institute’s newly released AI Safety Index, the safety mechanisms used by companies such as Anthropic, OpenAI, xAI, and Meta “fall significantly short of emerging international requirements.”
The independent expert panel that conducted the assessment found that, while these companies are racing to build superintelligence, none has developed a reliable method for controlling such powerful systems.
The study's publication comes amid a string of troubling incidents: in recent months, several cases of suicide and self-harm linked to AI chatbots have been reported, raising new questions about the potential harms of systems that are "smarter than a human."
"Despite recent scandals involving AI hacking, as well as cases of AI-induced mental disorders and self-harm, American AI companies remain less regulated than even restaurants and continue to lobby actively against mandatory safety standards," said Max Tegmark, a professor at MIT and president of the Future of Life Institute.
Meanwhile, the AI race continues, with tech giants pouring hundreds of billions of dollars into expanding and improving machine learning systems.