AI development too fast and risky, says Max Tegmark
Max Tegmark, a co-founder of the Future of Life Institute and a professor of physics at the Massachusetts Institute of Technology, has warned that tech firms are locked in a “race to the bottom” to develop powerful artificial intelligence systems without considering the potential risks. Tegmark organised an open letter in March 2023 calling for a six-month pause in the development of giant AI systems that could surpass human intelligence and escape human control. The letter was signed by more than 30,000 people, including Elon Musk and Steve Wozniak, but failed to secure a moratorium from leading AI companies such as Google, OpenAI and Microsoft.
Tegmark told the Guardian that he did not expect the letter to stop tech companies from working towards AI models more powerful than GPT-4, the large language model that powers ChatGPT, because competition had become too intense. “I felt that privately a lot of corporate leaders I talked to wanted [a pause] but they were trapped in this race to the bottom against each other. So no company can pause alone,” he said.
The letter warned of an “out-of-control race” to develop minds that no one could “understand, predict, or reliably control”, and urged governments to intervene if a moratorium on developing systems more powerful than GPT-4 could not be agreed between leading AI companies. It asked: “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilisation?”
AI risks comparable to pandemics and nuclear war, say experts
Tegmark said he viewed the letter as a success, as it raised awareness and sparked debate about the dangers and benefits of AI. He pointed to a political awakening on AI that has included US Senate hearings with tech executives and the UK government’s decision to convene a global summit on AI safety in November 2023. He also said that expressing alarm about AI had gone from being taboo to being a mainstream view since the letter’s publication.
The letter from his thinktank was followed in May 2023 by a statement from the Center for AI Safety, backed by hundreds of tech executives and academics, declaring that AI should be considered a societal risk on a par with pandemics and nuclear war. The statement read: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
AI ethics and regulation needed to prevent harm, say advocates
Tegmark said he hoped that the letter and the statement would inspire more action from tech companies, governments and civil society to ensure that AI is developed safely and ethically. He said he was encouraged by some initiatives, such as the Partnership on AI, which brings together industry leaders and researchers to establish best practices and standards for AI. He also praised some tech companies for being more transparent and accountable about their AI work, pointing to Google’s Responsible AI team and OpenAI’s alignment research.
However, he also said more needed to be done to prevent harmful or malicious uses of AI, such as cyberattacks, surveillance, discrimination or manipulation. He said he supported creating an international body or treaty to oversee and regulate AI development and deployment, similar to the International Atomic Energy Agency or the Paris Agreement on climate change. He said: “We need to have some kind of global governance mechanism for AI, because it’s not enough to have national laws or regulations. AI is a global phenomenon, and it can cross borders easily. We need to have some kind of international cooperation and coordination to ensure that we don’t end up in a situation where we have rogue actors or rogue states that use AI for nefarious purposes.”