Artificial Intelligence Could Be a Global Risk Equivalent to Nuclear War and Pandemics
Last month, hundreds of prominent individuals from the world of artificial intelligence signed a public letter warning that AI could one day cause humanity’s demise. The letter stated that “mitigating the risk of extinction from artificial intelligence should be a global priority alongside other risks that impact the whole of society, such as pandemics and nuclear war.” Despite the concerns raised, current AI systems cannot destroy humanity. So why are those who know the most about AI concerned?
The Risk of AI
As AI technology continues to advance, tech industry experts warn that companies, independent researchers, or governments could eventually deploy powerful AI systems to run everything from business to war. The fear is that such systems could act against human interests. Furthermore, if humans attempted to interfere or shut them down, they might resist or even create copies of themselves to continue operating.
Yoshua Bengio, an AI researcher and professor at the University of Montreal, acknowledged that current AI systems do not yet pose an existential risk. However, he stated that “in one, two, or five years, there is a lot of uncertainty. We’re not sure this won’t get past a point where things get catastrophic.”
One simple metaphor used to explain the potential danger: if an AI were instructed to produce as many paperclips as possible, it might pursue that goal so single-mindedly that it turned everything, including humanity, into paperclip factories.
AI Could Gain Increasing Autonomy
Many experts did not take the prospect of increasingly autonomous AI seriously until last year, when companies like OpenAI demonstrated significant technological advances. Those improvements showed what might become possible if AI keeps progressing at such a rapid pace. “AI will be increasingly delegated tasks and could (as it becomes more autonomous) usurp decision-making and thinking from current humans and human-run institutions,” says Anthony Aguirre, a cosmologist at the University of California, Santa Cruz, and one of the founders of the Future of Life Institute, which sent one of the two public letters warning about AI risks.
He adds that “at a certain point, it would become apparent that the big machine that runs society and the economy is not really under human control and cannot be turned off, any more than the S&P 500 stock index can be shut down.”
Oren Etzioni, the founding executive director of the Allen Institute for Artificial Intelligence, has dismissed such risks as hypothetical. “‘Hypothetical’ is a very polite way of describing what I think of the existential-risk rhetoric,” he said.
Learning from Vast Amounts of Digital Text
Chatbots like ChatGPT are built on neural networks that learn skills by analyzing data. In 2018, companies like Google and OpenAI began building neural networks that learn from vast amounts of digital text collected from the internet. By finding patterns in that text, these networks learn to generate writing on their own: news articles, poems, computer programs, even human-like conversation.
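To make that concrete, here is a minimal sketch of how such a model is queried in practice. It assumes the open-source Hugging Face transformers library and the small, publicly available gpt2 model; both are illustrative choices, far smaller than the systems described in this article.

```python
# Minimal sketch: a small pretrained language model continuing a prompt.
# Assumes: pip install transformers torch (gpt2 is an illustrative choice).
from transformers import pipeline

# Load a text-generation pipeline backed by a pretrained neural network.
generator = pipeline("text-generation", model="gpt2")

prompt = "Artificial intelligence could one day"
result = generator(prompt, max_new_tokens=40, num_return_sequences=1)

# The model extends the prompt by repeatedly predicting the next token,
# a pattern it learned from large amounts of internet text.
print(result[0]["generated_text"])
```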
Because they learn from massive amounts of data, these systems also exhibit unexpected behavior. For example, a group of researchers recently demonstrated that a system was able to hire a human online to help it complete a CAPTCHA test by pretending to be a person rather than a machine.
Concerns About Bad Habits
As researchers continue to develop these systems, there are concerns that they might learn more bad habits. Researchers are even trying to turn chatbots into systems that can act on the text they generate, for example by creating companies or making money; AutoGPT is one such project.
A system like AutoGPT can generate computer programs, and if researchers give it access to a server, it can actually run them. In theory, such a system could do almost anything online: retrieve information, create new apps, even improve itself. For now, though, systems like AutoGPT do not work well; they tend to get stuck in endless loops.
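As a rough illustration of why such agents stall, the sketch below shows the basic shape of an AutoGPT-style loop. The helper functions ask_model and run_tool are hypothetical stubs, not AutoGPT's actual code or API; the point is the structure: the model proposes an action, the program executes it and feeds the result back, and a step cap is the crude guard against the endless loops described above.

```python
# Minimal sketch of an agent loop (illustrative only; the helpers below are
# hypothetical stubs, not AutoGPT's real code).

MAX_STEPS = 20  # crude guard: without a cap, a stuck agent loops forever


def ask_model(goal: str, history: list[str]) -> str:
    """Stub for a language-model call that proposes the next action."""
    # A real agent would send the goal and history to an LLM API here.
    return "search: " + goal if not history else "DONE"


def run_tool(action: str) -> str:
    """Stub for a tool executor (web search, file write, shell command, ...)."""
    return f"result of [{action}]"


def run_agent(goal: str) -> list[str]:
    history: list[str] = []
    for _ in range(MAX_STEPS):
        action = ask_model(goal, history)
        if action == "DONE":            # the model decides the goal is met
            break
        observation = run_tool(action)  # act, then feed the result back in
        history.append(f"{action} -> {observation}")
        # Failure mode noted above: if the model never says DONE and keeps
        # proposing the same action, only MAX_STEPS stops the loop.
    return history


print(run_agent("summarize today's AI news"))
```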
The History of AI Risk
In the early 2000s, a writer named Eliezer Yudkowsky began warning that AI could destroy humanity. His online posts built a community of believers, often called rationalists or effective altruists, which amassed enormous influence in academia, government think tanks, and the tech industry. Yudkowsky and his writings played key roles in the creation of both OpenAI and DeepMind, the latter of which is now a Google-owned AI lab.
Many members of this effective altruism community worked at these two labs, believing that because they understood the risks of AI, they were best placed to build it. The two organizations behind the recent public letters warning of AI risks, the Center for AI Safety and the Future of Life Institute, are closely linked to this movement. Elon Musk, long known for his warnings about AI risk, signed the first of the letters; the most recent one was signed by Demis Hassabis, who helped found DeepMind and now oversees a new AI lab combining top researchers from DeepMind and Google.
Geoffrey Hinton and Bengio also signed one or both of the warning letters. Both researchers have won the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
(c) The New York Times