Few honors can compete with a Nobel Prize in spotlighting humanity’s greatest accomplishments—and its greatest challenges. After this year’s announcements in Stockholm, many in the artificial intelligence community are in a celebratory mood. At a time when AI’s adoption has been uneven and even Goldman Sachs—arguably among the most reliable peddlers of AI hype—has climbed down from the dizzying heights of unfettered optimism, the Nobels have given the technology a fresh boost. With declarations of AI’s “Nobel moment” and the prizes in physics and chemistry hailed as its “coming out party”, there is renewed hope that, finally, AI is poised to shape history.
“The scientific Nobel Prizes have always, in their way, honored human intelligence. This year, for the first time, the transformative potential of artificial intelligence (AI) has been recognized, as well”, the Economist wrote.
Not so fast. AI may have won scientific acknowledgement, but the more consequential recognition went to the risks that come with AI’s unfettered growth.
First, look beyond the prizes explicitly honoring accomplishments in AI to the signals conveyed by the others. Take the Nobel Peace Prize. It was awarded to Nihon Hidankyo, an organization representing survivors of the atomic bombs dropped on Hiroshima and Nagasaki in Japan, which campaigns to ensure that nuclear weapons are never used again. Henrik Urdal, director of the Peace Research Institute Oslo, said, “In an era where automated weapon systems and AI-driven warfare are emerging, their call for disarmament is not just historical, it is a critical message for our future”. Its Nobel Peace Prize ought to be read with this message in mind, as a warning about the dangers of cataclysmic outcomes when technologies—such as AI—spin out of human control.
Can AI’s use lead to such outcomes? Consider that the Nobel Prize in physics was shared by Geoffrey Hinton, who is known as the “godfather of AI” and is a leading voice of caution about the potential for the technology to become more intelligent than us and take control. Hinton is troubled by the unfettered development of a technology that could bypass human decision-makers and institutional safeguards and, indeed, launch nuclear attacks. In so many ways, Hinton follows in the tradition of Frédéric Joliot, who won a Nobel Prize in chemistry with his wife, Irène Joliot-Curie, for discovering the first artificially created radioactive atoms. Joliot warned of cataclysmic consequences in the use of atomic energy—the very cataclysm that the Nihon Hidankyo collective has lived through.
Beyond Hinton’s unease about his own work, this year’s chemistry prize was shared by Demis Hassabis and John Jumper, who led the use of AI to predict the structures of proteins, the molecules essential to life—opening the door to numerous medical discoveries in drugs, treatments, and other therapies. While an AI pioneer no doubt, Hassabis, too, has a cautionary message for the enthusiasts; he has urged that AI’s risks be taken as seriously as global issues like climate change, with its present-day catastrophic effects. Hassabis can also afford to offer such warnings, as his company DeepMind has been acquired by Google. Meanwhile, the industry’s wider obsession with chasing generative AI risks drawing resources away from further advances in DeepMind’s tools. Those tools, if developed further, have life-saving implications. DeepMind’s most essential tool, AlphaFold, can predict the structure of nearly every protein known to science, which could revolutionize drug discovery, disease research, and our understanding of how proteins interact with other molecules in the human body. Hopefully, the Nobel recognition will help redirect resources toward accelerating discoveries that realize AlphaFold’s potential. That said, Hassabis had a wider warning. Speaking on climate change, he cautioned, “It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI”. A different Nobel voice; a different form of disquietude.
The laureates-to-be had, nonetheless, harmonized their worries in a highly publicized open letter in May 2023, calling for the risk of human extinction from AI to be treated as a global priority alongside other major worldwide crises. Notably, Hinton was the first signatory, with Hassabis third in a list of more than 350 luminaries.
But the Nobel rumblings do not end there. In fact, there is even discord in the chorus: the Nobel Prize in economics. The 2024 prize was shared by the Massachusetts Institute of Technology’s (MIT) Daron Acemoglu, who considers the focus on AI’s human extinction risks to be misguided. Acemoglu argued that, instead, we should investigate the motivations of the actors that control AI development and whether they should be regulated; he worries that the chorus of voices speaking out about distant risks drowns out the more immediate ones. Those risks stem from a wide variety of causes: AI-aided state surveillance, control of the technology in the hands of too few firms, businesses hastily disregarding the value of human workers and replacing them with machines, and the resulting income and societal inequalities. Acemoglu also frets that the productivity gains of AI are oversold. His concerns, and his Nobel recognition, arise in part from an analysis of the history of transformational technologies with fellow 2024 Nobel laureate in economics, Simon Johnson, also of MIT. They found that a narrow elite captured the prosperity associated with breakthrough technologies, from the steam engine to the spinning jenny to the assembly line, while much of society did not get to share in it.
Whether one heeds the warnings from Acemoglu and Johnson, from Hinton and Hassabis, or even from Nihon Hidankyo, this is more a Nobel moment for AI risk than for AI itself. This season of announcements is a timely warning that if we simply forge ahead in investing a trillion dollars in AI—as is anticipated—and fail to invest in mitigating the many risks accelerated by AI’s uncontrolled development, we may have to reserve a future Nobel Peace Prize for the survivors of the technology’s ill effects, a Nihon Hidankyo for the AI age.
What are the risk mitigation areas in which we must invest? Several ought to be on the next Nobel aspirant’s checklist: improved oversight, accountability, and controls within organizations and in multilateral fora, along with regulatory and legal frameworks and standards; training of human “pilots” to accompany AI and retraining of the workforce it displaces; reducing the dependence of both companies and governments on a handful of AI bottlenecks; expanding—at scale—the benefits of AI used for education, health care, and financial access, ensuring timely information that leads to critical interventions for the broadest share of the population; and widening the geographic footprint of the AI industry and of the datasets used to train algorithms to ensure a more inclusive dispersion of AI’s benefits, possibly making its availability a global public good, much like the core internet.
AI’s real Nobel moment may come when we have invested in these areas. Then we will have developed an exciting technology that is a true societal multiplier, rather than what it is today: a very efficient multiplier of all our divides.
Bhaskar Chakravorti is the dean of global business at Tufts University’s Fletcher School of Law and Diplomacy. He is the founding executive director of Fletcher’s Institute for Business in the Global Context, where he established and chairs the Digital Planet research program.