Editor’s note: This article is edited and adapted, with permission, from a conversation held at New College, Oxford, on November 19, 2024.
Shivaji Sondhi: It’s an instructive time to discuss the intersection of AI with various domains of human experience. As a scientist, I tend to think of advances in science and technology as the deepest drivers of history, though the ordering is often unclear. Then you have economics, which might take around 50 years to shape culture. On a shorter time scale still, there’s the classic subject of politics—about power and its exercise and how that plays out. Where might things go on a political time scale, such as five to ten years?
No election to date, including the recently concluded American election, has had the advancement of AI as a major theme. In fact, if you read what the heads of some major AI labs are saying in their public writings and interviews, you have to wonder why this is the case. For example, Dario Amodei, who was a graduate student in physics at Princeton—where I was director of graduate studies—may end up being the most significant physics Ph.D. from Princeton, perhaps forever. He wrote an essay saying that he expects the systems being built in labs like his to overtake human-level performance across the board as early as two years from now, followed by the proliferation of superhuman, or at least post-human, intelligent systems. This could lead to a hundred years’ worth of scientific progress occurring in about five to ten years. If this happens in two years, we will really have to wonder where we are.
For today’s discussion, we take a more modest baseline. The idea is that ten years from now, AI will reach a crossing point—surpassing even the smartest humans. Ten years is still remarkably short on human scales: a generation is 25 years, and political cycles are five years. The underlying mismatch is between chemistry-based life forms, which we are, and electronics-based systems, which run on the far faster time scales of electromagnetic interactions rather than chemistry. With this comes the notion of unique human skill hours. I don’t want to say people will become obsolete, but particular skills will become obsolete.
I teach physics, and even today, if you ask GPT-4 fairly sophisticated questions in physics, you can get answers that are not bad most of the time. The publicly available reasoning models can handle tasks that an entry-level graduate student might do, such as solving differential equations. So you could ask how many unique human skill hours remain and imagine that over the next ten years, those skills will decline. View that as the backdrop to the conversation we’re about to have. This process is about to start happening, and there’s no way it can proceed without intersecting with the political realm.
With that, maybe I can start with a high-level question: If all of this is going to happen, what might be a desirable trajectory for societies and humans, and what is a likely trajectory? What challenges do you see coming up? What are the signal events? How will people really notice that this is happening?
Dominic Cummings: There’s no way around it: the future is going to be unbelievably chaotic. If you look at how technologies have interacted with human events before—like the railway and the telegraph in the nineteenth century, or later with mass media—these big technological shifts always lead to huge changes in power. They’re unbelievably hard to predict. It’s very hard to know what will centralize and what will decentralize.
For one obvious example, I was reading accounts of French peasants who were drafted into Napoleon’s armies. They were thrown across the continent to Moscow and returned with no idea what the war was about. The whole thing was completely baffling to them. There was no real consensus reality in the Europe of 1800, given the structure of media, information, and technology at the time.
Fast-forward to, say, 1940: people like Stalin or the head of the BBC, or the head of one of the big American radio and then TV networks, could determine consensus reality for hundreds of millions of people. Stalin could make certain decisions, like removing people from photos or dictating what the Soviet encyclopedia would say. For a brief period, a very small number of people and centralized institutions exerted great power, creating a consensus reality among elites and the public.
What’s happening now seems to be moving toward a situation more like 1800 than 1950. Just ten years ago, if you looked at people like Barack Obama and Elon Musk, or at editorials in The Times, there was a clear consensus reality among elites. Now it’s completely fragmented. People like Musk and his network in Silicon Valley regard mainstream media as either mad or actively malign—something not to be trusted—as opposed to something that defines reality for them.
Old institutions, which were extremely friendly toward people like Musk five to ten years ago, now label them as fascists, deranged, or crazy. It’s become conventional wisdom among political journalists in Britain to disparage Musk as a tech company manager. Over the last three years, they’ve tweeted that he doesn’t understand tech companies and is completely useless. Meanwhile, Musk is achieving impressive feats, like launching skyscraper-sized rockets into space with precision.
You have this very odd situation. The same thing has happened with elections. Political elites herd toward certain opinions; those opinions clash with reality; the elites become confused and double down. We saw that with Brexit in 2016, with Trump in 2016, when my team was in Number 10 in 2019, and again with Trump now, as well as with Ukraine and Covid. There’s a repeated process where political and mainstream media elites think they’re the sensible ones, while the public are seen as idiots fooled by disinformation and tech companies. A growing counter-elite is emerging, saying that the old political systems are irredeemably corrupt and useless, and we need to build new things.
Looking at these trends, I think it’s almost inevitable that things will continue to be chaotic. The political world is determined not to change in various ways. In Britain, we’ve had the biggest pandemic in a century, the largest land war in Europe since Hitler, and Brexit before that. All three produced an extreme determination within Westminster to say: we will not compromise with the electorate, change tactics, or adapt our institutions to absorb these technological changes. In fact, it’s the exact opposite.
One simple story to illustrate: in 2020, I brought various people from physics, data science, and AI into Number 10, building a team to put elite technical talent at the PM’s disposal. We started this in January, before Covid began, but obviously Covid gave it a huge boost. This had a dramatic effect—extremely capable technical talent was sitting 30 meters away from the PM and available to give advice.
On Day One of the new government, when Keir Starmer arrived in Number 10, one of the first things that happened was that he was handed lots of documents to sign. One of them signed off on the destruction of this team, this unit. So the data science and AI team built to support the prime minister—a concept copied worldwide—is now being dismantled by the Cabinet Office. The people have been told they’re not welcome and are being pushed out. This illustrates the pathological nature of these old political institutions. Even as this revolution is happening, they see power as a zero-sum game and don’t want to open up political institutions to new tools, people, and talent, because it’s fundamentally threatening.
The core issue is that in the Cabinet Office and the Treasury, information is power. If the prime minister has access to the best information and tools that others can’t use or understand, it upends power relationships. Traditionally, the prime minister’s office isn’t in control of the government; the Cabinet Office and the Treasury are. Hence, destroying this team ensures that the old ways continue.
Shivaji Sondhi: Let me theorize a bit about what you said. Is it fair to say that some of this political chaos, not just in Britain but also in a deeply polarized America, and in Israel, at least until the war, is a result of the information revolution? The information revolution really dates to the 1990s. We’re now getting to AI, but we’re already 30 years into an enormous expansion of information flows. Is technology preparing the way for itself by making it impossible for normal political processes to work? Obviously, you’re suggesting there are better ways of doing it.
Dominic Cummings: I think a lot of these things are clearly happening across various countries and regimes. Covid is a very good example of that in many ways. Western bureaucracies failed in very similar ways almost everywhere, and they have behaved similarly in the aftermath almost everywhere.
Shivaji Sondhi: You’re highly pessimistic about whether existing states will be able to cope with AI.
Dominic Cummings: I think so. If you look at how they handled, for example, gain-of-function research—how they regulated it before the pandemic—it was already completely insane and disastrous. You’d have thought Covid would have been a wake-up call to actually get a grip on gain-of-function research. But no, of course not. Instead, we’ve continued with these completely mad attitudes toward that kind of research. The CDC and the FDA in America have continued to block or undermine a lot of proper inspections. We’ve seen massive cover-ups in bureaucracies across the Western world regarding how they handled all of this and what they funded.
Again, gain-of-function research—compared with AI—is almost trivially simple. It’s a matter for experts to sit down, weigh the pros and cons, decide how to regulate it, and then do so. It’s relatively straightforward compared with the extraordinary issues surrounding AI, which involve a whole range of complete unknowns that we can’t even sensibly weigh yet. Even the greatest experts on the planet, the ones actually building these systems, completely disagree with one another. So, if these old, pathological political institutions can’t cope with relatively simple things like gain-of-function research—or even have honest debates about their own incompetence—then it seems practically impossible to imagine them coping with AI.
Shivaji Sondhi: Let’s consider the international implications. Both Dario Amodei’s essay and another one by Leopold Aschenbrenner—called “Situational Awareness”—suggest that as this AI explosion unfolds and generates an enormous amount of latent power, the United States, leading the Western world, should race ahead of China, given the current international situation. The goal might be to secure perhaps a two-year lead, which, with superintelligent systems, could be equivalent to a 20- or 50-year lead. Then, from that position of strength, the U.S. could negotiate with China. Is that at all plausible? You can address both the desirability of such a course and whether it’s even possible for the very states you describe—those seemingly incapable of basic tasks—to pull off anything like this.
Dominic Cummings: If you read some classic books, like Now It Can Be Told by General Groves (who was Oppenheimer’s boss and ran the Manhattan Project), or you look at how the American state functioned decades ago—running Apollo, the ICBM program, ARPA, Xerox PARC—you’ll see that that America doesn’t exist anymore. Part of the reason is that the “General Groves-type people” are now people like Elon Musk or Patrick Collison; they’re off building SpaceX, Stripe, and so on.
One key factor in our current predicament is a massive “talent collapse” in Western politics, which is dramatic by historical standards. In the past, a significant portion of the intellectual and practical elite, the people who could actually build things, were directly involved with political power. For instance, if you read about Whitehall in the 1790s and early 1800s, it’s astonishing how similar the Whitehall of 1795 was to the SpaceX of 2024, whereas the Whitehall of 2024 is nothing like SpaceX. Stories from the 1790s have a real Silicon Valley feel: Prime Minister Pitt and his ministers would call people in, promote young talent, and move incredibly fast. Their motto could have been Marc Andreessen’s “build, build, build.” They created extraordinary state capacity by closely linking the political and Whitehall communities with the private sector. Procurement was taken extremely seriously, with constant debates, and people got jailed for corruption. That’s inconceivable now, when procurement is a disaster and failing officials are given seats in the House of Lords rather than getting locked up.
That’s one huge shift. The America of 1950 is no longer running Washington. Leopold Aschenbrenner’s essay argues that the U.S., and the West broadly, should race ahead of China as the AI explosion unfolds. Leopold is a thousand times smarter than I am, and there’s a lot of interesting material there. But I think many of these people need to read the same classics my old tutor had me read. If you look at Thucydides’ History of the Peloponnesian War and see how that conflict grew out of escalating tensions, it’s hard to imagine a scenario where the U.S. says, “We’re just going to build this superintelligence that renders all existing military technology irrelevant. Don’t worry, we’re the good guys,” and then explains to 1.4 billion Chinese how the new world order will be structured—while expecting them to agree and negotiate later. That’s never going to happen. China would take extreme measures to stop it, because it would perceive it as yet another instance of Western hypocrisy and conniving.
We don’t even have to consider the further-out scenarios of what these AI systems could do. We already know from history—and from declassified archives—how events like the Cuban Missile Crisis nearly spiraled out of control, largely thanks to hidden tactical nukes, miscommunications, and rampant uncertainty. Time and again we see governments miscalculate, then discover decades later how wrong their assumptions were—1914, 1939, Cuba, and so on. Now add nuclear weapons (a million times worse than anything in 1914) and 20-minute decision windows instead of two months.
On top of that, imagine the confusion caused by advanced AI: “Are they even in control of their systems? Who wrote that press release—the government or an AI? Who’s really operating their cyber operations?” We’re already seeing how miscalculations around someone like Putin—who is essentially a black box to Western intelligence—can set off a chain of confusion in Ukraine. Now overlay that with AI-accelerated automation and even less clarity about who’s in command.
It seems obvious we shouldn’t start another escalating crisis with China. Actually, we should do the opposite. Lee Kuan Yew’s memoirs about Sino-American relations are instructive. For decades, there was bipartisan agreement in Washington about “One China,” with the preference that any reunification of Taiwan happen peacefully, not through violence. However, President Biden muddied the waters over the last few years, seemingly changing that policy but then having his statements walked back by the White House—leaving China uncertain about U.S. intentions. If you look at the debate in Washington, it’s clear attitudes have shifted: there’s much more talk of “We can’t let Taiwan be unified with China.” Beijing hears this as “We intend to destroy China.” Factor in Xi’s timeline on Taiwan, the military’s viewpoint, the critical importance of semiconductors (TSMC), and AI’s rapid development, and you have the recipe for a classic catastrophe.