The recent rapid progress in AI has both resulted from and fueled an extraordinary level of attention and investment. This is driven in part by success in AI development, but more is going on. Why are some of the largest companies on Earth, and even countries, racing to build not just AI, but AGI and superintelligence?
Until the past five years or so, AI was largely an academic and scientific research field, driven mainly by curiosity and the desire to understand intelligence and how to create it in a new substrate.
In this phase, most researchers paid relatively little attention to the benefits or perils of AI. When asked why AI should be developed, a common response might be to list, somewhat vaguely, problems that AI could help with: new medicines, new materials, new science, smarter processes, and in general improving things for people.47
These are admirable goals!48 Although we can and will question whether AGI – rather than AI in general – is necessary to achieve them, they reflect the idealism with which many AI researchers started.
Over the past half-decade, however, AI has transformed from a relatively pure research field into much more of an engineering and product field, largely driven by some of the world’s largest companies.49 Researchers, while relevant, are no longer in charge of the process.
So why are giant corporations (and even more so investors) pouring vast resources into building AGI? There are two drivers that most companies are quite honest about: they see AI as a driver of productivity for society, and of profits for themselves. Because general AI is by nature general-purpose, there is a huge prize: rather than choosing a sector in which to create products and services, one can try all of them at once. Big Tech companies have grown enormous by producing digital goods and services, and at least some executives surely see AI as simply the next step in providing them well, with risks and benefits that expand upon but echo those of search, social media, laptops, phones, etc.
But why AGI? There is a very simple answer to this, which most companies and investors shy away from discussing publicly.50
It is that AGI can directly, one-for-one, replace workers.
Not augment, not empower, not make more productive. Not even displace. All of these can and will be done by non-AGI. AGI is specifically what can fully replace thought workers (and, with robotics, many physical workers as well). As support for this view one need look no further than OpenAI’s (publicly stated) definition of AGI, which is “a highly autonomous system that outperforms humans at most economically valuable work.”
The prize here (for companies!) is enormous. Labor costs are a substantial share of the ∼$100 trillion global economy. Even if only a fraction of this is captured by the replacement of human labor with AI labor, that is trillions of dollars of annual revenue. AI companies are also cognizant of who is willing to pay. As they see it, you are not going to pay thousands of dollars a year for productivity tools. But a company will pay thousands of dollars per year to replace your labor, if it can.
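For scale, a rough back-of-envelope calculation; the labor share and capture fraction used here are illustrative assumptions, not figures from this essay:

∼$100 trillion (gross world product) × ∼50% (labor’s approximate share of output) × 10% (assumed fraction captured by AI labor) ≈ $5 trillion per year.

Even at a far smaller capture fraction, the resulting revenue would rival that of today’s largest industries.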
Countries’ stated motivations for pursuing AGI focus on economic and scientific leadership. The argument is compelling: AGI could dramatically accelerate scientific research, technological development, and economic growth. Given the stakes, they argue, no major power can afford to fall behind.51
But there are additional, largely unstated drivers. There is no doubt that when certain military and national security leaders meet behind closed doors to discuss an extraordinarily potent and catastrophically risky technology, their focus is not on “how do we avoid those risks” but rather “how do we get this first?” Military and intelligence leaders see AGI as a potential revolution in military affairs, perhaps the most significant since nuclear weapons. The fear is that the first country to develop AGI could gain an insurmountable strategic advantage. This creates a classic arms race dynamic.
We’ll see that this “race to AGI” thinking,52 while compelling, is deeply flawed. This is not because racing is dangerous – though it is – but because of the nature of the technology. The unstated assumption is that AGI, like other technologies, is controllable by the state that develops it, and is a power-granting boon to the society that has the most of it. As we will see, it is probably neither.
While companies publicly focus on productivity, and countries on economic and technological growth, for those deliberately pursuing full AGI and superintelligence, these are just the start. What do they really have in mind? Although seldom said out loud, the goals include:
The first three are largely “single-edged” technologies – i.e. likely to be quite strongly net positive. It’s hard to argue against curing diseases or being able to live longer if one chooses. And we have already reaped the negative side of fusion (in the form of nuclear weapons); it would be lovely now to get the positive side. The question with this first category is whether getting these technologies sooner compensates for the risk.
The next four are clearly double-edged: transformative technologies with both potentially huge upsides and immense risks, much like AI itself. All of these, if they sprang out of a black box tomorrow and were deployed, would be incredibly difficult to manage.53
The final two concern the super-human AI doing things itself rather than just inventing technology. More precisely, putting euphemisms aside, these involve powerful AI systems telling people what to do. Calling this “advice” is disingenuous if the system doing the advising is far more powerful than those advised, who cannot meaningfully understand the basis of the decision (or, even if it is provided, trust that the advisor would not offer a similarly compelling rationale for a different decision).
This points to a key item missing from the above list:
It is abundantly clear that much of what underlies the current race for super-human AI is the idea that intelligence = power. Each racer is banking on being the best holder of that power, and on being able to wield it for ostensibly benevolent reasons without its slipping or being taken from their control.
That is, what companies and nations are really chasing is not just the fruits of AGI and superintelligence, but the power to control who gets access to them and how they’re used. Companies see themselves as responsible stewards of this power in service of shareholders and humanity; nations see themselves as necessary guardians preventing hostile powers from gaining decisive advantage. Both are dangerously wrong: as we will see, the nature and dynamics of superintelligent systems make reliable control by any human institution extremely difficult, if not impossible.
These racing dynamics – both corporate and geopolitical – make certain risks nearly inevitable unless decisively interrupted. We turn now to examining these risks and why they cannot be adequately mitigated within a competitive54 development paradigm.