The last time humanity shared the Earth with other minds that spoke, thought, built technology, and did general-purpose problem solving was 40,000 years ago in ice-age Europe. Those other minds went extinct, wholly or in part through the efforts of our own.
We are now re-entering such a time. The most advanced products of our culture and technology – datasets built from our entire internet information commons, and 100-billion-element chips that are the most complex technologies we have ever crafted – are being combined to bring advanced general-purpose AI systems into being.
The developers of these systems are keen to portray them as tools for human empowerment. And indeed they could be. But make no mistake: our present trajectory is to build ever-more powerful, goal-directed, decision-making, and generally capable digital agents. They already perform as well as many humans at a broad range of intellectual tasks, are rapidly improving, and are contributing to their own improvement.
Unless this trajectory changes or hits an unexpected roadblock, we will soon – in years, not decades – have digital intelligences that are dangerously powerful. Even in the best of outcomes, these would bring great economic benefits (at least to some of us), but only at the cost of a profound disruption to our society, and the replacement of humans in most of the most important things we do: these machines would think for us, plan for us, decide for us, and create for us. We would be spoiled, but spoiled children. Much more likely, these systems would replace humans in both the positive and negative things we do, including exploitation, manipulation, violence, and war. Can we survive AI-hypercharged versions of these? Finally, it is more than plausible that things would not go well at all: that relatively soon we would be replaced not just in what we do, but in what we are, as architects of civilization and the future. Ask the Neanderthals how that goes. Perhaps we provided them with extra trinkets for a while as well.
We don’t have to do this. We have human-competitive AI, and there’s no need to build AI with which we can’t compete. We can build amazing AI tools without building a successor species. The notion that AGI and superintelligence are inevitable is a choice masquerading as fate.
By imposing some hard, global limits, we can keep AI’s general capability at approximately human level, while still reaping the benefits of computers’ ability to process data in ways we cannot and to automate tasks none of us wants to do. Such systems would still pose many risks, but, designed and managed well, they would be an enormous boon to humanity, from medicine to research to consumer products.
Imposing limits would require international cooperation, but less than one might think, and those limits would still leave plenty of room for an enormous AI and AI hardware industry focused on applications that enhance human well-being, rather than on the raw pursuit of power. And if, with strong safety guarantees and after a meaningful global dialogue, we decide to go further, that option continues to be ours to pursue.
Humanity must choose to close the Gates to AGI and superintelligence.
To keep the future human.
Thank you for taking the time to explore this topic with us.
I wrote this essay because as a scientist I feel it is important to tell the unvarnished truth, and because as a person I feel it is crucial for us to act quickly and decisively to tackle a world-changing issue: the development of smarter-than-human AI systems.
If we are to respond to this remarkable state of affairs with wisdom, we must be prepared to critically examine the prevailing narratives that AGI and superintelligence ‘must’ be built to secure our interests, or are ‘inevitable’ and cannot be stopped. These narratives leave us disempowered, unable to see the alternative paths ahead of us.
I hope you will join me in calling for caution in the face of recklessness, and courage in the face of greed.
I hope you will join me in calling for a human future.
– Anthony