
Chapter 1: Introduction

How we will respond to the prospect of smarter-than-human AI is the most pressing issue of our time. This essay provides a path forward.

We may be at the end of the human era.

Something has begun in the past ten years that is unique in the history of our species. Its consequences will, to a great extent, determine the future of humanity. Since around 2015, researchers have succeeded in developing narrow artificial intelligence (AI) – systems that can win at games like Go, recognize images and speech, and so on, better than any human.1

This is an amazing success, and it is yielding extremely useful systems and products that will empower humanity. But narrow artificial intelligence has never been the true goal of the field. Rather, the aim has been to create general-purpose AI systems – often called “artificial general intelligence” (AGI) or “superintelligence” – that are as good as or better than humans across nearly all tasks simultaneously, just as AI is now superhuman at Go, chess, poker, drone racing, etc. This is the stated goal of many major AI companies.2

These efforts are also succeeding. General-purpose AI systems like ChatGPT, Gemini, Llama, Grok, Claude, and DeepSeek, based on massive computations and mountains of data, have reached parity with typical humans across a wide variety of tasks, and even match human experts in some domains. Now AI engineers at some of the largest technology companies are racing to push these giant experiments in machine intelligence to the next levels, at which they match and then exceed the full range of human capabilities, expertise, and autonomy.

This is imminent. Over the last ten years, expert estimates for how long this will take – if we continue our present course – have fallen from decades (or centuries) to single-digit years.

It is also of epochal importance and transcendent risk. Proponents of AGI see it as a positive transformation that will solve scientific problems, cure disease, develop new technologies, and automate drudgery. And AI could certainly help to achieve all of these things – indeed it already is. But over the decades, many careful thinkers, from Alan Turing to Stephen Hawking to present-day researchers Geoffrey Hinton and Yoshua Bengio,3 have issued a stark warning: building truly smarter-than-human, general, autonomous AI will at minimum completely and irrevocably upend society, and at maximum result in human extinction.4

Superintelligent AI is rapidly approaching on our current path, but is far from inevitable. This essay is an extended argument as to why and how we should close the Gates to this approaching inhuman future, and what we should do instead.


  1. Rapid progress in narrow AI across a wide range of benchmark tasks has surprised even experts in the field, with benchmark after benchmark being surpassed years ahead of predictions.
  2. DeepMind, OpenAI, Anthropic, and xAI were all founded with the specific goal of developing AGI. For instance, OpenAI’s charter explicitly states its goal as developing “artificial general intelligence that benefits all of humanity,” while DeepMind’s mission is “to solve intelligence, and then use that to solve everything else.” Meta, Microsoft, and others are now pursuing substantially similar paths. Meta has said that it plans to develop AGI and release it openly.
  3. Hinton and Bengio are two of the most-cited AI researchers; both have won the Turing Award – the AI field’s equivalent of the Nobel – and Hinton has won an actual Nobel Prize (in physics) to boot.
  4. Building something this risky, under commercial incentives and near-zero government oversight, is utterly unprecedented. There isn’t even controversy about the risk among those building it! The leaders of DeepMind, OpenAI, and Anthropic, among many other experts, have all literally signed a statement that advanced AI poses an extinction risk to humanity. The alarm bells could not be ringing any harder, and one can only conclude that those ignoring them simply are not taking AGI and superintelligence seriously. One goal of this essay is to help them understand why they should.
