We may be at the end of the human era.
Something has begun in the past ten years that is unique in the history of our species. Its consequences will, to a great extent, determine the future of humanity. Starting around 2015, researchers succeeded in developing narrow artificial intelligence (AI) – systems that can win at games like Go, recognize images and speech, and perform other well-defined tasks better than any human.1
This is an amazing success, and it is yielding extremely useful systems and products that will empower humanity. But narrow artificial intelligence has never been the true goal of the field. Rather, the aim has been to create general-purpose AI systems – often called “artificial general intelligence” (AGI) or “superintelligence” – that are simultaneously as good as or better than humans across nearly all tasks, just as AI is now superhuman at Go, chess, poker, drone racing, and so on. This is the stated goal of many major AI companies.2
These efforts are also succeeding. General-purpose AI systems like ChatGPT, Gemini, Llama, Grok, Claude, and DeepSeek, built on massive computations and mountains of data, have reached parity with typical humans across a wide variety of tasks, and even match human experts in some domains. Now AI engineers at some of the largest technology companies are racing to push these giant experiments in machine intelligence to the next levels, at which they match and then exceed the full range of human capabilities, expertise, and autonomy.
This is imminent. Over the last ten years, expert estimates of how long this will take – if we continue on our present course – have fallen from decades (or centuries) to single-digit years.
It is also of epochal importance and transcendent risk. Proponents of AGI see it as a positive transformation that will solve scientific problems, cure disease, develop new technologies, and automate drudgery. And AI could certainly help to achieve all of these things – indeed it already is. But over the decades, many careful thinkers, from Alan Turing and Stephen Hawking to present-day pioneers Geoffrey Hinton and Yoshua Bengio,3 have issued a stark warning: building truly smarter-than-human, general, autonomous AI will at minimum completely and irrevocably upend society, and at maximum result in human extinction.4
Superintelligent AI is rapidly approaching on our current path, but is far from inevitable. This essay is an extended argument as to why and how we should close the Gates to this approaching inhuman future, and what we should do instead.