
Keep The Future Human

This essay makes the case for why and how we should close the gates to AGI and superintelligence, and what we should build instead.

If you just want the key takeaways, go to the Executive summary. Chapters 2-4 provide background on the types of AI systems discussed in the essay. Chapters 5-7 explain why we might expect AGI to arrive soon, and what might happen when it does. Finally, Chapters 8-9 outline a concrete proposal to prevent AGI from being built.

Download PDF

Total reading time: 2-3 hours

Chapter navigation

Executive summary

A high-level overview of the essay. If you're short on time, get all the main points in just 10 minutes.

Chapter 1: Introduction

How we will respond to the prospect of smarter-than-human AI is the most pressing issue of our time. This essay provides a path forward.

Chapter 2: Need-to-knows about AI neural networks

How do modern AI systems work, and what might be coming in the next generation of AIs?

Chapter 3: Key aspects of how modern general AI systems are made

The world's most cutting-edge AI systems are made using surprisingly similar methods. Here are the basics.

Chapter 4: What are AGI and superintelligence?

What exactly are the world's biggest tech companies racing to build behind closed doors?

Chapter 5: At the threshold

The path from today's AI systems to fully fledged AGI seems shockingly short and predictable.

Chapter 6: The race for AGI

What are the driving forces behind the race to build AGI, for both companies and countries?

Chapter 7: What happens if we build AGI on our current path?

Society isn't ready for AGI-level systems. If we build them very soon, things could get ugly.

Chapter 8: How to not build AGI

AGI is not inevitable – today we stand at a fork in the road. This chapter presents a proposal for how we could prevent it from being built.

Chapter 9: Engineering the future: what we should do instead

AI can do incredible good in the world. To get all of the benefits without the risks, we must ensure that AI remains a human tool.

Chapter 10: The choice before us

To preserve our human future, we must choose to close the Gates to AGI and superintelligence.

Appendixes

Supplementary information, including technical details on compute accounting, an example implementation of a 'gate closure', details of a strict AGI liability regime, and a tiered approach to AGI safety and security standards.

Acknowledgements

A few thank-yous to people who contributed to Keep The Future Human.

Please submit feedback and corrections to taylor@futureoflife.org.
