What makes humanity worth protecting?
Ord’s metaphor of the Precipice paints a startling picture of humanity: in his view, we stand at the very brink of possible civilizational collapse. What makes the metaphor so powerful, however, is what Ord imagines lies beyond the Precipice, if we make it safely past the ledge. As the book progresses through the various existential risks facing humanity today (both natural and anthropogenic), one cannot help but hope that humanity dodges these risks and reaches wonders, beauties, and experiences that we cannot even begin to imagine in the present moment.
The book has three major components: the Stakes, the Risks, and the Path Forward. In the Stakes, Ord expertly guides us through the major historical periods in the development of our current civilization. He shows how the world is better now than ever before (in an argument reminiscent of Steven Pinker), and how the rate of technological progress, which has been responsible for our dramatic increase in quality of life, will only rise. Once the stage is set, Ord provides a surprisingly robust analysis of the many dangers humanity currently faces (the Risks), and offers ideas and best practices for mitigating them (the Path Forward).
A key point we should understand: this book is concerned only with existential risk, the risk that humanity loses its full potential. Without defining “full potential” too explicitly, Ord sees two main ways in which we could lose access to it: the extinction of humanity or irrevocable civilizational collapse. By civilizational collapse, he means a situation in which our ability to form concrete social institutions, and any sort of organized culture, is forever lost to us. There are also other, less obvious ways of losing our full potential, such as being locked into certain systems of order (e.g., a dystopian state) that would serve only as instruments of oppression and prevent us from continuing to develop advanced technology or culture.
I thoroughly enjoyed this book; it balances a wide breadth of knowledge with a stunning amount of detail. It pushed me to zoom out and think deeply about our long-term future, and I found myself pondering several different lines of thought as I moved through it.
The first, and perhaps most obvious, is whether the destruction of humanity’s potential would be as terrible as Ord describes. This thought was inspired by a New York Times op-ed written a few years ago by a philosopher, who argued that if all of humanity were to disappear, instantaneously and painlessly, the entire affair would not be so tragic.
The earth would heal itself. Animals would roam free. There would be less suffering. This idea tickled my mind for most of the book, and I oftentimes doubted the importance of Ord’s arguments.
However, Ord makes clear that this perspective is flawed: humans occupy a special place in the animal kingdom, being (most likely) the only beings gifted with thought, advanced language, and civilization. These gifts have in turn created concepts such as Truth, Beauty, and Justice, of which only we are aware. Whether these ideals exist outside the confines of our own minds is an interesting discussion in its own right, but if we take them to be intrinsically good, then it is easy to say that we have a responsibility to advance them, master them, and build a universe where they are prevalent.
Still, I have reservations. Ord argues that we often misperceive the scales of time that stretch ahead of and behind us. There are potentially billions upon billions of people who have not yet come into existence, likely orders of magnitude more than the number of people alive now or who have ever lived. This fact alone would make prioritizing the prosperity and health of future generations of utmost importance. Arguments like this are not new in philosophy; the idea of “possible” persons was introduced even in a bioethics course I took in undergrad.
I hesitate, however, to accept the idea that possible persons have moral standing, or that their interests carry any significant weight. Imagine you and your significant other are thinking about having a child: there exists a possible person. For whatever reason, one of you decides they no longer want a child; in fact, they leave the relationship altogether. If the interests of possible people matter, then we should count two people as harmed: the person remaining in the relationship and the unborn child who is effectively prevented from ever existing. Yet it seems obvious that only one person is harmed: the living one.
The example above is not airtight, but it gets at my general point: it is unclear that we should (let alone must) protect future people simply on the basis that they might exist. Ord does cite another reason they matter, however: we should seek to lengthen the amount of time humanity exists. But more is not necessarily better.
It is entirely possible, and indeed easily conceivable, that our long-term future could turn out to be quite negative. It may even be the case that we are at the zenith of civilization now.
However, I will stop my objections there. Ord is undoubtedly aware of them, and after reading the book in its entirety, I have mostly come around to his way of thinking. The most persuasive argument in his favor is to consider what we currently do not know. There are important and serious long-term projects that simply require time, cancer research being perhaps the most obvious example. With enough time, we could unlock the secrets of the human mind and of the universe, and proceed to build a civilization filled with happiness, equality, and meaning. The mere possibility of this is enough for us to start taking our future more seriously. We should strive to protect it, not because survival is a basic human instinct, but because we have a mission to complete: to create a flourishing human race, one that transcends the limits of our consciousness today and is capable of experiencing states of mind that are unimaginable to us now.
The problem, and the central focus of this book, is that we are in grave danger of losing exactly that sort of future. One of the major risks Ord sees is biology: genome sequencing, gene drives, and biological warfare. These are all areas where further research can go drastically wrong, especially if we lack the proper regulation and wisdom to continue pursuing these lines of inquiry. Ord cites a variety of examples of how each of these areas of exploration could lead to disaster. Reading through them, I realized that they share a common theme: capitalism drives companies to grow at unhealthy rates, and, given the profitability of technology that also happens to be extremely dangerous, we are rapidly compounding our existential risk.
These biological risks represent only one major category of our total existential risk. Of much greater concern to Ord is the development of Artificial General Intelligence (AGI). AGI is a topic of great interest to me, and an area I hope one day to pursue professionally. Unfortunately, Ord (along with many others) worries that such a development would ruin us. The argument is fairly straightforward. AI systems require a cost function, a numeric way to measure performance: they “learn” by experimenting and receiving feedback on their performance through that cost function. But what would the cost function for human behavior be? The key idea, which Ord makes clear, is that human values are simply too complex. It is impossible (at least for now) to build an AI that could realistically make decisions in a way that is agreeable to us, because we are incapable of producing a single number that represents the total cost or benefit of every action.
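To make the idea concrete, here is a minimal sketch of my own (not from the book) of learning driven by a single scalar cost function. The cost function and its target are entirely invented; the point is that whatever the single number rewards is exactly, and only, what the system pursues:

```python
# A toy sketch of cost-function-driven learning. The "ideal" action of 3.0
# is an arbitrary stand-in for whatever the designer encoded as good.

def cost(action: float) -> float:
    """Hypothetical scalar cost: distance of the action from some encoded ideal."""
    return (action - 3.0) ** 2

def learn(steps: int = 100, lr: float = 0.1) -> float:
    action = 0.0
    for _ in range(steps):
        # Finite-difference estimate of the cost gradient: the "feedback".
        grad = (cost(action + 1e-5) - cost(action - 1e-5)) / 2e-5
        action -= lr * grad  # adjust the action to reduce cost
    return action

print(learn())  # converges near 3.0, the one thing the scalar rewards
```

If the scalar misstates what we actually value, the optimizer faithfully pursues the misstatement; that is the alignment worry in miniature.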
I think there is still hope, despite these limitations. While a single cost function might be too simplistic for human behavior, what about 10? 100? 10,000? We could run a secondary optimization algorithm on top of these cost functions, potentially recursively, to create more and more complex machines. It may become exceedingly difficult to understand how they work, but it could in theory be done. More pressingly, I think AGI will ultimately be required to solve the very risks that Ord is concerned about. We are utterly lacking the wisdom to tackle these future risks, and it does not seem likely that our rate of technological progress will slow in the coming years. We therefore need a shortcut, a way to arrive at the necessary solutions without needing centuries to mature. AGI would serve as that shortcut.
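As a toy illustration of the “many cost functions” idea, consider scalarizing several objectives with weights and letting a crude outer loop search over the weightings themselves. Every objective and name here is invented for the sake of the sketch; note that the outer loop still needs some hand-written score to judge weightings by, which is where the original difficulty reappears:

```python
import itertools

# Three invented objectives, each a cost to minimize over a 1-D "action".
def safety(x):    return (x - 1.0) ** 2
def honesty(x):   return (x - 2.0) ** 2
def wellbeing(x): return (x - 4.0) ** 2

OBJECTIVES = [safety, honesty, wellbeing]

def best_action(weights, candidates):
    # Inner optimization: pick the action minimizing the weighted sum of costs.
    return min(candidates,
               key=lambda x: sum(w * f(x) for w, f in zip(weights, OBJECTIVES)))

def meta_search(candidates):
    # Outer ("secondary") optimization: grid search over weightings, scored by
    # a stand-in approval proxy -- itself just another hand-written function,
    # which is exactly the unresolved difficulty.
    def approval(x):
        return -abs(x - 2.5)  # hypothetical: humans "prefer" actions near 2.5
    best = None
    for w in itertools.product([0.1, 0.5, 1.0], repeat=3):
        a = best_action(w, candidates)
        if best is None or approval(a) > approval(best):
            best = a
    return best

actions = [i / 10 for i in range(51)]  # candidate actions from 0.0 to 5.0
print(meta_search(actions))  # lands near 2.5, wherever the proxy points
```

The recursion the paragraph imagines would stack further such loops on top; each layer buys expressiveness at the price of interpretability.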
I will close out the discussion of the book with two points. First, while I agree with Ord’s general sentiment, I think he is overly cautious. He proposes slowing down technology, putting more safeguards into place, and having more discussions. I agree with these moves, but it is also important to take serious action. It is true that taking action, such as settling another planet, may close off or open doors for us as a species, possibly limiting our full potential, so it would be important to reflect deeply on what a correct course of action looks like. The flip side, however, is that if we wait too long, those doors will be closed for us: natural constraints may prevent us from leaving, resources may dry up, public sentiment may change. Second, I think that despite all of the reasons Ord puts forward, the only reason we have to take the idea of existential risk seriously is our capacity to engineer the human brain.
Pushing the limits of our intelligence, creativity, and emotion is the only thing worth buying time for. The mere survival of the human race is not what matters; what matters is that we achieve great subjective experiences. What would be the point of never going extinct if every one of us were completely miserable? Of course, the human race would still be worth maintaining even if we could not engineer the human brain; after all, the world is far from perfect, and a perfected world would, in theory, hold more happiness than the one we live in now. But that would mean we should focus on perfecting our world rather than on these long-term existential risks. If we discover some physical or biological barrier to seriously rewiring the human brain, then I am not sure what the value would be in planning decades-long trips to other planets and galaxies merely to keep our civilization intact.
I see great value in everything humans do, and I agree with Ord that we hold a special place among the stars, as possibly the only beings capable of carrying life forward. But what makes our lives so great is our subjective experience, and that is what we should seek, at all costs, to protect and build.