Morbid Futurism: Man-Made Existential Risks

Unless you’re a complete Luddite, you probably agree that technological progress has been largely beneficial to humanity. If not, then close your web browser, unplug your refrigerator, cancel your doctor’s appointment, and throw away your car keys.

For hundreds of thousands of years, we’ve been developing tools to help us live more comfortably, be more productive, and overcome our biological limitations. There has always been opposition to certain spheres of technological progress (e.g., the real Luddites), and there will likely always be opposition to specific technologies and applications (e.g., human cloning and genetically modified organisms), but it’s hard to imagine someone today who is sincerely opposed to technological progress in principle.

For all its benefits, however, technology also comes with risks. The ubiquity of cars puts us at risk for crashes. The development of new drugs and medical treatments puts us at risk for adverse reactions, including death. The integration of the internet into more and more areas of our lives puts us at greater risk for privacy breaches and identity theft. Virtually no technology is risk-free.

But there’s a category of risk associated with technology that is much more significant: existential risk. The Institute for Ethics and Emerging Technologies (IEET) defines existential risk as “a risk that is both global (affects all of humanity) and terminal (destroys or irreversibly cripples the target).” The philosopher Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, defines it as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”

As IEET points out, there are quite a few existential risks already present in the universe – gamma ray bursts, huge asteroids, supervolcanoes, and extraterrestrial life (if it exists). But the human quest to tame the universe for humanity’s own ends has created new ones.

Anthropogenic Existential Risks

The Future of Life Institute lists the following man-made existential risks.

Nuclear Annihilation

A nuclear weapon hasn’t been used in war since the United States bombed Hiroshima and Nagasaki in World War II, and the global nuclear stockpile has been reduced by 75% since the end of the Cold War. But there are still enough warheads in existence to destroy humanity. Should a global nuclear war break out, a large percentage of the human population would be killed, and the nuclear winter that followed would kill most of the rest.

Catastrophic Climate Change

The scientific consensus is that human activities are the cause of rising global average temperatures. And as temperatures rise, extreme storms, droughts, floods, more intense heat waves, and other negative effects will become more common. These effects in themselves are unlikely to pose an existential risk, but the chaos they may induce could. Food, water, and housing shortages could lead to pandemics and other devastation. They could also engender economic instability, increasing the likelihood of both conventional and nuclear war.

Artificial Intelligence Takeover

It remains to be seen whether a superintelligent machine or system will ever be created. It’s also an open question whether such artificial superintelligence would, should it ever be achieved, be bad for humanity. Some people theorize that it’s all but guaranteed to be an overwhelmingly positive development. There are, however, at least two ways in which artificial intelligence could pose an existential risk.

For one, it could be programmed to kill humans. Autonomous weapons are already being developed and deployed, and there is a risk that as they become more advanced, they could escape human control, leading to an AI war with catastrophic human casualties. Another risk is that we create artificial intelligence for benevolent purposes but fail to fully align its goals with our own. We may, for example, program a superintelligent system to undertake a large-scale geoengineering project but fail to appreciate the creative, yet destructive, ways it will accomplish its goals. In its quest to complete the project efficiently, the superintelligent system might destroy our ecosystem, and destroy us when we attempt to stop it.

Out-of-Control Biotechnology

The promises of biotechnology are undeniable, but advances also present significant dangers. Genetic modification of organisms (e.g., gene drives) could profoundly affect existing ecosystems if proper precautions aren’t taken. Further, genetically modifying humans could have extremely negative unforeseen consequences. And perhaps most unnerving is the possibility of a lethal pathogen escaping the lab and spreading across the globe. Scientists engineer very dangerous pathogens in order to learn about and control those that occur naturally. This type of research is done in highly secure laboratories with many layers of controls, but there’s still the risk, however slight, of an accidental release. And because the technology and understanding are rapidly becoming cheaper and more widespread, there’s a growing risk that a malevolent group could “weaponize” and deploy deadly pathogens.

Are We Doomed?

These doomsday scenarios are by no means inevitable, but they should be taken seriously. The devastating potential of climate change is pretty well understood, and it’s up to us to do what we can to mitigate it. The technology to bomb ourselves into oblivion has been around for almost 75 years. Whether we end up doing that depends on an array of geopolitical factors, but the only way to ensure that we don’t is to achieve complete global disarmament.

Many of the risks associated with artificial intelligence and biotechnology are contingent upon technologies that have yet to fully manifest. But the train is already moving, and it’s not going to stop, so it’s up to us to make sure it doesn’t veer down the track toward doom. As the Future of Life Institute puts it, “We humans should not ask what will happen in the future as if we were passive bystanders, when we in fact have the power to shape our own destiny.”

What Would Make Machines Matter Morally?

Imagine that you’re out jogging on the trails in a public park and you come across a young man sitting just off the path, tears rolling down his bruised pale face, his leg clearly broken. You ask him some questions, but you can’t decode his incoherent murmurs. Something bad happened to him. You don’t know what, but you know he needs help. He’s a fellow human being, and, for moral reasons, you ought to help him, right?

Now imagine the same scenario, except it’s the year 2150. When you lean down to inspect the man, you notice a small label just under his collar: “AHC Technologies 4288900012.” It’s not a man at all. It looks like one, but it’s a machine. Artificial intelligence (AI) is so advanced in 2150, though, that machines of this kind have mental experiences that are indistinguishable from those of humans. This machine is suffering, and its suffering is qualitatively the same as human suffering. Are there moral reasons to help the machine?

The Big Question

Let’s broaden the question a bit. If, someday, we create machines that have mental qualities equivalent to those of humans, would those machines have moral status, meaning their interests matter morally, at least to some degree, for the machines’ own sake?

Writing in the National Review, Wesley Smith, a bioethicist and senior fellow at the Discovery Institute’s Center on Human Exceptionalism, answers the question with a definitive “nope.”

“Machines can’t ‘feel’ anything,” he writes. “They are inanimate. Whatever ‘behavior’ they might exhibit would be mere programming, albeit highly sophisticated.” In Smith’s view, no matter how sophisticated the machinery, it would still be mere software, and it would not have true conscious experiences. The notion of machines or computers having human-like experiences, such as empathy, love, or joy, is, according to Smith, plain nonsense.

Smith’s view is not unlike that expressed in the philosopher John Searle’s “Chinese Room Argument,” which denies the possibility of computers having conscious mental states. Searle compares any possible form of artificial intelligence to an English-speaking person locked in a room with batches of Chinese writing and an English rulebook for manipulating the Chinese symbols. By following the rules, the person can produce Chinese responses convincing enough to pass for a native speaker’s, yet he still has absolutely no understanding of the Chinese language. The same would be true for any instance of artificial intelligence, Searle argues. It may be able to process input and produce satisfactory output, but that doesn’t mean it has cognitive states. Rather, it merely simulates them.

But denying the possibility that machines will ever attain conscious mental states doesn’t answer the question of whether they would have moral status if they did. Besides, whether computers could ever have mental lives is hardly a closed case. Smith, however, would rather not get bogged down in such “esoteric musing.” His test for determining whether an entity has moral status, resting on his assertion that machines never can have it, is the following question: “Is it alive, e.g., is it an organism?”

But wouldn’t an artificially intelligent machine, if it did indeed have a conscious mental life, be alive, at least in the relevant sense? If it could suffer, feel joy, and have a sense of personal identity, wouldn’t it pass Smith’s test for aliveness? I’m assuming that by “organism” Smith means a biological life form, but isn’t this an arbitrary requirement?

A Non-Non-Answer

In their entry on the ethics of artificial intelligence (AI) in the Cambridge Handbook of Artificial Intelligence, Nick Bostrom and Eliezer Yudkowsky grant that no current forms of artificial intelligence have moral status, but they take the question seriously and explore what would be required for them to have it. They highlight two commonly proposed criteria that are linked, in some way, to moral status:

  • Sentience – “the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer”
  • Sapience – “a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent”

If an AI system is sentient, that is, able to feel pain and suffer, but lacks the higher cognitive capacities required for sapience, it would have moral status similar to that of a mouse, Bostrom and Yudkowsky argue. It would be morally wrong to inflict pain on it absent morally compelling reasons to do so (e.g., early-stage medical research or an infestation that is spreading disease).

On the other hand, Bostrom and Yudkowsky argue, an AI system that has both sentience and sapience to the same degree that humans do would have moral status on par with that of humans. They base this assessment on what they call the principle of substrate non-discrimination: “if two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.” To conclude otherwise, Bostrom and Yudkowsky claim, would be akin to racism because “substrate lacks fundamental moral significance in the same way and for the same reason as skin color does.”

So, according to Bostrom and Yudkowsky, it doesn’t matter if an entity isn’t a biological life form. As long as its sentience and sapience are adequately similar to those of humans, there is no reason to conclude that a machine doesn’t have similar moral status. It is alive in the only sense that is relevant.

Of course, Smith, while denying the premise that machines could have sentience and sapience, might nonetheless insist that even if they achieved these characteristics, they still wouldn’t have moral status because they aren’t organic life forms. They are human-made entities whose existence is due solely to human design and ingenuity, and, as such, they do not deserve humans’ moral consideration.

Bostrom and Yudkowsky propose a second moral principle that addresses this type of rejoinder. Their principle of ontogeny non-discrimination states that “if two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.” Bostrom and Yudkowsky point out that this principle is accepted widely in the case of humans today: “We do not believe that causal factors such as family planning, assisted delivery, in vitro fertilization, gamete selection, deliberate enhancement of maternal nutrition, etc. – which introduce an element of deliberate choice and design in the creation of human persons – have any necessary implications for the moral status of the progeny.”

Even those who are opposed to human reproductive cloning, Bostrom and Yudkowsky point out, generally accept that if a human clone is brought to term, it would have the same moral status as any other human infant. There is no reason, they maintain, that this line of reasoning shouldn’t be extended to machines with sentience and sapience. Hence, the principle of ontogeny non-discrimination.

So What?

We don’t know if artificially intelligent machines with human-like mental lives are in our future. But Bostrom and Yudkowsky make a good case that should machines join us in the realm of consciousness, they would be worthy of our moral consideration.

It may be a bit unnerving to contemplate the possibility that the moral relationships between humans aren’t uniquely human, but our widening of the moral circle over the years to include animals has been based on insights very similar to those offered by Bostrom and Yudkowsky. And even if we never create machines with moral status, it’s worth considering what it would take for machines to matter morally to us, if only because such consideration helps us appreciate why we matter to one another.