Unless you’re a complete Luddite, you probably agree that technological progress has been largely beneficial to humanity. If not, then close your web browser, unplug your refrigerator, cancel your doctor’s appointment, and throw away your car keys.
For hundreds of thousands of years, we’ve been developing tools to help us live more comfortably, be more productive, and overcome our biological limitations. There has always been opposition to certain spheres of technological progress (e.g., the real Luddites), and there will likely always be opposition to specific technologies and applications (e.g., human cloning and genetically modified organisms), but it’s hard to imagine someone today who is sincerely opposed to technological progress in principle.
For all its benefits, however, technology also comes with risks. The ubiquity of cars puts us at risk of crashes. The development of new drugs and medical treatments puts us at risk of adverse reactions, including death. The integration of the internet into more and more areas of our lives increases our risk of privacy breaches and identity theft. Virtually no technology is risk-free.
But there’s a category of technological risk that is far more significant: existential risk. The Institute for Ethics and Emerging Technologies (IEET) defines existential risk as “a risk that is both global (affects all of humanity) and terminal (destroys or irreversibly cripples the target).” The philosopher Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, defines it as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”
As the IEET points out, quite a few existential risks are already present in the universe: gamma-ray bursts, huge asteroids, supervolcanoes, and extraterrestrial life (if it exists). But the human quest to tame the universe for humanity’s own ends has created new ones.
Anthropogenic Existential Risks
The Future of Life Institute lists four man-made existential risks: nuclear weapons, climate change, artificial intelligence, and biotechnology.
A nuclear weapon hasn’t been used in war since the United States bombed Hiroshima and Nagasaki in World War II, and the global nuclear stockpile has been reduced by 75% since the end of the Cold War. But there are still enough warheads in existence to destroy humanity. Should a global nuclear war break out, a large share of the human population would be killed outright, and the nuclear winter that followed would kill most of the rest.
The scientific consensus is that human activity is the cause of rising global average temperatures. As temperatures rise, extreme storms, droughts, floods, more intense heat waves, and other harmful effects will become more common. These effects are unlikely to pose an existential risk in themselves, but the chaos they induce could. Food, water, and housing shortages could lead to pandemics and other devastation. They could also engender economic instability, increasing the likelihood of both conventional and nuclear war.
It remains to be seen whether a superintelligent machine or system will ever be created, and it’s an open question whether such artificial superintelligence, should it ever be achieved, would be bad for humanity. Some theorize that it’s all but guaranteed to be an overwhelmingly positive development. There are, however, at least two ways in which artificial intelligence could pose an existential risk.
For one, it could be programmed to kill humans. Autonomous weapons are already being developed and deployed, and there is a risk that, as they become more advanced, they could escape human control, leading to an AI war with catastrophic casualties. Another risk is that we create artificial intelligence for benevolent purposes but fail to fully align its goals with our own. We might, for example, task a superintelligent system with a large-scale geoengineering project without appreciating the creative, yet destructive, ways it will accomplish its goal. In its quest to complete the project efficiently, the system might destroy our ecosystem, and then destroy us when we attempt to stop it.
The promises of biotechnology are undeniable, but its advances also present significant dangers. Genetic modification of organisms (e.g., gene drives) could profoundly disrupt existing ecosystems if proper precautions aren’t taken. Genetically modifying humans could have extremely negative unforeseen consequences. And perhaps most unnerving is the possibility of a lethal pathogen escaping the lab and spreading across the globe. Scientists engineer very dangerous pathogens in order to understand and control those that occur naturally. This research is done in highly secure laboratories with many layers of controls, but there is still a risk, however slight, of an accidental release. And because the technology and know-how are rapidly becoming cheaper and more widespread, there’s a growing risk that a malevolent group could “weaponize” and deploy deadly pathogens.
Are We Doomed?
These doomsday scenarios are by no means inevitable, but they should be taken seriously. The devastating potential of climate change is pretty well understood, and it’s up to us to do what we can to mitigate it. The technology to bomb ourselves into oblivion has been around for almost 75 years. Whether we end up doing that depends on an array of geopolitical factors, but the only way to ensure that we don’t is to achieve complete global disarmament.
Many of the risks associated with artificial intelligence and biotechnology are contingent upon technologies that have yet to fully materialize. But the train is already moving, and it’s not going to stop, so it’s up to us to make sure it doesn’t veer down the track toward doom. As the Future of Life Institute puts it, “We humans should not ask what will happen in the future as if we were passive bystanders, when we in fact have the power to shape our own destiny.”