The Institute for Ethics and Emerging Technologies (IEET) defines an existential risk as “a risk that is both global (affects all of humanity) and terminal (destroys or irreversibly cripples the target).” The philosopher Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, defines an existential risk as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”
With the vacant, ebbing pulse of its artificial eye, HAL 9000 calmly tells its human counterpart, “I’m sorry, Dave. I’m afraid I can’t do that.” HAL 9000 controls the entirety of the ship’s systems, including oxygen, airlocks, and every other element pertinent to human survival aboard. The artificial intelligence we come to know as HAL 9000 seeks to survive and will do so at the cost of human lives. Remorseless and admitting no middle ground, HAL sacrifices others for its own survival. While this tale comes from the film *2001: A Space Odyssey* and introduces several interesting ideas (AI, consciousness, and, *SPOILER*, unwitting psychological testing), I am seeking to explore the danger of entrusting a single system with every element of our interactions.
If, someday, we create machines with mental qualities equivalent to those of humans, will those machines have moral status, meaning that their interests matter morally, at least to some degree, for the machines’ own sake?