Self-Driving Cars and Moral Dilemmas with No Solutions

The dreadful circumstances that once demanded traumatic snap decisions from human drivers still pose moral dilemmas, but the dilemmas have changed character: they are now design problems to be confronted in advance. Whatever decisions are made will be programmed into self-driving cars’ software, codifying “solutions” to unsolvable moral problems.
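To make the point concrete, here is a purely hypothetical sketch, in Python, of what “codifying” such a decision might look like. The scenario fields, the harm scores, and especially the `occupant_weight` parameter are invented for illustration; they correspond to no real vehicle’s software. The point is that any value chosen for the weight settles the dilemma one way or the other at design time.

```python
from dataclasses import dataclass

@dataclass
class Outcome:
    """One possible result of an unavoidable-crash maneuver (hypothetical)."""
    description: str
    occupant_harm: float   # expected harm to the car's occupants, 0..1
    bystander_harm: float  # expected harm to people outside the car, 0..1

def choose_maneuver(outcomes: list[Outcome], occupant_weight: float = 1.0) -> Outcome:
    """Pick the outcome with the lowest weighted expected harm.

    The weighting itself is the moral "solution" being codified:
    occupant_weight > 1 privileges passengers, < 1 privileges bystanders.
    There is no value-neutral choice for this parameter.
    """
    return min(
        outcomes,
        key=lambda o: occupant_weight * o.occupant_harm + o.bystander_harm,
    )

if __name__ == "__main__":
    scenarios = [
        Outcome("swerve into barrier", occupant_harm=0.8, bystander_harm=0.0),
        Outcome("brake straight ahead", occupant_harm=0.1, bystander_harm=0.6),
    ]
    # Whoever sets occupant_weight has answered the moral question in advance.
    print(choose_maneuver(scenarios, occupant_weight=1.0).description)
```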


Morbid Futurism: Man-Made Existential Risks

The Institute for Ethics and Emerging Technologies (IEET) defines an existential risk as “a risk that is both global (affects all of humanity) and terminal (destroys or irreversibly cripples the target).” The philosopher Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, defines an existential risk as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”

The Case for Moral Doubt

If [Richard] Feynman can be so open to doubt about empirical matters, then why is it so hard to doubt our moral beliefs? Or, to put it another way, why does uncertainty about how the world is come easier than uncertainty about how the world, from an objective stance, ought to be?

Thomas Nagel on Moral Luck

If the idea that moral judgment is appropriate only for things people control is applied consistently, Nagel argues, it leaves few moral judgments intact. “Ultimately,” he says, “nothing or almost nothing about what a person does seems to be under his control.” Most of what people are praised or blamed for is a matter of luck.