In his book Intuition Pumps and Other Tools for Thinking, Daniel Dennett provides four simple rules (adapted from those crafted by the game theorist Anatol Rapoport) for criticizing the views of those with whom you disagree.
The dreadful circumstances that have traditionally demanded traumatic snap decisions are now moral dilemmas that must be confronted. The difference is that these dilemmas are now design problems to be settled in advance, and whatever decisions are made will be programmed into self-driving cars’ software, codifying “solutions” to unsolvable moral problems.
The Institute for Ethics and Emerging Technologies (IEET) defines an existential risk as “a risk that is both global (affects all of humanity) and terminal (destroys or irreversibly cripples the target).” The philosopher Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, defines an existential risk as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”
The vacant, ebbing pulse of HAL 9000’s artificial eye calmly tells its human counterpart, “I’m sorry, Dave. I’m afraid I can’t do that.” HAL’s intelligence had overtaken the entirety of the ship’s systems, including oxygen, airlocks, and every other element pertinent to human survival aboard. The artificial intelligence we come to know as HAL 9000 seeks to survive and will do so at the cost of human lives. Remorseless and admitting no in-betweens, HAL sacrifices others for its own survival. While this tale resides in the film 2001: A Space Odyssey and introduces several interesting ideas (AI, consciousness, and *SPOILER* unwitting psychological testing), I am seeking to explore the danger of having a single system manage all the elements of our interactions.
If, someday, we create machines that have mental qualities equivalent to those of humans, would those machines have moral status, meaning their interests matter morally, at least to some degree, for the machines’ own sake?
In her article “Moral Outrage in the Digital Age,” psychologist Molly Crockett explores how the internet and digital media are transforming the way we express moral outrage.
Moral grandstanding is what others have come to call virtue signaling, but Tosi and Warmke (who don’t like that phrase) offer a more thorough examination of the phenomenon – a precise definition, examples of how it manifests, and an analysis of its ethical dimensions.
If [Richard] Feynman can be so open to doubt about empirical matters, then why is it so hard to doubt our moral beliefs? Or, to put it another way, why does uncertainty about how the world is come easier than uncertainty about how the world, from an objective stance, ought to be?
Many groups in America have experienced an “Othering” while engaging in any sort of relationship within the U.S. Groups specifically placed outside of America’s embrace include almost all minorities and the poor. I will not go full anti-Trump-administration and pretend this had not been occurring under every administration since America’s birth. It feels as if even the crisis in Puerto Rico will not go down as the beginning of a new era in American policy. So many are still left without power, and the death toll has crept up close to 1,000 people.
The goal of the group is to foster serious discussion of ideas with ethical elements (i.e., the types of ideas featured on the blog) with civility and open-mindedness.
If the idea that moral judgment is appropriate only for things people control is applied consistently, Nagel argues, it leaves few moral judgments intact. “Ultimately,” he says, “nothing or almost nothing about what a person does seems to be under his control.” Most of what people are praised or blamed for is a matter of luck.