
What Would Make Machines Matter Morally?


Imagine that you’re out jogging on the trails in a public park and you come across a young man sitting just off the path, tears rolling down his bruised, pale face, his leg clearly broken. You ask him some questions, but you can’t decode his incoherent murmurs. Something bad happened to him. You don’t know what, but you know he needs help. He’s a fellow human being, and, for moral reasons, you ought to help him, right?

Now imagine the same scenario, except it’s the year 2150. When you lean down to inspect the man, you notice a small label just under his collar: “AHC Technologies 4288900012.” It’s not a man at all. It looks like one, but it’s a machine. Artificial intelligence (AI) is so advanced in 2150, though, that machines of this kind have mental experiences that are indistinguishable from those of humans. This machine is suffering, and its suffering is qualitatively the same as human suffering. Are there moral reasons to help the machine?

The Big Question

Let’s broaden the question a bit. If, someday, we create machines that have mental qualities equivalent to those of humans, would those machines have moral status, meaning their interests matter morally, at least to some degree, for the machines’ own sake?

Writing in the National Review, Wesley Smith, a bioethicist and senior fellow at the Discovery Institute’s Center on Human Exceptionalism, answers the question with a definitive “nope.”

“Machines can’t ‘feel’ anything,” he writes. “They are inanimate. Whatever ‘behavior’ they might exhibit would be mere programming, albeit highly sophisticated.” In Smith’s view, no matter how sophisticated the machinery, it would still be mere software, and it would not have true conscious experiences. The notion of machines or computers having human-like experiences, such as empathy, love, or joy, is, according to Smith, plain nonsense.

Smith’s view is not unlike that expressed in the philosopher John Searle’s “Chinese Room Argument,” which denies the possibility of computers having conscious mental states. Searle likens any possible form of artificial intelligence to an English-speaking person locked in a room and handed batches of Chinese writing along with English instructions for manipulating the symbols. By following those instructions, the person can produce Chinese responses convincing enough to pass for a fluent speaker’s, yet he still has absolutely no understanding of the Chinese language. The same would be true for any instance of artificial intelligence, Searle argues. It may be able to process input and produce satisfactory output, but that doesn’t mean it has cognitive states. Rather, it merely simulates them.

But denying the possibility that machines will ever attain conscious mental states doesn’t answer the question of whether they would have moral status if they did. Besides, the impossibility of computers having mental lives is far from a settled matter. Smith, however, would rather not get bogged down by such “esoteric musing.” His test for determining whether an entity has moral status, a test he asserts no machine could ever pass, is the following question: “Is it alive, e.g., is it an organism?”

But wouldn’t an artificially intelligent machine, if it did indeed have a conscious mental life, be alive, at least in the relevant sense? If it could suffer, feel joy, and have a sense of personal identity, wouldn’t it pass Smith’s test for aliveness? I’m assuming that by “organism” Smith means a biological life form, but isn’t this an arbitrary requirement?

A Non-Non-Answer

In their entry on the ethics of artificial intelligence in the Cambridge Handbook of Artificial Intelligence, Nick Bostrom and Eliezer Yudkowsky grant that no current forms of AI have moral status, but they take the question seriously and explore what would be required for them to have it. They highlight two commonly proposed criteria that are linked, in some way, to moral status:

  • Sentience – “the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer”
  • Sapience – “a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent”

If an AI system is sentient, that is, able to feel pain and suffer, but lacks the higher cognitive capacities required for sapience, it would have moral status similar to that of a mouse, Bostrom and Yudkowsky argue. It would be morally wrong to inflict pain on it absent morally compelling reasons to do so (e.g., early-stage medical research or an infestation that is spreading disease).

On the other hand, Bostrom and Yudkowsky argue, an AI system that has both sentience and sapience to the same degree that humans do would have moral status on par with that of humans. They base this assessment on what they call the principle of substrate non-discrimination: “if two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.” To conclude otherwise, Bostrom and Yudkowsky claim, would be akin to racism because “substrate lacks fundamental moral significance in the same way and for the same reason as skin color does.”

So, according to Bostrom and Yudkowsky, it doesn’t matter if an entity isn’t a biological life form. As long as its sentience and sapience are adequately similar to those of humans, there is no reason to conclude that a machine doesn’t have similar moral status. It is alive in the only sense that is relevant.

Of course, Smith, while denying the premise that machines could ever have sentience and sapience, might nonetheless insist that even if they achieved these capacities, they still wouldn’t have moral status because they aren’t organic life forms. They are human-made entities whose existence is due solely to human design and ingenuity; as such, they do not deserve humans’ moral consideration.

Bostrom and Yudkowsky propose a second moral principle that addresses this type of rejoinder. Their principle of ontogeny non-discrimination states that “if two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.” Bostrom and Yudkowsky point out that this principle is accepted widely in the case of humans today: “We do not believe that causal factors such as family planning, assisted delivery, in vitro fertilization, gamete selection, deliberate enhancement of maternal nutrition, etc. – which introduce an element of deliberate choice and design in the creation of human persons – have any necessary implications for the moral status of the progeny.”

Even those who are opposed to human reproductive cloning, Bostrom and Yudkowsky point out, generally accept that if a human clone is brought to term, it would have the same moral status as any other human infant. There is no reason, they maintain, that this line of reasoning shouldn’t be extended to machines with sentience and sapience. Hence, the principle of ontogeny non-discrimination.

So What?

We don’t know if artificially intelligent machines with human-like mental lives are in our future. But Bostrom and Yudkowsky make a good case that, should machines ever join us in the realm of consciousness, they would be worthy of our moral consideration.

It may be a bit unnerving to contemplate the possibility that the moral relationships between humans aren’t uniquely human, but our widening of the moral circle over the years to include animals has been based on insights very similar to those offered by Bostrom and Yudkowsky. And even if we never create machines with moral status, it’s worth considering what it would take for machines to matter morally to us, if only because such consideration helps us appreciate why we matter to one another.
