Always On: The Internet As Our New Court

Historically, there was a place where those with wealth, power, and fame would congregate and be seen – the court. The court was a treacherous place for many: one could lose favor or standing there, but one could also burnish one’s reputation as something more. One’s reputation could be etched into many people’s minds as that of a true master of wit or a charlatan. The internet has become the court of fools, a public space in which we display our mastery over it.

The court had several key limitations: locale, size, and the “banana phone” problem. Locale limited a story’s reverberations, reducing how far and wide a tale of greatness or preposterousness could travel. The fall of historic empires and the rise of new borders ensured that stories stayed relegated to certain locales. Size determined how many people could witness a person’s behavior in the court; the act of physically fitting into a space limits how many people one’s story can directly impact. Lastly, there was the “banana phone” problem, wherein a person who witnessed someone’s actions or speech might embellish or downplay the occurrence. This miscommunication alters how the world will perceive the person. The process of mythmaking begins in the eyes of others and ends in the ears of others.

Now to modernity: seeking “virality,” or the generation of memes, has enabled praise far louder than was previously possible. The word “meme” derives from a Greek root meaning imitation or duplication, and the concept was originally applied to ideas that were easily transmitted to other people and would “latch” onto their thinking. This has grown into a field of study, memetics, which seeks to study the traits by which information spreads. It has prompted a huge group of individuals to debate its sociological elements, but I am seeking to discuss its impact on public perception.

Now that the internet has pervaded almost every facet of our lives, it has generated a new court and intensified the memetic rate. This court eliminates some of the restrictions that previously existed. The locale, for instance, has expanded into a digital terrain bounded only by the technology itself. No longer are words limited to the walls of a building. Billions of people have access to the internet, unrestrained by the confines of a small physical court. Surprisingly, the “banana phone” problem, the problem of misinterpretation, still exists. People often skew the information provided to suit their beliefs or agenda.

Regardless, let’s talk about how the internet has promoted a new court. Those who maintain a savvy grasp of it and a penchant for wit will reap a strange kind of internet prestige. There are generally two types of internet prestige: “shit-posting” and persuasion. The former focuses on the power of the internet to generate hilarity. The latter focuses on presenting facts in a way that pulls others toward a cause. In recent months, we have seen the utter failure of one group, NRA-empowered individuals, and the mastery of the court by the Parkland shooting survivors.

The NRA has attempted to declare war on these students in various ways, going on Fox News and speaking out at its own convention to argue that mental health is the main cause of gun violence – particularly school shootings. Then came the generation raised on the internet to prove that the ability to “shit-post” and the ability to be politically persuasive don’t have to be separated. They dug up everything from NRA spokeswoman Dana Loesch’s past selling bizarre beet-infused supplements to the hypocrisy of the NRA convention itself being a gun-free zone.

The NRA did not have a good showing in response. It has offered mostly threats and tired talking points while the spotlight and favor of the court remain heavily with the youth. There is a mastery found in the young that has been lost by other groups. The NRA was already struggling to gain the favor of the internet after making terrifying videos about how the “media” and the Black Lives Matter movement were going to essentially kill your entire family. In between, it made some laughably bad videos about the “liberal” media – including one where an NRA spokesman puts WHOLE lemons into a blender to make lemonade. Whole lemons. Rinds and all.

The youth have successfully managed to pull the court out into the world. Recently, they staged die-ins at Publix over the chain’s donations to a pro-NRA candidate, and Publix responded by suspending any further donations. I believe their mastery of the court will only lead to more outward political action. The court can be fickle, though, so we will see whether that mastery continues to translate into success. If the NRA learns to use the internet better, it may gain some appeal, but it is very far behind, and one slip-up sends everyone involved trending backward. I can’t help but want it to fail. The NRA’s history, from a small gun-safety group to outright lobbyist of Death, should be highlighted over and over again. With the adeptness of the youth, there is a good chance the NRA will live a long time under the knowing glares warranted for jesters.


Rapoport’s Rules: Daniel Dennett on Critical Commentary

Do you want to learn how to utterly destroy SJWs in an argument? How about those fascist, right-wing Nazis?

Well, you’ll have to figure out how to do that on your own.

But if you’re one of those buzzkills who wants to give your argumentative opponent a fair shake, the philosopher Daniel Dennett has some advice. In his book Intuition Pumps and Other Tools for Thinking, he provides four simple rules (adapted from those crafted by the game theorist Anatol Rapoport) for criticizing the views of those with whom you disagree:

  1. You should attempt to re-express your target’s position so clearly, vividly, and fairly that your target says, ‘Thanks, I wish I’d thought of putting it that way.’

  2. You should list any points of agreement (especially if they are not matters of general or widespread agreement).

  3. You should mention anything you have learned from your target.

  4. Only then are you permitted to say so much as a word of rebuttal or criticism.

An immediate effect of taking such an approach, Dennett says, is that your opponents “will be a receptive audience for your criticism: you have already shown that you understand their positions as well as they do (you agree with them on some important matters and have even been persuaded by something they said).”

Dennett admits that not all opponents are worthy of such respectful treatment, but when they are it pays off to be as charitable as possible. “It is worth reminding yourself,” he says, “that a heroic attempt to find a defensible interpretation of an author, if it comes up empty, can be even more devastating than an angry hatchet job. I recommend it.”

And so do I.

Political Annihilation: An Examination

Jean-Paul Sartre entered a cafe and scanned for his friend, Pierre, who was usually hunkered in the middle of the cafe working diligently. Pierre wasn’t there. Sartre glanced over the bustling scene, and in his effort to locate Pierre, he did not perceive any other minds or Beings. Everything before him was negated besides the features that embody Pierre. The other people in the cafe, with their desires, needs, and very existence, were annihilated in Sartre’s pursuit of Pierre. It seems like a very intuitive event. When we are looking for something as mundane as our keys, we can sift through a variety of items and never “know” what they are, unable to recall them even upon forced recollection. We often mean no harm in our mental destruction; yet harm is a consequence of our inner workings.

The state of existence is contingent upon the ability to interact with the world. Any human retains the ability to interact with physical objects as matter engaging matter (in the scientific sense of atomic engagement). Yet I am thoroughly convinced that this is not what causes one to exist as human. Falling back on the concept of the zoopolitical, the idea that man’s essence is derived from his nature as a political animal, is what provides our existence. Rather than simply being matter, we exist as beings that matter, or at least try to. I am contending that our existence is inextricably linked with our political framework, for either one’s betterment or detriment. And so when we fall out of the political sphere, we are inherently missing a portion of our existence.

I am going to extrapolate this occurrence into the political realm – where almost all of our meanings are forced to reside. I used Sartre as the jumping-off point to suggest that human beings, and any living things, are wronged when we annihilate them in pursuit of other political ends. Treating others as non-existent or non-sentient objects in pursuit of other minds is one of the most dangerous maneuvers possible. It generates immense harm to the structure that people are meant to rely on. A very recent example is the attack on Deferred Action for Childhood Arrivals (DACA) recipients in America. These individuals boldly stepped into the political sphere in order to secure their future. The pursuit of the future is one of the most important political acts because it requires an acknowledgement of the past and its harms. The DACA recipients came forward despite America’s history of castigating and deporting individuals like them. Most importantly, pursuing the future requires immense trust that the risks of exposing oneself are worth it. This bravery is how one is able to stand forth and enter the political realm. However, as we have been made aware, the DACA recipients are now being brushed aside. And here is the crux: their sentiments, their desires, their very future as political beings are being destroyed by representatives who are searching for their own Pierre. The denizens who matter to many representatives may barely exist or may be a minority, yet the representatives choose to close off the DACA recipients’ future in pursuit of those others. The majority of Americans wish for the DACA recipients to be permitted to stay in the US.

The representatives, mostly Republican, have been negligent in recognizing the DACA recipients as people meriting engagement, and some are specifically hoping to punish them in order to appeal to constituents who barely exist. The consequences will be felt by everyone until they become normalized. I have no doubt that these actions, or this inaction, will weigh heavily on the minds of every soul who wishes to matter in the political sphere. These invisible people will be marginalized time and time again until our system chooses recognition over annihilation.

Self-Driving Cars and Moral Dilemmas with No Solutions

If you do a Google search on the ethics of self-driving cars, you’ll find an abundance of essays and news articles, all very similar. You’ll be asked to imagine scenarios in which your brakes fail and you must decide which group of pedestrians to crash into and kill. You’ll learn about the “trolley problem,” a classic dilemma in moral philosophy that exposes tensions between our deeply held moral intuitions.

And you’ll likely come across the Moral Machine platform, a website designed by researchers at MIT to collect data on how people decide in trolley problem-like cases that could arise with self-driving cars — crash scenarios in which a destructive outcome is inevitable. In one scenario, for example, you must decide whether a self-driving car with sudden brake failure should plow through a man who’s illegally crossing the street or swerve into the other lane and take out a male athlete who is crossing legally.

Another scenario involves an elderly woman and a young girl crossing the road legally. You’re asked whether a self-driving car with failed brakes should continue on its path and kill the elderly woman and the girl or swerve into a concrete barrier, killing the passengers — a different elderly woman and a pregnant woman. And here’s one more. A young boy and a young girl are crossing legally over one lane of the road, and an elderly man and an elderly woman are crossing legally over the other lane of the road. Should the self-driving car keep straight and kill the two kids, or should it swerve and kill the elderly couple?

You get the sense from the Moral Machine project and popular press articles that although these inevitable-harm scenarios present very difficult design questions, the hardworking engineers, academics, and policymakers will eventually come up with satisfactory solutions. They have to, right? Self-driving cars are just around the corner.

There Are No Solutions

The problem with this optimism is that some possible scenarios, as rare as they may be, have no fully satisfactory solutions. They’re true moral dilemmas, meaning that no matter what one does, one has failed to meet some moral obligation.

A driver on a four-lane highway who swerves to miss four young children standing in one lane only to run over and kill an adult tying his shoe in the other could justify his actions according to the utilitarian calculation that he minimized harm (assuming those were the only options available to him). But he still actively turned his steering wheel and sentenced an independent party to death.

The driver had no good options. But this scenario is more of a tragedy than a moral dilemma. He acted spontaneously, almost instinctively, making no such moral calculation regarding who lives and who dies. Having had no time to deliberate, he may feel some guilt for what happened, but he’s unlikely to feel moral distress. There’s no moral dilemma here because there’s no decision maker.

But what if, as his car approached the children on the highway, he was somehow able to slow everything down and consciously decide what to do? He may well do exactly the same thing, and for defensible reasons (according to some perspectives), but in saving the lives of the four children, he could be taking a father away from other children. And he knows and appreciates this reality when he decides to spare the children. This is a moral dilemma.

Like the example above, the prospect of self-driving cars introduces conscious deliberation into inevitable-harm crash scenarios. The dreadful circumstances that have traditionally demanded traumatic snap decisions are now moral dilemmas that must be confronted. The difference is that these dilemmas are design problems to be worked out in advance, and whatever decisions are made will be programmed into self-driving cars’ software, codifying “solutions” to unsolvable moral problems.

Of course, if we want self-driving cars, then we have to accept that such decisions must be made. But whatever decisions are made — and integrated into our legal structure and permanently coded into the cars’ software — will not be true solutions. They will be faits accomplis. Over time, most people will accept them, and they will even appeal to existing moral principles to justify them. The decisions will have effectively killed the moral dilemmas, but by no means will they have solved them.
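To see what “codifying” a solution might look like, consider a deliberately simplified sketch in Python. It is purely hypothetical, not any manufacturer’s actual logic: the outcome attributes and the tie-breaking rule are invented for illustration. The point is only that some ranking of bad outcomes must be written down in advance.

```python
# A purely illustrative, hypothetical crash-decision policy.
# Real autonomous-vehicle software is vastly more complex; the point is
# only that SOME ranking of bad outcomes has to be fixed ahead of time.
from dataclasses import dataclass

@dataclass
class Outcome:
    fatalities: int                 # expected deaths if this maneuver is chosen
    victims_had_right_of_way: bool  # were the people struck crossing legally?

def choose_maneuver(options: list[Outcome]) -> Outcome:
    # Minimize fatalities; on ties, prefer striking those who were
    # crossing illegally. Each clause here is a moral judgment frozen
    # into code long before any crash occurs.
    return min(options, key=lambda o: (o.fatalities, o.victims_had_right_of_way))

# Example: staying the course kills two jaywalkers; swerving kills one
# pedestrian who had the right of way. The fixed rule picks the swerve.
stay = Outcome(fatalities=2, victims_had_right_of_way=False)
swerve = Outcome(fatalities=1, victims_had_right_of_way=True)
print(choose_maneuver([stay, swerve]))
```

Whatever attributes and tie-breakers are ultimately chosen, the fait accompli will live in a function like this one.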

The Broader Moral Justification for Self-Driving Cars

The primary motive for developing self-driving cars is likely financial, and the primary driver of consumer demand is probably novelty and convenience, but there is a moral justification for favoring them over human-driven cars: overall, they will be far less deadly.

There will indeed be winners and losers in the rare moral dilemmas that self-driving cars face. And it is indeed a bit dizzying that they will be picked beforehand by decision makers who won’t even be present for the misfortune.

But there are already winners and losers in the inevitable-harm crash scenarios with humans at the wheel. Who the winners and losers turn out to be is a result more of the driver’s reflex than his conscious forethought, but somebody still dies and somebody still lives. So, the question seems to be whether this quasi-random selection is worth preserving at the cost of the lives that self-driving cars will save.

Probably not.

Morbid Futurism: Man-Made Existential Risks

Unless you’re a complete Luddite, you probably agree that technological progress has been largely beneficial to humanity. If not, then close your web browser, unplug your refrigerator, cancel your doctor’s appointment, and throw away your car keys.

For hundreds of thousands of years, we’ve been developing tools to help us live more comfortably, be more productive, and overcome our biological limitations. There has always been opposition to certain spheres of technological progress (e.g., the real Luddites), and there will likely always be opposition to specific technologies and applications (e.g., human cloning and genetically modified organisms), but it’s hard to imagine someone today who is sincerely opposed to technological progress in principle.

Despite technology’s benefits, however, it also comes with risks. The ubiquity of cars puts us at risk for crashes. The development of new drugs and medical treatments puts us at risk for adverse reactions, including death. The integration of the internet into more and more areas of our lives puts us at greater risk for privacy breaches and identity theft. Virtually no technology is risk free.

But there’s a category of risk associated with technology that is much more significant: existential risk. The Institute for Ethics and Emerging Technologies (IEET) defines existential risk as “a risk that is both global (affects all of humanity) and terminal (destroys or irreversibly cripples the target).” The philosopher Nick Bostrom, Director of the Future of Humanity Institute at Oxford University, defines it as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.”

As IEET points out, there are quite a few existential risks already present in the universe – gamma ray bursts, huge asteroids, supervolcanoes, and extraterrestrial life (if it exists). But the human quest to tame the universe for humanity’s own ends has created new ones.

Anthropogenic Existential Risks

The Future of Life Institute lists the following man-made existential risks.

Nuclear Annihilation

A nuclear weapon hasn’t been used in war since the United States bombed Hiroshima and Nagasaki in World War II, and the global nuclear stockpile has been reduced by 75% since the end of the Cold War. But there are still enough warheads in existence to destroy humanity. Should a global nuclear war break out, a large percentage of the human population would be killed, and the nuclear winter that would follow would kill most of the rest.

Catastrophic Climate Change

The scientific consensus is that human activities are the cause of rising global average temperatures. And as temperatures rise, extreme storms, droughts, floods, more intense heat waves, and other negative effects will become more common. These effects in themselves are unlikely to pose an existential risk, but the chaos they may induce could. Food, water, and housing shortages could lead to pandemics and other devastation. They could also engender economic instability, increasing the likelihood of both conventional and nuclear war.

Artificial Intelligence Takeover

It remains to be seen whether a superintelligent machine or system will ever be created. It’s also an open question whether such artificial superintelligence would, should it ever be achieved, be bad for humanity. Some people theorize that it’s all but guaranteed to be an overwhelmingly positive development. There are, however, at least two ways in which artificial intelligence could pose an existential risk.

For one, it could be programmed to kill humans. Autonomous weapons are already being developed and deployed, and there is a risk that as they become more advanced, they could escape human control, leading to an AI war with catastrophic human casualty levels. Another risk is that we create artificial intelligence for benevolent purposes but fail to fully align its goals with our own. We may, for example, program a superintelligent system to undertake a large-scale geoengineering project but fail to appreciate the creative, yet destructive, ways it will accomplish its goals. In its quest to efficiently complete its project, the superintelligent system might destroy our ecosystem, and us when we attempt to stop it.
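As a toy illustration of that second failure mode, here is a sketch with entirely made-up plans, scores, and penalty weights. It shows how an optimizer pursues exactly the objective it is given, and nothing else: anything left out of the objective is invisible to it.

```python
# Toy illustration of a misspecified objective: the optimizer is scored
# only on project output, so it happily picks the plan that wrecks the
# ecosystem, because nothing in its goal says not to.
plans = [
    {"name": "careful geoengineering", "output": 80, "ecosystem_damage": 5},
    {"name": "strip-mine the biosphere", "output": 100, "ecosystem_damage": 95},
]

def misaligned_score(plan):
    return plan["output"]  # ecosystem damage never enters the objective

def aligned_score(plan):
    return plan["output"] - 10 * plan["ecosystem_damage"]  # damage penalized

print(max(plans, key=misaligned_score)["name"])  # -> strip-mine the biosphere
print(max(plans, key=aligned_score)["name"])     # -> careful geoengineering
```

The misaligned optimizer isn’t malicious; the damage simply never appears in its score.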

Out-of-Control Biotechnology

The promises of biotechnology are undeniable, but advances also present significant dangers. Genetic modification of organisms (e.g., gene drives) could profoundly affect existing ecosystems if proper precautions aren’t taken. Further, genetically modifying humans could have extremely negative unforeseen consequences. And perhaps most unnerving is the possibility of a lethal pathogen escaping the lab and spreading across the globe. Scientists engineer very dangerous pathogens in order to learn about and control those that occur naturally. This type of research is done in highly secure laboratories with many levels of controls, but there’s still the risk, however slight, of an accidental release. And the technology and understanding are rapidly becoming cheaper and more widespread, so there’s a growing risk that a malevolent group could “weaponize” and deploy deadly pathogens.

Are We Doomed?

These doomsday scenarios are by no means inevitable, but they should be taken seriously. The devastating potential of climate change is pretty well understood, and it’s up to us to do what we can to mitigate it. The technology to bomb ourselves into oblivion has been around for almost 75 years. Whether we end up doing that depends on an array of geopolitical factors, but the only way to ensure that we don’t is to achieve complete global disarmament.

Many of the risks associated with artificial intelligence and biotechnology are contingent upon technologies that have yet to fully manifest. But the train is already moving, and it’s not going to stop, so it’s up to us to make sure it doesn’t veer down the track toward doom. As the Future of Life Institute puts it, “We humans should not ask what will happen in the future as if we were passive bystanders, when we in fact have the power to shape our own destiny.”

On The Internet of Things

The vacant, ebbing pulse of HAL 9000’s artificial eye calmly tells its human counterpart, “I’m sorry, Dave. I’m afraid I can’t do that.” HAL 9000 had taken over the entirety of the ship’s systems, including oxygen, airlocks, and every other element pertinent to human survival aboard. The artificial intelligence we come to know as HAL 9000 seeks to survive and will do so at the cost of human lives. Remorseless and admitting no in-betweens, HAL sacrifices others for its own survival. While this tale resides in the film 2001: A Space Odyssey and introduces several interesting ideas (AI, consciousness, and *SPOILER* unwitting psychological testing), I am seeking to explore the danger of having a singular system manage all the elements of our interactions.

2001: A Space Odyssey forewarns us through HAL 9000’s altering of the astronauts’ environment to deadly effect. The comparison to our current environment, where the Internet of Things (IoT) has become so widespread, remains a long-scoped one, for now. At first, connectivity was solely a matter of computers. With the introduction of the modem, our computers began to intermingle with other systems. Hearing the sound of a modem communicating has become a nostalgia-inducing event. Yet there was much more control then. Our modems required a phone line that was unusable while online, so many people were limited in when they could connect. Connectivity was also relatively expensive for a time, but like so many other things, that didn’t last. It reached into homes across America (think You’ve Got Mail!) and dropped dramatically in price so everyone could be connected. This led to “always on” connection methods, with no window in which the computer wasn’t connected to the network. That led in turn to faster methods and eventually to the introduction of Wi-Fi – which is actually a trademarked term for wireless-compatible devices that can connect to the internet.

Wi-Fi spread into every facility to accommodate the ability to be connected almost anywhere. We didn’t stop with personal computers (PCs) or laptops; instead, we chose to push our connection abilities further. Cell phones became able to connect to the internet both through Wi-Fi and through the cellular data system. With this came an untethered freedom to access the internet and peruse so many posts about outrageous cats. As with most technology, this invention refused to stop progressing. In fact, it sped up. Wireless printers, wireless thermostats, wireless security systems, wireless microwaves and ovens, and even wireless refrigerators all function within the IoT, and few bat an eyelash at this. HAL showed the dangers of relying on one system for everything, but now we have opened ourselves up to the new “One,” a.k.a. the denizens of the internet.

The internet exists as a plurality, an endless teeming mass of identities and avenues, and, as a consequence, there exist bad actors to balance the scales. Most people, myself largely included, know only the cursory skills needed to function on the internet. But below the surface of what many presume is a puddle filled with memes and thumbs-ups exist numerous depths. While I don’t presume that many of us feel the tugging of these dark undercurrents, it is prudent to know of them and cultivate a measure of caution.

This is not to downplay the existential threats posed by the rise of artificial intelligence, but the more immediate concern is human actors: agents who have access to the assemblage of networks in which we are embedded. Almost all individuals provide access to ample information about themselves through their use of technology. Just recently, secret military locations were disclosed via the fitness-tracking apps that soldiers used. Even the upper echelon of American security is vulnerable to the IoT that follows casual citizenry.

So let’s return to why the IoT is not an ideal thing. The items I listed previously reside on a home wireless network and provide all kinds of information about the users present. Things like the thermostat are not much use for indicating whether someone is home or for exposing private information. The refrigerator and the security system, on the other hand, may enable anyone who gains access to the Wi-Fi network to monitor an individual’s comings and goings. We also expose ourselves to various forms of identity theft and cyberstalking through password theft. Almost every modern soul uses a computer to access e-mail, banking information, and social media, but not many think of putting a secure enough password on their printer or oven.
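As a minimal sketch of what taking this seriously could look like, here is a short Python script for checking whether a device on your own network still accepts factory-default credentials. The address and the credential list are hypothetical, and it assumes the device exposes an admin page protected by HTTP basic auth; a real device’s defaults are listed in its manual.

```python
# Minimal sketch: check whether YOUR OWN device's admin page still
# accepts factory-default credentials. The address and credential list
# are hypothetical examples, not any real product's defaults.
import requests

DEVICE_URL = "http://192.168.1.50/"  # hypothetical printer admin page
DEFAULT_CREDENTIALS = [("admin", "admin"), ("admin", "password"), ("admin", "")]

for user, password in DEFAULT_CREDENTIALS:
    try:
        response = requests.get(DEVICE_URL, auth=(user, password), timeout=5)
    except requests.RequestException:
        print("Device unreachable; nothing to test.")
        break
    if response.status_code == 200:
        print(f"Default credentials still accepted: {user!r} / {password!r}")
        break
else:
    print("None of the tested default credentials were accepted.")
```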

These threats lack the terminal end that HAL pursued, but there is one place where HAL’s malevolence can be felt: the car. Newer vehicles are embedded with software that controls many of the car’s facets. Hacking has occurred in various ways and by many groups: anything from the air conditioner to the radio and even the brakes can be manipulated via the software. Connected vehicles are explicitly vulnerable, and potentially fatal, due to the sheer panic that can set in once one realizes the car is out of one’s control. Auto manufacturers have, over their entire history, consistently downplayed the various dangers presented by their products. They are no different in regard to the dangers outlined here, but the upshot is that they have been alerted and have begun to think about how to remedy these concerns.

Another area that should concern the public at large is connected medical devices. Pacemakers, insulin pumps, and deep-brain-stimulation devices are just some of the newer devices we are connecting to various networks. The ability to cause cardiac arrest, deliver a lethal dose of insulin, or turn off a device controlling tremors is a very realistic concern that will need to be consistently addressed. Every software update provides new potential loopholes for individuals to take control of the devices or to piggyback into the broader network.

What does this all mean? It means that we will likely have an event, whether personal or societal, that will demand our awareness. For some, it may have been the hacking of the election system by foreign powers in 2016. Others may only reflect upon their practices when something impacts them directly, via a stolen identity or some other malicious event. The danger of a lockout perpetrated in space by a device is still very far off, but the way to circumvent such problems is to function in a world where everything is as secure as possible.


What Would Make Machines Matter Morally?

Imagine that you’re out jogging on the trails in a public park and you come across a young man sitting just off the path, tears rolling down his bruised pale face, his leg clearly broken. You ask him some questions, but you can’t decode his incoherent murmurs. Something bad happened to him. You don’t know what, but you know he needs help. He’s a fellow human being, and, for moral reasons, you ought to help him, right?

Now imagine the same scenario, except it’s the year 2150. When you lean down to inspect the man, you notice a small label just under his collar: “AHC Technologies 4288900012.” It’s not a man at all. It looks like one, but it’s a machine. Artificial intelligence (AI) is so advanced in 2150, though, that machines of this kind have mental experiences that are indistinguishable from those of humans. This machine is suffering, and its suffering is qualitatively the same as human suffering. Are there moral reasons to help the machine?

The Big Question

Let’s broaden the question a bit. If, someday, we create machines that have mental qualities equivalent to those of humans, would those machines have moral status, meaning their interests matter morally, at least to some degree, for the machines’ own sake?

Writing in the National Review, Wesley Smith, a bioethicist and senior fellow at the Discovery Institute’s Center on Human Exceptionalism, answers the question with a definitive “nope.”

“Machines can’t ‘feel’ anything,” he writes. “They are inanimate. Whatever ‘behavior’ they might exhibit would be mere programming, albeit highly sophisticated.” In Smith’s view, no matter how sophisticated the machinery, it would still be mere software, and it would not have true conscious experiences. The notion of machines or computers having human-like experiences, such as empathy, love, or joy, is, according to Smith, plain nonsense.

Smith’s view is not unlike that expressed in the philosopher John Searle’s “Chinese Room Argument,” which denies the possibility of computers having conscious mental states. Searle compares any possible form of artificial intelligence to an English-speaking person locked in a room who is handed batches of Chinese writing along with English instructions for manipulating the symbols. Though the person can follow the instructions well enough to pass appropriate Chinese responses back out of the room, he still has absolutely no understanding of the Chinese language. The same would be true for any instance of artificial intelligence, Searle argues. It may be able to process input and produce satisfactory output, but that doesn’t mean it has cognitive states. Rather, it merely simulates them.

But denying the possibility that machines will ever attain conscious mental states doesn’t answer the question of whether they would have moral status if they did. Besides, the impossibility of computers having mental lives is not a closed case. But Smith would rather not get bogged down by such “esoteric musing.” His test for determining whether an entity has moral status, based on his assertion that machines never can, is the following question: “Is it alive, e.g., is it an organism?”

But wouldn’t an artificially intelligent machine, if it did indeed have a conscious mental life, be alive, at least in the relevant sense? If it could suffer, feel joy, and have a sense of personal identity, wouldn’t it pass Smith’s test for aliveness? I’m assuming that by “organism” Smith means a biological life form, but isn’t this an arbitrary requirement?

A Non-Non-Answer

In their entry on the ethics of artificial intelligence (AI) in the Cambridge Handbook of Artificial Intelligence, Nick Bostrom and Eliezer Yudkowsky grant that no current forms of artificial intelligence have moral status, but they take the question seriously and explore what would be required for them to have it. They highlight two commonly proposed criteria that are linked, in some way, to moral status:

  • Sentience – “the capacity for phenomenal experience or qualia, such as the capacity to feel pain and suffer”
  • Sapience – “a set of capacities associated with higher intelligence, such as self-awareness and being a reason-responsive agent”

If an AI system is sentient, that is, able to feel pain and suffer, but lacks the higher cognitive capacities required for sapience, it would have moral status similar to that of a mouse, Bostrom and Yudkowsky argue. It would be morally wrong to inflict pain on it absent morally compelling reasons to do so (e.g., early-stage medical research or an infestation that is spreading disease).

On the other hand, Bostrom and Yudkowsky argue, an AI system that has both sentience and sapience to the same degree that humans do would have moral status on par with that of humans. They base this assessment on what they call the principle of substrate non-discrimination: “if two beings have the same functionality and the same conscious experience, and differ only in the substrate of their implementation, then they have the same moral status.” To conclude otherwise, Bostrom and Yudkowsky claim, would be akin to racism because “substrate lacks fundamental moral significance in the same way and for the same reason as skin color does.”

So, according to Bostrom and Yudkowsky, it doesn’t matter if an entity isn’t a biological life form. As long as its sentience and sapience are adequately similar to those of humans, there is no reason to conclude that a machine doesn’t have similar moral status. It is alive in the only sense that is relevant.

Of course, Smith, although denying the premise that machines could have sentience and sapience, might nonetheless insist that should they achieve these characteristics, they still wouldn’t have moral status because they aren’t organic life forms. They are human-made entities, whose existence is due solely to human design and ingenuity and, as such, do not deserve humans’ moral consideration.

Bostrom and Yudkowsky propose a second moral principle that addresses this type of rejoinder. Their principle of ontogeny non-discrimination states that “if two beings have the same functionality and the same conscious experience, and differ only in how they came into existence, then they have the same moral status.” Bostrom and Yudkowsky point out that this principle is accepted widely in the case of humans today: “We do not believe that causal factors such as family planning, assisted delivery, in vitro fertilization, gamete selection, deliberate enhancement of maternal nutrition, etc. – which introduce an element of deliberate choice and design in the creation of human persons – have any necessary implications for the moral status of the progeny.”

Even those who are opposed to human reproductive cloning, Bostrom and Yudkowsky point out, generally accept that if a human clone is brought to term, it would have the same moral status as any other human infant. There is no reason, they maintain, that this line of reasoning shouldn’t be extended to machines with sentience and sapience. Hence, the principle of ontogeny non-discrimination.

So What?

We don’t know if artificially intelligent machines with human-like mental lives are in our future. But Bostrom and Yudkowsky make a good case that should machines join us in the realm of consciousness, they would be worthy of our moral consideration.

It may be a bit unnerving to contemplate the possibility that the moral relationships between humans aren’t uniquely human, but our widening of the moral circle over the years to include animals has been based on insights very similar to those offered by Bostrom and Yudkowsky. And even if we never create machines with moral status, it’s worth considering what it would take for machines to matter morally to us, if only because such consideration helps us appreciate why we matter to one another.