The “Forgotten” Bioethicist

In the bioethics field, praise is heaped upon Beauchamp and Childress (B and C) for their guiding text, “Principles of Biomedical Ethics.” They assuredly reap rewards by issuing revised editions of the book – however minor the revisions.

Before I furthered my bioethics training, I encountered another ethicist, W.D. Ross. I thoroughly enjoyed his book, “The Right and the Good,” for its insights and its attempt to generate a complete ethical theory. It was the most robust work I had encountered in my readings.

Ross was a Scottish philosopher who died in 1971. He is well regarded in academic institutions, which makes it sensible that B and C encountered his seminal text. He attempted to construct a sound ethical theory: one that could assist with almost every ethical problem encountered.

“Fidelity, reparation, gratitude, non-maleficence, justice, beneficence, and self-improvement”

These are the derived “principles” that Ross used to create his ethical guidance.

B and C clearly looked to these principles for guidance, even directly pulling “justice” and “beneficence” from him for their book.

He even addresses the critical element called “moral residue.” This is an instance in which the principles have taken you to their limit and you are left having done the best you can. The action you took was essential, but it leaves you dissatisfied. Ross openly admits that life functions like this.

Leaning too heavily on the “perfect” action would cripple many people’s decision making. Prescriptive ethics can be pleasant and neat at times, yet there is an absence in them too. Are they “true” moral dilemmas if they can be resolved with a quick formula? Life can certainly function as a series of simple events. But when it gets difficult, that is when the more robust systems work. That is where true ethics lives.

The most significant feature is the limit and humility found in Ross’s text. He acknowledges our innate capacity for moral failure. He doesn’t seek to coddle our damaged egos after we fail. He acknowledges life’s messy and imprecise nature. Compared to the approach of B and C, which is used to inform bioethics and the entire field of human subject research ethics, his text clarifies the difficulty of doing our moral duties – even that we may struggle morally and that ethical resolution may never come.

I am convinced that these ethical complications make for our most stimulating works of fiction, since they don’t provide a simple solution. A classic example is “Sophie’s Choice,” in which Sophie is told to choose which of her two children will live; the other is fated to certain death. She chooses, but she never recovers from knowing the choice she made. Her conscience is torn asunder for the remainder of her life.

The only real takeaway is that ethics shines best when it digs into the nuance. When it says: we can’t provide a panacea. You will struggle with your choices … and that is expected.

Law and Morality

It’s hard to pin down how exactly the law relates to morality.

Some people believe that acting ethically means simply following the law. If it’s legal, then it’s ethical, they say. Of course, a moment’s reflection reveals this view to be preposterous. Lying and breaking promises are legal in many contexts, but they’re nearly universally regarded as unethical. Paying workers the legal minimum wage is legal, but failing to pay workers a living wage is seen by some as immoral. Abortion has been legal since the early 1970s, but many people still think it’s immoral. And discrimination based on race used to be legal, but laws outlawing it were passed because it was deemed immoral.

Law and morality do not always coincide. Sometimes the legal action isn’t the ethical one. People who realize this might have a counter-mantra: Just because it’s legal doesn’t mean that it’s ethical. This is a more sophisticated perspective than the one that simply conflates law and ethics.

According to this perspective, acting legally is not necessarily acting ethically, but it is necessary for acting ethically. The law is the minimum requirement, but morality may require people to go above and beyond their basic legal obligations. From this perspective, paying employees at least the minimum wage is necessary for acting ethically, but morality may require that they be paid enough to support themselves and their families. Similarly, non-discrimination may be the minimum requirement, but one might think that actively recruiting and integrating minorities into the workplace is a moral imperative.

The notion that legal behavior is a necessary condition for ethical behavior seems to be a good general rule. Most illegal acts are indeed unethical. But what about the old laws prohibiting the education of slaves? Or anti-miscegenation laws criminalizing interracial marriage, which were on the books in some US states until the 1960s? It’s hard to argue that people who broke these laws necessarily acted unethically.

You could say that these laws are themselves immoral and that this places them in a different category than generally accepted laws. This is probably true. These legal obligations do indeed create dubious moral commitments. But how can you say that the moral commitments are dubious if law and morality are intertwined to the extent that one can’t act ethically without acting legally?

And aren’t there some conditions under which breaking a generally accepted law might still be the right thing to do?

What about breaking a generally accepted law to save a life? What if a man, after exhausting all other options, stole a cancer treatment to save his wife’s life? The legal bases for property rights and the prohibition against theft are generally accepted, and in most other contexts, the man would be condemned as both unethical and a criminal. But stealing the treatment to save his wife’s life seems, at the very least, morally acceptable. This type of situation suggests a counter-mantra to those who believe legality is a prerequisite for ethicality: Just because it’s illegal doesn’t mean it’s unethical.

This counter-mantra doesn’t suggest that the law is irrelevant to ethics. Most of the time, it’s completely relevant. Good laws are, generally speaking, or perhaps ideally speaking, a codification of our morality.

But the connection between law and morality is complex, and there may be no general rule that captures how the two are related.

Sometimes, actions that are perfectly legal are nonetheless unethical. Other times, morality requires that we not only follow the law but that we go above and beyond our positive legal obligations. Yet, there are also those times when breaking the law is at least morally permissible.

There are also cases in which we are morally obligated to follow immoral laws, such as when defiance would be considerably more harmful than compliance. We live in a pluralistic society where laws are created democratically, so we can’t just flout all the laws we think are immoral – morality is hardly ever that black and white anyway. And respect for the rule of law is necessary for the stability of our society, so there should be a pretty high threshold for determining that breaking a law is morally obligatory.

If there is a mantra that adequately describes the relationship between law and morality, it goes something like this: It depends on the circumstances. 

You’re Probably Not as Ethical as You Think You Are

Bounded Ethicality and Ethical Fading

What if someone made you an offer that would benefit you personally but would require you to violate your ethical standards? What if you thought you could get away with a fraudulent act that would help you in your career?

Most of us think we would do the right thing. We tend to think of ourselves as honest and ethical people. And we tend to think that, when confronted with a morally dubious situation, we would stand up for our convictions and do the right thing.

But research in the field of behavioral ethics says otherwise. Contrary to our delusions of impenetrable virtue, we are no saints.

We’re all capable of acting unethically, and we often do so without even realizing it.

In their book Blind Spots: Why We Fail to Do What’s Right and What to Do About It, Max Bazerman and Ann Tenbrunsel highlight the unintentional, but predictable, cognitive processes that result in people acting unethically. They make no claims about what is or is not ethical. Rather, they explore the ethical “blind spots,” rooted in human psychology, that prevent people from acting according to their own ethical standards. The authors are business ethicists and they emphasize the organizational setting, but their insights certainly apply to ethical decision making more generally.

The two most important concepts they introduce in Blind Spots are “bounded ethicality” and “ethical fading.”

Bounded Ethicality is derived from the political scientist Herbert Simon’s theory of bounded rationality – the idea that when people make decisions, they aren’t perfectly rational benefit maximizers, as classical economics suggests. Instead of choosing a course of action that maximizes their benefit, people accept a less than optimal but still good enough solution. They “satisfice” (a combination of “satisfy” and “suffice”), to use Simon’s term.

They do this because they don’t have access to all the relevant information, and even if they did, their minds wouldn’t have the capacity to adequately process it all. Thus, human rationality is bounded by informational and cognitive constraints.

Similarly, bounded ethicality refers to the cognitive constraints that limit people’s ability to think and act ethically in certain situations. These constraints blind individuals to the moral implications of their decisions, and they allow them to act in ways that violate the ethical standards that they endorse upon deeper reflection.

So, just as people aren’t rational benefit maximizers, they’re not saintly moral maximizers either.

Check out this video about bounded ethicality from the Ethics Unwrapped program at the University of Texas at Austin:

Ethical Fading is a process that contributes to bounded ethicality. It happens when the ethical implications of a decision are unintentionally disregarded during the decision-making process. When ethical considerations are absent from the decision criteria, it’s easier for people to violate their ethical convictions because they don’t even realize they’re doing so.

For example, a CEO might frame something as just a “business decision” and decide based on what will lead to the highest profit margin. Obviously, the most profitable decision might not be the most ethically defensible one. It may endanger employees, harm the environment, or even be illegal. But these considerations probably won’t come to mind if he’s only looking at the bottom line. And if they’re absent from the decision process, he could make an ethically suspect decision without even realizing it.

Check out this video about ethical fading from the Ethics Unwrapped program at the University of Texas at Austin.

You’re Not a Saint, So What Should You Do?

Nudge yourself toward morality.

Bazerman and Tenbrunsel recommend preparing for decisions in advance. Consider the motivations that are likely to influence you at the time of the decision and develop proactive strategies to reduce their influence. Pre-commitment strategies are highly effective. If someone publicly pre-commits to an ethical action, he’s more likely to follow through than if he doesn’t. Likewise, pre-committing to an intended ethical decision and sharing it with an unbiased and ethical person makes someone more likely to make the ethical decision in the future.

During actual decision making, it is crucial to elevate your abstract ethical values to the forefront of the decision-making process. Bazerman and Tenbrunsel point out that “rather than thinking about the immediate payoff of an unethical choice, thinking about the values and principles that you believe should guide the decision may give the ‘should’ self a fighting chance.” One strategy for inducing this type of reflection, they say, is to think about your eulogy and what you’d want to be written about the values and principles you lived by.

There’s also the “mom litmus test.” When tempted by a potentially unethical choice, ask yourself whether you’d be comfortable telling your mom (or dad or anyone else you truly respect) about the decision. Imagining your mom’s reaction is likely to bring abstract principles to mind, they contend.

Yet another strategy for evoking ethical values is to change the structure of the decision. According to Bazerman and Tenbrunsel, people are more likely to make the ethical choice if they have the chance to evaluate more than one option at a time. In one study, “individuals who evaluated two options at a time – an improvement in air quality (the ‘should’ choice) and a commodity such as a printer (the ‘want’ choice) – were more likely to choose the option that maximized the public good.” When participants evaluated these options independently, however, they were more likely to choose the printer.

In another study, people decided between two political candidates, one of higher integrity and one who promised more jobs. The people who evaluated the candidates side by side were more likely to pick the higher integrity candidate. Those who evaluated them independently were more likely to pick the one who would provide the jobs.

Bazerman and Tenbrunsel say this evidence suggests that reformulating an ethical quandary into a choice between two options, the ethical one and the unethical one, is helpful because it highlights “the fact that by choosing the unethical action, you are not choosing the ethical action.”

What Implications Do Bounded Ethicality and Ethical Fading Have for Moral Responsibility?

Bazerman and Tenbrunsel don’t address this question directly. But the notion that our default mode of ethical decision making in some circumstances is bounded by psychological and situational constraints – influences we’re not consciously aware of that affect our ethical decision-making abilities – seems to be in tension with the idea that we are fully morally responsible for all our actions.

The profit-maximizing CEO, for example, might be seen by his friends and peers as virtuous, caring, and thoughtful. He might care about his community and the environment, and he might genuinely believe that it’s unethical to endanger them. Still, he might unintentionally disregard the moral implications of illegally dumping toxic waste in the town river, harming the environment and putting citizens’ health at risk.

This would be unethical, for sure, but how blameworthy is he if he had yet to read Blind Spots and instead relied on his default psychology to make the decision? If ethical blind spots are constitutive elements of the human psyche, are the unethical actions caused by those blind spots as blameworthy as those that aren’t?

Either way, we can’t be certain that we’d have acted any differently in the same circumstances.

We’ll all fail the saint test at some point, but that doesn’t make us devils.

Learn More About Behavioral Ethics

Blind Spots: Why We Fail to Do What’s Right and What to Do About It

Ethicalsystems.org (Decision Making)

Ethics Unwrapped (Behavioral Ethics)

David Hume and Deriving an “Ought” from an “Is”

It seems easy to make an ethical argument against punching someone in the face. If you do it, you will physically harm the person. Therefore, you shouldn’t.

But the 18th-century philosopher David Hume famously argued that inferences of this type – in which what we ought morally to do (not punch someone) is derived from non-moral states of affairs (punching him will hurt him) – are logically flawed. You cannot, according to Hume, derive an “ought” from an “is,” at least not without a supporting “ought” premise. So, deciding that you ought not punch someone because it would harm him presupposes that causing harm is bad or immoral. This presupposition is good enough for most people.

But for Hume and those who subscribe to what is now commonly referred to as the “is-ought gap” or “Hume’s guillotine,” it is not enough.

Hume put the heads of preceding moral philosophers in his proverbial guillotine in Book III, Part I, Section I of his A Treatise of Human Nature. He wrote that every work of moral philosophy he had encountered proceeded from factual, non-moral observations about the world to moral conclusions – those that express what we ought or ought not do. The shift is imperceptible, but it is a significant blunder. “For as this ought, or ought not, expresses some new relation or affirmation, it is necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.”

The blunder, according to Hume, is one of logic. Factual statements are logically different from moral statements, so no factual statements can, by themselves, entail what people morally ought to do. The “ought” statement expresses a new relation, to use Hume’s phrase, that isn’t supported by its purely factual premises. So, a moral judgment that is arrived at by way of facts alone is suspect.

The new, unexplained relation between moral judgments and solely factual premises is characteristic of the broader distinction between facts and values. Moral judgments are value judgments – not factual ones.

In the same way, judgments about the tastiness of particular foods are value judgments. And positive and negative assessments of foods are not logically entailed by just the facts about the foods.

If a cheese enthusiast describes all the facts he knows about cheese – like that it’s made from milk, cultured bacteria, and rennet – that wouldn’t be enough to convince someone that it’s delicious. Nor would a cheese hater’s listing of the same facts prove cheese is disgusting. Both the cheese lover and the cheese hater make an evaluation that isn’t governed strictly by the logical relations between the facts.

Despite the logical gap between “is” and “ought” statements, and the broader distinction between facts and values, Hume didn’t think moral judgments are hogwash. He just thought they come from sentiments or feelings rather than logical deductions. In Book III, Part I, Section II of the Treatise, he wrote, “Morality … is more properly felt than judged of; though this feeling or sentiment is commonly so soft and gentle, that we are apt to confound it with an idea, according to our common custom of taking all things for the same, which have any near resemblance to each other.”

So, Hume would most likely agree that punching someone in the face is wrong. But he’d say an argument against it is unnecessary, even mistaken. People feel the wrongness. They feel that one ought not punch another in the face – just like a punched person feels the pain.

Do Moral Facts Exist?

Virtually all non-psychopaths think murder is morally wrong. But what makes it so? Is the wrongness an objective fact, one that would exist no matter how people felt about it? Or does the wrongness of murder reside only in people’s minds, with no footing in objective reality?

The question falls in the branch of moral philosophy called metaethics. Instead of pondering topics that come up during everyday moral debate – such as whether a given action is right or wrong – metaethics is more abstract. It is concerned with the nature of morality itself. What do people mean when they say something is right or wrong? Where do moral values come from? When people make moral judgments, are they talking about objective facts or are they merely expressing their preferences?

So, the objectivity of murder’s wrongness depends on whether objective moral facts exist at all. And not all moral philosophers agree that they do.

On one side are the moral realists, who say there are moral facts and that these facts make people’s moral judgments either true or false. If it is a fact that murder is wrong, then a statement that it’s wrong would be true in the same way that saying the earth revolves around the sun would be true.

Moral antirealists hold the opposite. They say there are no moral facts and that moral judgments can’t be true or false like other judgments can.

Some argue that when people express moral judgments, they aren’t even intending to make a true statement about an action. They are simply expressing their disapproval. The philosopher A.J. Ayer popularized this perspective in his 1936 book Language, Truth, and Logic. He argued that when someone says, “Stealing money is wrong,” the predicate “is wrong” adds no factual content to the statement. Rather, it’s as if the person said, “Stealing money!!” with a tone of voice expressing disapproval.

Because moral statements are simply expressions of condemnation, Ayer said, there is no way to resolve moral disputes. “For in saying that a certain type of action is right or wrong, I am not making any factual statement . . . I am merely expressing moral sentiments. And the man who is ostensibly contradicting me is merely expressing his moral sentiments. So that there is plainly no sense in asking which of us is right.”

Other antirealists are on the realists’ side in thinking that moral discourse makes sense only if it assumes there actually are moral facts. But these antirealists – called “error theorists” – say the assumption is false. People do judge actions to be right or wrong in light of supposed moral facts, but they are mistaken – no moral facts exist. Thinking and acting as if they do is an error.

[Graphic: moral realism]

“The strongest argument for antirealism,” says Geoffrey Sayre-McCord, a philosopher at the University of North Carolina at Chapel Hill, “is to point out the difficulty of making good sense of what the moral facts would be like and how we would go about learning them.”

Scientists can peer through microscopes to learn facts about amoebas. Journalists can observe press conferences and report what was said. Toddlers can tell you that the animal on the sofa is a brown dog. The job of the moral realist is to show that there are moral facts on par with these readily accepted types of non-moral facts.

Sayre-McCord, who considers himself a moral realist, says this is done best by thinking about what would have to be true for our moral thoughts to be true. This results in some sophisticated philosophical accounts, he says.

Justin McBrayer, a philosopher at Fort Lewis College, says the truth of moral claims can be evaluated by analogy to the ways non-moral truths are established. The same “epistemic norms” apply whether a moral claim or a non-moral claim is being defended. “Some arguments are good, and some are bad,” he says.

Most philosophers are moral realists, but there is a sizeable minority in the antirealist camp. In a 2009 survey of professional, PhD-level philosophers, 56% said they accepted or leaned toward moral realism, while 28% said they accepted or leaned toward moral anti-realism. Sixteen percent said they held some other position.

McBrayer and Sayre-McCord point out the lack of data on the general population’s views, but they both sense that the default position among non-philosophers is moral realism. People think, act, and speak as if there are objective moral facts. But since most have never considered the alternative, many have trouble when pressed to defend their views. “They have to stop and think about it,” McBrayer says.

Sayre-McCord says most people tend to back away from their commitment to moral realism when they’re challenged. “There is a tendency for people to be antirealists metaethically, but realists in practice.”

There is no doubt that how people think about morality affects their behavior, McBrayer says. Psychological research backs this up. In one study, researchers found that participants “primed” to think in realist terms were twice as likely to donate to a charity as participants primed to think in antirealist terms. In another study, researchers found that participants who read an antirealist argument were more likely to cheat in a raffle than those who read a realist argument.

Given these findings, even if murder’s wrongness is just a fiction, it’s hard to argue that it’s not a useful one.