Keep An Eye on the Rearview

“History does not repeat, but it does instruct.”

That’s Yale historian Timothy Snyder’s message in his new book On Tyranny: Twenty Lessons from the Twentieth Century. If we can learn anything from the past, it’s that democracies can collapse. It happened in Europe in the 1920s and 1930s, and then again after the Soviets began spreading authoritarian communism in the 1940s.

American democracy is no less vulnerable to tyrannical forces, Snyder warns. “Americans today are no wiser than the Europeans who saw democracy yield to fascism, Nazism, or communism in the twentieth century. Our one advantage is that we might learn from their experience.”

The short 126-page treatise – written in reaction to Donald Trump’s ascension to the Oval Office – looks at failed European democracies to highlight the “deep sources of tyranny” and offer ways to resist them:

Do not obey in advance. Defend institutions. Believe in truth. Contribute to good causes. Listen for dangerous words. Be as courageous as you can.

Inspired as it may be by Trumpian politics, On Tyranny is a useful guide for defending democratic governance in any political era. As Snyder notes, looking to past democracies’ failures is an American tradition. As they built our country, the Founding Fathers considered the ways ancient democracies and republics declined. “The good news is that we can draw upon more recent and relevant examples than ancient Greece and Rome,” Snyder writes. “The bad news is that the history of modern democracy is also one of decline and fall.”

The actionable steps for defending our democratic institutions make up the bulk of On Tyranny, but Snyder’s big critique is of Americans’ relationship to history. It’s not just that we don’t know any; it’s that we’re dangerously anti-historical.

In the epilogue, Snyder observes that until recently the politics of inevitability – “the sense that history could move in only one direction: toward liberal democracy” – dominated American political thinking. After communism in eastern Europe ended, and the destruction wrought by it, fascism, and Nazism faded from memory, the myth of the “end of history” took hold. This misguided belief in the march toward ever-greater progress made us vulnerable, Snyder writes, because it “opened the way for precisely the kinds of regimes we told ourselves could never return.”

Snyder isn’t reading between the lines here. Commentators have explicitly endorsed versions of the politics of inevitability. As the Cold War wound down, the political theorist Francis Fukuyama wrote the aptly titled article “The End of History?” (which was later expanded into a book, The End of History and the Last Man), arguing that history may have indeed reached its final chapter. In “The End of History?” he writes the following:

What we may be witnessing is not just the end of the Cold War, or the passing of a particular period of post-war history, but the end of history as such: that is, the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government.

This is a teleological conception of the world – it views history as unfolding according to some preordained end or purpose. Marxism had its own teleology, the inevitable rise of a socialist utopia, based on Karl Marx’s materialist rewriting of Hegelian dialectics. After Marxism took a geopolitical blow in the early 1990s, Fukuyama reclaimed Hegelianism from Marx and argued that mankind’s ideological evolution had reached its true end: Liberal democracy had triumphed, and no alternatives could possibly replace it.

But Snyder isn’t referring just to the prophecies of erudite theorists like Fukuyama. He’s pointing the finger at everyone. When the Soviet Union collapsed, he writes, Americans and other liberals “drew the wrong conclusion: Rather than rejecting teleologies, we imagined that our own story was true.” We fell into a “self-induced intellectual coma” that constrained our imaginations, that barred from consideration a future with anything but liberal democracy.

A more recent way of considering the past is the politics of eternity. It’s historically oriented, but it has a suspect relationship with historical facts, Snyder writes. It yearns for a nonexistent past; it exalts periods that were dreadful in reality. And it views everything through a lens of victimization. “Eternity politicians bring us the past as a vast misty courtyard of illegible monuments to national victimhood, all of them equally distant from the present, all of them equally accessible for manipulation. Every reference to the past seems to involve an attack by some external enemy upon the purity of the nation.”

National populists endorse the politics of eternity. They revere the era in which democracies seemed most threatened and their rivals, the Nazis and the Soviets, seemed unstoppable. Brexit advocates, the National Front in France, the leaders of Russia, Poland, and Hungary, and the current American president, Snyder points out, all want to return to some past epoch they imagine to have been great.

“In the politics of eternity,” Snyder writes, “the seduction by a mythicized past prevents us from thinking about possible futures.” And the emphasis on national victimhood dampens any urge to self-correct:

Since the nation is defined by its inherent virtue rather than by its future potential, politics becomes a discussion of good and evil rather than a discussion of possible solutions to real problems. Since the crisis is permanent, the sense of emergency is always present; planning for the future seems impossible or even disloyal. How can we even think of reform when the enemy is always at the gate?

If the politics of inevitability is like an intellectual coma, the politics of eternity is like hypnosis: We stare at the spinning vortex of cyclical myth until we fall into a trance – and then we do something shocking at someone else’s orders.

The risk of shifting from the politics of inevitability to the politics of eternity is real, Snyder writes. We’re in danger of passing “from a naïve and flawed sort of democratic republic to a confused and cynical sort of fascist oligarchy.” When the myth of inevitable progress is shattered, people will look for another way of making sense of the world, and the smoothest path is from inevitability to eternity. “If you once believed that everything turns out well in the end, you can be persuaded that nothing turns out well in the end.”

The only thing that stands in the way of these anti-historical orientations, Snyder says, is history itself. “To understand one moment is to see the possibility of being the cocreator of another. History permits us to be responsible: not for everything, but for something.”

 

You’re Probably Not as Ethical as You Think You Are

Bounded Ethicality and Ethical Fading

What if someone made you an offer that would benefit you personally but would require you to violate your ethical standards? What if you thought you could get away with a fraudulent act that would help you in your career?

Most of us think we would do the right thing. We tend to think of ourselves as honest and ethical people. And we tend to think that, when confronted with a morally dubious situation, we would stand up for our convictions and do the right thing.

But research in the field of behavioral ethics says otherwise. Contrary to our delusions of impenetrable virtue, we are no saints.

We’re all capable of acting unethically, and we often do so without even realizing it.

In their book Blind Spots: Why We Fail to Do What’s Right and What to Do About It, Max Bazerman and Ann Tenbrunsel highlight the unintentional, but predictable, cognitive processes that result in people acting unethically. They make no claims about what is or is not ethical. Rather, they explore the ethical “blind spots,” rooted in human psychology, that prevent people from acting according to their own ethical standards. The authors are business ethicists and they emphasize the organizational setting, but their insights certainly apply to ethical decision making more generally.

The two most important concepts they introduce in Blind Spots are “bounded ethicality” and “ethical fading.”

Bounded Ethicality is derived from the political scientist Herbert Simon’s theory of bounded rationality – the idea that when people make decisions, they aren’t perfectly rational benefit maximizers, as classical economics suggests. Instead of choosing a course of action that maximizes their benefit, people accept a less than optimal but still good enough solution. They “satisfice” (a combination of “satisfy” and “suffice”), to use Simon’s term.

They do this because they don’t have access to all the relevant information, and even if they did, their minds wouldn’t have the capacity to adequately process it all. Thus, human rationality is bounded by informational and cognitive constraints.

Similarly, bounded ethicality refers to the cognitive constraints that limit people’s ability to think and act ethically in certain situations. These constraints blind individuals to the moral implications of their decisions, and they allow them to act in ways that violate the ethical standards that they endorse upon deeper reflection.

So, just as people aren’t rational benefit maximizers, they’re not saintly moral maximizers either.

Check out this video about bounded ethicality from the Ethics Unwrapped program at the University of Texas at Austin:

Ethical Fading is a process that contributes to bounded ethicality. It happens when the ethical implications of a decision are unintentionally disregarded during the decision-making process. When ethical considerations are absent from the decision criteria, it’s easier for people to violate their ethical convictions because they don’t even realize they’re doing so.

For example, a CEO might frame something as just a “business decision” and decide based on what will lead to the highest profit margin. Obviously, the most profitable decision might not be the most ethically defensible one. It may endanger employees, harm the environment, or even be illegal. But these considerations probably won’t come to mind if he’s only looking at the bottom line. And if they’re absent from the decision process, he could make an ethically suspect decision without even realizing it.

Check out this video about ethical fading from the Ethics Unwrapped program at the University of Texas at Austin.

You’re Not a Saint, So What Should You Do?

Nudge yourself toward morality.

Bazerman and Tenbrunsel recommend preparing for decisions in advance. Consider the motivations that are likely to influence you at the time of the decision and develop proactive strategies to reduce their influence. Pre-commitment strategies are highly effective. If someone publicly pre-commits to an ethical action, he’s more likely to follow through than if he doesn’t. Likewise, pre-committing to an intended ethical decision and sharing it with an unbiased and ethical person makes someone more likely to make the ethical decision in the future.

During actual decision making, it is crucial to elevate your abstract ethical values to the forefront of the decision-making process. Bazerman and Tenbrunsel point out that “rather than thinking about the immediate payoff of an unethical choice, thinking about the values and principles that you believe should guide the decision may give the ‘should’ self a fighting chance.” One strategy for inducing this type of reflection, they say, is to think about your eulogy and what you’d want it to say about the values and principles you lived by.

There’s also the “mom litmus test.” When tempted by a potentially unethical choice, ask yourself whether you’d be comfortable telling your mom (or dad or anyone else you truly respect) about the decision. Imagining your mom’s reaction is likely to bring abstract principles to mind, they contend.

Yet another strategy for evoking ethical values is to change the structure of the decision. According to Bazerman and Tenbrunsel, people are more likely to make the ethical choice if they have the chance to evaluate more than one option at a time. In one study, “individuals who evaluated two options at a time – an improvement in air quality (the ‘should’ choice) and a commodity such as a printer (the ‘want’ choice) – were more likely to choose the option that maximized the public good.” When participants evaluated these options independently, however, they were more likely to choose the printer.

In another study, people decided between two political candidates, one of higher integrity and one who promised more jobs. The people who evaluated the candidates side by side were more likely to pick the higher integrity candidate. Those who evaluated them independently were more likely to pick the one who would provide the jobs.

Bazerman and Tenbrunsel say this evidence suggests that reformulating an ethical quandary into a choice between two options, the ethical one and the unethical one, is helpful because it highlights “the fact that by choosing the unethical action, you are not choosing the ethical action.”

What Implications Do Bounded Ethicality and Ethical Fading Have for Moral Responsibility?

Bazerman and Tenbrunsel don’t address this question directly. But the notion that our ethical decision making is, in some circumstances, bounded by psychological and situational constraints – influences we’re not consciously aware of – seems to be in tension with the idea that we are fully morally responsible for all our actions.

The profit-maximizing CEO, for example, might be seen by his friends and peers as virtuous, caring, and thoughtful. He might care about his community and the environment, and he might genuinely believe that it’s unethical to endanger them. Still, he might unintentionally disregard the moral implications of illegally dumping toxic waste in the town river, harming the environment and putting citizens’ health at risk.

This would be unethical, for sure, but how blameworthy is he if he had yet to read Blind Spots and instead relied on his default psychology to make the decision? If ethical blind spots are constitutive elements of the human psyche, are the unethical actions caused by those blind spots as blameworthy as those that aren’t?

Either way, we can’t be certain that we’d have acted any differently in the same circumstances.

We’ll all fail the saint test at some point, but that doesn’t make us devils.

Learn More About Behavioral Ethics

Blind Spots: Why We Fail to Do What’s Right and What to Do About It

Ethicalsystems.org (Decision Making)

Ethics Unwrapped (Behavioral Ethics)

David Hume and Deriving an “Ought” from an “Is”

It seems easy to make an ethical argument against punching someone in the face. If you do it, you will physically harm the person. Therefore, you shouldn’t.

But the 18th-century philosopher David Hume famously argued that inferences of this type – in which what we ought morally to do (not punch someone) is derived from non-moral states of affairs (punching him will hurt him) – are logically flawed. You cannot, according to Hume, derive an “ought” from an “is,” at least not without a supporting “ought” premise. So, deciding that you ought not punch someone because it would harm him presupposes that causing harm is bad or immoral. This presupposition is good enough for most people.

But for Hume and those who subscribe to what is now commonly referred to as the “is-ought gap” or “Hume’s guillotine,” it is not enough.

Hume put the heads of preceding moral philosophers in his proverbial guillotine in Book III, Part I, Section I of his A Treatise of Human Nature. He wrote that every work of moral philosophy he had encountered proceeded from factual, non-moral observations about the world to moral conclusions – those that express what we ought or ought not do. The shift is imperceptible, but it is a significant blunder. “For as this ought, or ought not, expresses some new relation or affirmation, it is necessary that it should be observed and explained; and at the same time that a reason should be given, for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.”

The blunder, according to Hume, is one of logic. Factual statements are logically different from moral statements, so no factual statements can, by themselves, entail what people morally ought to do. The “ought” statement expresses a new relation, to use Hume’s phrase, that isn’t supported by its purely factual premises. So, a moral judgment that is arrived at by way of facts alone is suspect.

The new, unexplained relation between moral judgments and solely factual premises is characteristic of the broader distinction between facts and values. Moral judgments are value judgments – not factual ones.

In the same way, judgments about the tastiness of particular foods are value judgments. And positive and negative assessments of foods are not logically entailed by just the facts about the foods.

If a cheese enthusiast describes all the facts he knows about cheese – like that it’s made from milk, cultured bacteria, and rennet – that wouldn’t be enough to convince someone that it’s delicious. Nor would a cheese hater’s listing of all the same facts prove cheese is disgusting. Both the cheese lover and the cheese hater make an evaluation that isn’t governed strictly by the logical relations between the facts.

Despite the logical gap between “is” and “ought” statements, and the broader distinction between facts and values, Hume didn’t think moral judgments are hogwash. He just thought they come from sentiments or feelings rather than logical deductions. In Book III, Part I, Section II of the Treatise, he wrote, “Morality … is more properly felt than judged of; though this feeling or sentiment is commonly so soft and gentle, that we are apt to confound it with an idea, according to our common custom of taking all things for the same, which have any near resemblance to each other.”

So, Hume would most likely agree that punching someone in the face is wrong. But he’d say an argument against it is unnecessary, even mistaken. People feel the wrongness. They feel that one ought not punch another in the face – just like a punched person feels the pain.

Do Moral Facts Exist?

Virtually all non-psychopaths think murder is morally wrong. But what makes it so? Is the wrongness an objective fact, one that would exist no matter how people felt about it? Or does the wrongness of murder reside only in people’s minds, with no footing in objective reality?

The question falls in the branch of moral philosophy called metaethics. Instead of pondering topics that come up during everyday moral debate – such as whether a given action is right or wrong – metaethics is more abstract. It is concerned with the nature of morality itself. What do people mean when they say something is right or wrong? Where do moral values come from? When people make moral judgments, are they talking about objective facts or are they merely expressing their preferences?

So, the objectivity of murder’s wrongness depends on whether objective moral facts exist at all. And not all moral philosophers agree that they do.

On one side are the moral realists, who say there are moral facts and that these facts make people’s moral judgments either true or false. If it is a fact that murder is wrong, then a statement that it’s wrong would be true in the same way that saying the earth revolves around the sun would be true.

Moral antirealists hold the opposite. They say there are no moral facts and that moral judgments can’t be true or false like other judgments can.

Some argue that when people express moral judgments, they aren’t even intending to make a true statement about an action. They are simply expressing their disapproval. The philosopher A.J. Ayer popularized this perspective in his 1936 book Language, Truth and Logic. He argued that when someone says, “Stealing money is wrong,” the predicate “is wrong” adds no factual content to the statement. Rather, it’s as if the person said, “Stealing money!!” with a tone of voice expressing disapproval.

Because moral statements are simply expressions of condemnation, Ayer said, there is no way to resolve moral disputes. “For in saying that a certain type of action is right or wrong, I am not making any factual statement . . . I am merely expressing moral sentiments. And the man who is ostensibly contradicting me is merely expressing his moral sentiments. So that there is plainly no sense in asking which of us is right.”

Other antirealists are on the realists’ side in thinking that moral discourse makes sense only if it assumes there actually are moral facts. But these antirealists – called “error theorists” – say the assumption is false. People do judge actions to be right or wrong in light of supposed moral facts, but they are mistaken – no moral facts exist. Thinking and acting as if they do is an error.


“The strongest argument for antirealism,” says Geoffrey Sayre-McCord, a philosopher at the University of North Carolina at Chapel Hill, “is to point out the difficulty of making good sense of what the moral facts would be like and how we would go about learning them.”

Scientists can peer through microscopes to learn facts about amoebas. Journalists can observe press conferences and report what was said. Toddlers can tell you that the animal on the sofa is a brown dog. The job of the moral realist is to show that there are moral facts on par with these readily accepted types of non-moral facts.

Sayre-McCord, who considers himself a moral realist, says this is done best by thinking about what would have to be true for our moral thoughts to be true. This results in some sophisticated philosophical accounts, he says.

Justin McBrayer, a philosopher at Fort Lewis College, says the truth of moral claims can be evaluated by analogy to the ways non-moral truths are established. The same “epistemic norms” apply whether a moral claim or a non-moral claim is being defended. “Some arguments are good, and some are bad,” he says.

Most philosophers are moral realists, but there is a sizeable minority in the antirealist camp. In a 2009 survey of professional, PhD-level philosophers, 56% said they accepted or leaned toward moral realism, while 28% said they accepted or leaned toward moral antirealism. Sixteen percent said they held some other position.

McBrayer and Sayre-McCord point out the lack of data on the general population’s views, but they both sense that the default position among non-philosophers is moral realism. People think, act, and speak as if there are objective moral facts. But since most have never considered the alternative, many have trouble when pressed to defend their views. “They have to stop and think about it,” McBrayer says.

Sayre-McCord says most people tend to back away from their commitment to moral realism when they’re challenged. “There is a tendency for people to be antirealists metaethically, but realists in practice.”

There is no doubt that how people think about morality affects their behavior, McBrayer says. Psychological research backs this up. In one study, researchers found that participants “primed” to think in realist terms were twice as likely to donate to a charity as participants primed to think in antirealist terms. In another study, researchers found that participants who read an antirealist argument were more likely to cheat in a raffle than those who read a realist argument.

Given these findings, even if murder’s wrongness is just a fiction, it’s hard to argue that it’s not a useful one.