Political Life Contra Salvation

The seclusion of the church has dulled both the political acumen of the oppressed and one of their sharpest weapons. There was a time when churches were the harbingers of social justice. Now they often prefer tax-exempt status over empowering their congregants. To be clear, there was a time when churches fought loudly for political influence in the black community. Leaders like MLK and Malcolm X emerged from very politically active churches (*the Nation of Islam is a quasi-church in my mind, and it now bears some strange resemblances to prosperity gospel groups). Their charm was inextricably linked with their ability to lift up others with them. Now, we get souls who laud themselves and their money. Prosperity gospel, the belief that prayer brings wealth and that God chooses who should be rich, generates an inwardness that was not a mainstream approach to religion until the rise of televangelism. Now it haunts the religious community around every corner, to the point that even yoga isn’t safe. Bikram yoga is an excruciating practice in which you perform yogic postures in a room heated above 100 degrees Fahrenheit. It spread like wildfire, and though many attend for the exercise, it also carries a religious engagement similar to prayer. Its founder is now on the lam from the law over sexual assault allegations and tax evasion. He believed his own hype and espoused some outlandish words and beliefs.

Joel Osteen, the sole weasel to escape from Roger Rabbit’s world, has established an elaborate series of machinations to funnel money into his own pockets. His narrow skull filled with gleaming teeth can be seen at Wal-Marts across the US. Preaching an independence that amounts to an inward, isolationist approach is his “shtick.” Essentially: your problems are just God testing you; go further inward and you will work your own way out. Oh, and most importantly, give money to me, because God wants me to thrive. The gall. The shamelessness. Yet people give. Osteen has a net worth of about $40–60 million. He claims that there is no need to be ashamed of being rich. I guess when you are that rich, you can swallow the pride. This is not someone who brings good into the world, certainly not to the public at large. And, easy as it would be to criticize so many other elements, including his time working under Judge Doom, we must admonish him for looking inward instead of engaging the injustices of this world. He was not even willing to open his vast megachurch in Houston to help those suffering immediately after Hurricane Harvey ravaged the city. Yet the power of shame can only sway Joel so much. He opened the doors after the internet barraged him with criticism, but he persisted in making excuses.

If I were to say what these congregations are good for, it would seemingly be only in their ability to further fleece the government for tax purposes.  Perhaps the inward turn from the churches at large was because they were simply afraid to lose their tax exempt status. If that is the case, then money has triumphed over salvation.

 

 

The Knobe Effect and the Intentionality of Side Effects

Imagine that the chairman of a company decides to implement an initiative that will reap profits for his company but will also have a particular side effect. The chairman knows the side effect will occur, but he couldn’t care less. Making money is his only reason for making his decision.

The chairman goes forward with the initiative, his company makes a lot of money, and the side effect occurs as anticipated.

Did the chairman bring about the side effect intentionally? Don’t answer yet.

In a famous 2003 experiment, the philosopher Joshua Knobe showed that people’s judgments about whether a side effect is intentional or not depend on what the side effect is. He randomly assigned subjects to read one of two scenarios, which were the same as the one above, except the actual side effects of the chairman’s decision were presented. In the first scenario, the initiative harms the environment. In the second, it helps the environment.

Of the subjects who read the scenario in which the initiative harmed the environment, 82% said the chairman intentionally brought about the harm. Subjects made the opposite judgment in the other condition. Asked whether the chairman intentionally helped the environment by undertaking the initiative, 77% said that he didn’t.

So, there is an asymmetry in the way we ascribe intentionality to side effects – now known as the “Knobe Effect” or the “Side Effect Effect” – and this study suggests that it stems from our moral evaluations of those side effects. As Knobe concluded in the 2003 study, people “seem considerably more willing to say that a side-effect was brought about intentionally when they regard that side-effect as bad than when they regard it as good.”

But subsequent research by Knobe and others has shown that it’s not that simple. Sometimes people judge good side effects, such as when an action violates an unjust Nazi law, as intentional and bad side effects, like complying with the Nazi law, as not intentional. Richard Holton argues that it is whether a norm is knowingly violated, and not necessarily whether a side effect is morally good or bad, that influences people’s judgments of intentionality.

You probably would’ve struggled to say whether the chairman intended the “particular side effect” because there was nothing to guide your intuitions. If there were, though, you likely would’ve exemplified the Knobe Effect.

The Liberty Bell’s Crack: Isaiah Berlin and Two Concepts of Liberty

Which Liberty?

It’s hard to find someone who is against liberty, but it’s easy to find disagreement about what the term “liberty” means.

Imagine a conspiracy theorist who is convinced that government agents are blasting mind-controlling waves into his apartment. To keep the government out of his head, he lines his walls, floor, and ceiling with aluminum foil. To be safe, he also lines his baseball cap with the foil and watches television from inside a foil-lined refrigerator box placed strategically in the center of his living room, as far away from the windows as possible. Is this man free?

In one sense, yes. No one is stopping him from protecting himself from non-existent waves by lining his home and himself with kitchen packaging. In another sense, no. The man’s behavior is exceptionally irrational. He is so divorced from reality that he can’t recognize and act in his true interests.

The man’s story highlights two dominant notions of liberty (or freedom, a term which is normally used interchangeably with liberty) that have occupied philosophers and others for centuries: negative liberty and positive liberty. He enjoys negative liberty because there is no external interference with his actions, but he lacks positive liberty because he lacks rational control over his own desires and actions.

The political philosopher Isaiah Berlin drew perhaps the most explicit distinction between positive and negative liberty in his famous essay “Two Concepts of Liberty.” But Berlin didn’t merely articulate the distinction between these two conceptions. He exposed a tension between them, arguing that positive liberty often perverts the concept of liberty so much that it doesn’t resemble liberty at all.

Negative Liberty

Negative liberty, according to Berlin, is “simply the area within which a man can act unobstructed by others.” The central obstacle to negative liberty is coercion, and a measurement of someone’s negative liberty is the degree to which he or she is free from coercion: “If I am prevented by others from doing what I could otherwise do, I am to that degree unfree; and if this area is contracted by other men beyond a certain minimum requirement, I can be described as being coerced, or, it may be, enslaved.”

Berlin is clear, however, that for interference with an individual’s activities to be coercion, the source of the interference must be human. He maintains that “[people] lack political liberty or freedom only if [they] are prevented from attaining a goal by human beings” and that “mere incapacity to obtain a goal is not lack of political freedom.”

Because negative liberty requires only that someone be free from coercion by other humans, it relies on a minimalist conception of human agency. There is no requirement that he possess certain internal capacities or values for him to be entitled to non-interference by others. Negative liberty presumes an entitlement to non-interference, and it imposes an obligation on everyone to refrain from obstructing others’ actions.

It is negative liberty, and the concomitant notion that one’s right to non-interference doesn’t depend upon the possession of other capacities, goals, or values, that the liberal philosopher John Stuart Mill defends in his book On Liberty:

His own good, either physical or moral, is not a sufficient warrant. He cannot rightfully be compelled to do or forbear because it will be better for him to do so, because it will make him happier, because, in the opinions of others, to do so would be wise, or even right.

Positive Liberty

Liberty in the positive sense, according to Berlin, is the freedom accompanied by being one’s own master. It represents freedom from “nature” or one’s “own ‘unbridled’ passions.” It involves, among other things, the “higher,” rational self achieving mastery over the lower self, the self that is dominated by irrational desires and impulses.

This idea of an individual having two selves, a rational, ideal self and an empirical self, is fundamental to positive liberty. Regarding the two selves that are inherent to this notion of liberty, Berlin says the following:

This dominant self is then variously identified with reason, with my ‘higher nature’, with the self which calculates and aims at what will satisfy it in the long run, with my ‘real’, or ‘ideal’, or ‘autonomous’ self, or with my self ‘at its best’; which is then contrasted with irrational impulse, uncontrolled desires, my ‘lower’ nature, the pursuit of immediate pleasures, my ‘empirical’ or ‘heteronomous’ self, swept by every gust of desire and passion, needing to be rigidly disciplined if it is ever to rise to the full height of its ‘real’ nature.

As you can see, fundamental to this version of liberty is a higher form of agency than that required by negative liberty. Positive liberty requires certain essential capacities or conditions, which may vary according to the particular form of positive liberty being endorsed, that are, by definition, required for an individual to be considered free. The common assumption underlying this line of thought, according to Berlin, is “that the rational ends of our ‘true’ natures must coincide, or be made to coincide, however violently our poor, ignorant, desire-ridden, passionate, empirical selves may cry out against this process. Freedom is not freedom to do what is irrational, or stupid, or wrong.”

The Relationship Between Positive and Negative Liberty

The danger of the notion of positive liberty, according to Berlin, is that it divides the individual into two selves: the true, or rational, self and the empirical self, which is subject to the irrational passions and desires that need to be controlled or contained. Once this metaphorical bifurcation of the self has occurred, he argues, the door is open to the infringement upon people’s empirical wishes and desires in the name of their ‘true’ selves – or their own freedom:

What, at most, this entails is that they would not resist me if they were rational and as wise as I and understood their interests as I do. But I may go on to claim a good deal more than this. I may declare that they are actually aiming at what in their benighted state they consciously resist, because there exists within them an occult entity – their latent rational will, or their true purpose – and this entity, although it is belied by all that they overtly feel and do and say, is their ‘real’ self, of which the poor empirical self in space and time may know nothing or little and that this inner spirit is the only self that deserves to have its wishes taken into account. Once I take this view, I am in a position to ignore the actual wishes of men or societies, to bully, oppress, torture them in the name, and on behalf, of their real selves, in the secure knowledge that whatever is the true goal of man (happiness, performance of duty, wisdom, a just society, self-fulfillment) must be identical with his freedom – the free choice of his ‘true,’ albeit often submerged and inarticulate, self.

In Isaiah Berlin: Liberty and Pluralism, George Crowder calls Berlin’s argument the “inversion thesis” because the idea is that the notion of positive liberty allows the concept of liberty to be inverted into its very opposite. Coercion can be justified under the rubric of positive liberty because it is purported to be more consistent with liberty than the individual’s actual wishes. Crowder points out that there is a strong undercurrent in Berlin’s thesis that the logic of positive liberty ought to make us suspicious because the idea itself exposes it to the potential for authoritarian corruption.

Berlin is not wholeheartedly against coercion for a person’s own good, however. He’s more worried about the imposition of certain philosophical ideals on someone in the name of his own freedom. The positively unfree person, with his “poor earthly body and foolish mind” might expressly reject what he is being coerced to do, but since his empirical body is not truly free, he is not really being coerced. Instead, his higher self has willed it, “not indeed consciously, not as he seems in everyday life, but in his role as a rational self which his empirical self may not know.”

What Berlin is criticizing is the view that the tension between someone’s expressed desires and a specific conception of his own good can be relieved by opining that coercion is permissible because, in the coerced person’s current empirical configuration, he is not free anyway. This is a distortion of what freedom is, according to Berlin. “Enough manipulation with the definition of man,” he says, “and freedom can be made to mean whatever the manipulator wishes.” As John Christman put it, for Berlin, “to label as ‘freedom’ the mastery of the ‘lower’ desires by the higher capacities of morality and virtue, not to mention by the supposedly superior wisdom of a general will, marked a treacherous tilt toward the justification of centralized power under the guise of moral superiority.”

Berlin’s ideas are not without their critics. In two follow-up posts, I discuss Gerald MacCallum’s view that the distinction between positive and negative liberty can be collapsed and John Christman’s argument that positive liberty doesn’t necessarily open the door for authoritarianism.

This post was adapted from my bioethics master’s thesis: “The Moral Significance of Non-Autonomous Refusals of Medical Treatment.”

G.E. Moore and the Naturalistic Fallacy

Justifying Moral Values

Think about your most firmly held moral values. Now imagine that you have to justify them to the most inquisitive five-year-old conceivable.

If you believe, for example, that causing harm is wrong, why? Or if you think that maximizing happiness or pleasure is the right thing to do, what do you suppose makes it right? What makes fairness, respect, generosity, and truthfulness good? What does good (or bad or right or wrong) even mean?

You might see these questions as just the naïve ramblings of a moral novice, but can you answer them? Can they even be answered?

It’s hard to justify our moral values at the most basic level, and criticism of attempts to do so is not new.

In a previous post, I discussed David Hume’s view that what people ought morally to do can’t be inferred from factual, non-moral observations about the world. Hume’s view suggests that the foundations of our moral judgments rest on something other than logical deductions from non-moral states of affairs; for Hume, moral sentiments, rather than rationality, are what guide our moral judgments and actions.

The philosopher G.E. Moore, writing in the early 20th century, advances a similar – though not identical – criticism of the grounding of moral claims in non-moral observations, which Moore refers to as natural properties. In his book Principia Ethica, first published in 1903, Moore focuses on the nature of the fundamental moral concept good and how attempts to define it are confused. By good, Moore isn’t talking about whether anything in particular should be considered good but how the concept itself is to be defined.

The Naturalistic Fallacy and Defining Good

So, how should good be defined?

There is no shortage of possible definitions. Good is naturalness. Good is normalness. Good is virtuousness. Good is happiness. Good is pleasure. Good is fulfillment of duty.

Every single one of these is wrong, according to Moore, because good can’t be defined. And defining it in terms of natural properties, such as pleasure or happiness, is to commit what Moore calls the “naturalistic fallacy.”

It’s important to note that Moore isn’t saying that things that are pleasurable or natural or normal aren’t good or can’t be good. Many of them are good. He’s just saying that the property of goodness can’t be the same thing as pleasure, naturalness, normalness, or any other natural property, so any attempt to define it as such (e.g., “pleasure alone is good”) is fallacious.

Trying to define good is like trying to define yellow. You can’t thoroughly explain it to anyone who doesn’t already know what it is. You can describe its physical equivalent, Moore says. “But a moment’s reflection is sufficient to shew that those light-vibrations are not themselves what we mean by yellow. They are not what we perceive … The most we are entitled to say of those vibrations is that they are what corresponds in space to the yellow which we actually perceive.”

Now think about the concept good. How do you explain it to someone who doesn’t already know what it means? You can’t, according to Moore, because good, like yellow, is such a simple notion that it can’t be defined without referencing itself. Other things such as pleasure can contain the property good, but good can’t be reduced to pleasure or some other natural property. Good is simply good.

Moore contrasts good and yellow – simple concepts that can’t be broken down any further – with complex concepts, which can be. You can define a horse, for example, by listing its many different properties and qualities – it has four legs, hooved feet, and so on. But once you reduce it to its simplest terms, Moore says, those simple terms cannot be explained to anyone who doesn’t already know them. “They are simply something which you think of or perceive, and to any one who cannot think of or perceive them, you can never, by any definition, make their nature known.”

Like the simplest properties of a horse, good can’t be reduced to anything else, and trying to do so is a mistake. (Although Moore focuses on good, the basic idea – that moral properties can’t be reduced to natural properties – appears to apply to other moral properties, such as right, but Moore believed that the good was the ultimate end of ethical inquiry).

The Open Question Argument

To support his idea that good can’t merely be equated with natural properties, Moore proposes a thought experiment which has come to be known as the “open question argument”:

The hypothesis that disagreement about the meaning of good is disagreement with regard to the correct analysis of a given whole, may be most plainly seen to be incorrect by consideration of the fact that, whatever definition may be offered, it may always be asked, with significance, of the complex so defined, whether it is itself good.

For example, you may say that good means simply to promote the most overall happiness, and you may apply that definition to a particular question, such as whether it’s good to tax rich people at a much higher rate than poor people. And if you conclude that is indeed good, then you are thinking that it is one of those things that promotes the most overall happiness.

As plausible as your account may seem to you, the question “Is it good to promote the most overall happiness?” is still just as intelligible as the question “Is it good to tax rich people at a much higher rate?” It’s an open question. It can’t be settled in the same way that other definitional questions (e.g., “Is a bachelor married?”) can. The notion that promoting the most overall happiness is alone what good means can always be doubted, “and the mere fact that we understand very well what is meant by doubting it, shews clearly that we have two different notions before our minds.”

Moore applies a similar line of reasoning to the idea that good is a meaningless concept that merely stands in for natural properties. For instance, if whatever is called good seems to always be pleasant, you might suppose that good and pleasant are the same thing. You might think that the statement “Pleasure is the good” isn’t referring to two distinct things, pleasure and goodness, but only to one – pleasure. Moore points out the flaw in this: “But whoever will attentively consider with himself what is actually before his mind when he asks the question ‘Is pleasure (or whatever it may be) after all good?’ can easily satisfy himself that he is not merely wondering whether pleasure is pleasant.”

Everyone, Moore says, understands that the question “Is this good?” can be distinguished from questions about whether things are pleasurable, or desired, or whatever else has been proposed as the definition of good. So, it only makes sense that good is a distinct concept and not merely one of these natural properties.

How to Deal with the Five-Year-Old

If Moore is right that our moral concepts are real but can’t be reduced to natural, empirically verifiable things in the world, then it would, in fact, be naïve of the probing five-year-old to expect that moral values can be explained in the same way that a horse can be explained to be a horse. But that doesn’t necessarily mean that our moral notions are baseless. Moore’s view is that our morality is intuitive and that moral truth is self-evident. So, when the five-year-old metaethicist asks why something is good, the answer is that it just is. And when he asks what makes it so, the answer is the property goodness. It might seem like these answers are dismissive, but remember Moore wrote a whole book working these same answers out.

Of course, some philosophers have given alternative interpretations of Moore’s thesis that morality can’t be demonstrated in natural terms. Rather than claiming that moral truths exist and are self-evident, as Moore does, these skeptics, called moral anti-realists, take the impossibility of inferring moral truths from non-moral truths as evidence that there is no objective morality. Try explaining this to a five-year-old: Morality is a sham!

If you want to give the kid what he’s likely asking for, you can try to justify your values according to readily available natural concepts. You can take the route of the moral naturalists and deny that the naturalistic fallacy is even a fallacy. You can embrace their view that moral truths exist and are natural facts just like anything else discoverable by science.

But first you have to get past Moore and Hume.

 

 

The “Forgotten” Bioethicist

In the bioethics field, praise is heaped upon Beauchamp and Childress (B and C) for their guiding text, “Principles of Biomedical Ethics.” They assuredly reap rewards by adding revisions to this book – however minor.

Before I furthered my bioethics training, I encountered another ethicist, W.D. Ross. I thoroughly enjoyed his book, “The Right and the Good,” for its insights and its attempt to generate a complete ethical theory. It was the most robust work I had encountered through my readings.

Ross was a Scottish philosopher who died in 1971. He is well regarded in academic institutions, which makes it sensible that B and C encountered his seminal text. He attempted to construct a sound ethical theory: one that could assist with almost every ethical problem encountered.

“Fidelity; reparation; gratitude; non-maleficence; justice; beneficence; and self-improvement”

These are the derived “principles” that Ross used to create his ethical guidance.

B and C clearly looked to these principles for guidance, even directly pulling “justice” and “beneficence” from him for their book.

He even addresses the critical element called “moral residue.” This is an instance in which the principles have taken you to their limit and you leave having done the best you can. The action you took was necessary, but it leaves you dissatisfied. Ross openly admits that life functions like this.

Leaning too heavily on the “perfect” action would cripple many people’s decision making. Prescriptive ethics can be pleasant and neat at times. Yet there is an absence in it too. Are they “true” moral dilemmas if they can be resolved with a quick formula? Life can certainly function as a series of simple events. When it gets difficult, that is when the more robust systems work. That is where true ethics lives.

The most significant feature is the limit and humility found in Ross’s text. He acknowledges our innate capacity to fail morally. He doesn’t seek to coddle our damaged egos after we fail. He acknowledges life’s messy and imprecise nature; compared to the approach of B and C, which informs bioethics and the entire field of human subject research ethics, his work clarifies the difficulty of doing our moral duties, even acknowledging that we may struggle morally and that ethical resolution may never come.

I am convinced that these ethical complications make for our most stimulating works of fiction since they don’t provide a simple solution. A classic example is “Sophie’s Choice,” in which Sophie is forced to choose which of her two children will be spared; the other is fated to certain death. She chooses but never recovers from knowing the choice she made, her conscience torn asunder for the remainder of her life.

The only real takeaway is that ethics shines best when it digs into the nuance. When it says: we can’t provide a panacea. You will struggle with your choices… and that is expected.

Law and Morality

It’s hard to pin down how exactly the law relates to morality.

Some people believe that acting ethically means simply following the law. If it’s legal, then it’s ethical, they say. Of course, a moment’s reflection reveals this view to be preposterous. Lying and breaking promises are legal in many contexts, but they’re nearly universally regarded as unethical. Paying workers only the legal minimum wage is legal, but failing to pay a living wage is seen by some as immoral. Abortion has been legal since the early 1970s, but many people still think it’s immoral. And discrimination based on race used to be legal, but laws outlawing it were passed because it was deemed immoral.

Law and morality do not always coincide. Sometimes the legal action isn’t the ethical one. People who realize this might have a counter-mantra: Just because it’s legal doesn’t mean that it’s ethical. This is a more sophisticated perspective than the one that simply conflates law and ethics.

According to this perspective, acting legally is not necessarily acting ethically, but it is necessary for acting ethically. The law is the minimum requirement, but morality may require people to go above and beyond their basic legal obligations. From this perspective, paying employees at least the minimum wage is necessary for acting ethically, but morality may require that they be paid enough to support themselves and their families. Similarly, non-discrimination may be the minimum requirement, but one might think that actively recruiting and integrating minorities into the workplace is a moral imperative.

The notion that legal behavior is a necessary condition for ethical behavior seems to be a good general rule. Most illegal acts are indeed unethical. But what about the old laws prohibiting the education of slaves? Or anti-miscegenation laws criminalizing interracial marriage, which were on the books in some US states until the 1960s? It’s hard to argue that people who broke these laws necessarily acted unethically.

You could say that these laws are themselves immoral and that this places them in a different category than generally accepted laws. This is probably true. These legal obligations do indeed create dubious moral commitments. But how can you say that the moral commitments are dubious if law and morality are intertwined to the extent that one can’t act ethically without acting legally?

And aren’t there some conditions under which breaking a generally accepted law might be illegal but still the right thing to do?

What about breaking a generally accepted law to save a life? What if a man, after exhausting all other options, stole a cancer treatment to save his wife’s life? The legal bases for property rights and the prohibition against theft are generally accepted, and in most other contexts, the man would be condemned as both unethical and a criminal. But stealing the treatment to save his wife’s life seems, at the very least, morally acceptable. This type of situation suggests a counter-mantra to those who believe legality is a prerequisite for ethicality: Just because it’s illegal doesn’t mean it’s unethical.

This counter-mantra doesn’t suggest that the law is irrelevant to ethics. Most of the time, it’s completely relevant. Good laws are, generally speaking, or perhaps ideally speaking, a codification of our morality.

But the connection between law and morality is complex, and there may be no general rule that captures how the two are related.

Sometimes, actions that are perfectly legal are nonetheless unethical. Other times, morality requires that we not only follow the law but that we go above and beyond our positive legal obligations. Yet, there are also those times when breaking the law is at least morally permissible.

There are also cases in which we are morally obligated to follow immoral laws, such as when defiance would be considerably more harmful than compliance. We live in a pluralistic society where laws are created democratically, so we can’t just flout all the laws we think are immoral – morality is hardly ever that black and white anyway. And respect for the rule of law is necessary for the stability of our society, so there should be a pretty high threshold for determining that breaking a law is morally obligatory.

If there is a mantra that adequately describes the relationship between law and morality, it goes something like this: It depends on the circumstances. 


Mother Night and the Call for Sincerity

Howard Campbell is a fictional character in the Kurt Vonnegut novel, “Mother Night.” The text has its protagonist appear to be a reprehensible soul: an American turned Nazi propagandist who, we later find, is working as a double agent. His charisma-laden speeches are used to inspire das Volk and to provide hidden messages to the American forces. When the story begins, Howard Campbell is in an Israeli holding cell, awaiting trial for his crimes as a Nazi. We learn the truth through his story.

There are so many great Kurt Vonnegut books. Why does this book mean so much to me? One quote resonates: “We are what we pretend to be, so we must be careful about what we pretend to be.”

Even when I was reading the book, I drew parallels to the demagoguery on cable news. Almost a decade later, that quote has come home to roost. This story, much like Vonnegut’s, carries some humor though.

Infowars grew in popularity in the early 2000s, pushing conspiracy theories of all sorts. At its helm was a man named Alex Jones. He appeared to be a very zany personality who professed insane amounts of virility along with a deep understanding of the forces managing the world.

He is a character straight out of a wrestling promo. Until recently, it was hard to glean anything else about this foolish being. His machismo runs rampant in the supplements he endorses: brandishing a bare torso, followed by explanations of how to boost one’s manliness, one’s muscles, and, most importantly, one’s attraction from the opposite sex.

His abrasive nature has helped push some joyful food for conspiracists: 9/11 was an inside job, Justice Antonin Scalia was murdered, and the Newtown shooting was a false flag. The last one has enabled constant harassment of the victims’ families, an immeasurable agony inflicted by a talking head. Emboldened souls even take to calling and accusing the parents of faking everything – even their mourning.

The strongest of men. The most insightful person in media. Dodging any and all provocateurs.

Then he got divorced.

We bore witness to the travesty of his life and the struggle to contain his act. While in the courtroom, he was unable to answer simple questions and blamed a bowl of chili, the southwestern comfort food. His wife accused him of general foolishness that could only be permitted by a caricature of a man. Twelve years of marriage to someone as braggadocious as Alex Jones would garner some “interesting” tales. I am certain more will come.

One of the most prudent elements came from Jones’ lawyer, who conceded that Alex Jones’ radio and video personality was just that: a personality. What does that actually mean for our hero? He immediately released a video saying the lawyer was wrong! That it was just kabuki theater and to disregard the lawyer’s statement.

Should we get deep insights into the woes of personalities? Yes. Absolutely. The unchecked power granted to charlatans can only be mitigated by the light of their flaws. While Jones may be a family man when he is not behind the microphone, his listeners/viewers can never be so certain. They can picture him as the champion of their ideology, but he can see it as just something he does.

Vonnegut’s character seems to struggle. He loses his love, his freedom, and his integrity. Unlike with a fictional character, I don’t have the privilege of knowing what resides in the hearts of men. So, I am not sure which grew first and who is real anymore: Alex Jones or “Alex Jones.” Regardless, it doesn’t seem that he was sufficiently careful in his pretending.

 

Keep An Eye on the Rearview

“History does not repeat, but it does instruct.”

That’s Yale historian Timothy Snyder’s message in his new book On Tyranny: Twenty Lessons from the Twentieth Century. If we can learn anything from the past, it’s that democracies can collapse. It happened in Europe in the 1920s and 1930s, and then again after the Soviets began spreading authoritarian communism in the 1940s.

American democracy is no less vulnerable to tyrannical forces, Snyder warns. “Americans today are no wiser than the Europeans who saw democracy yield to fascism, Nazism, or communism in the twentieth century. Our one advantage is that we might learn from their experience.”

The short 126-page treatise – written in reaction to Donald Trump’s ascension to the Oval Office – looks at failed European democracies to highlight the “deep sources of tyranny” and offer ways to resist them:

Do not obey in advance. Defend institutions. Believe in truth. Contribute to good causes. Listen for dangerous words. Be as courageous as you can.

Inspired as it may be by Trumpian politics, On Tyranny is a useful guide for defending democratic governance in any political era. As Snyder notes, looking to past democracies’ failures is an American tradition. As they built our country, the Founding Fathers considered the ways ancient democracies and republics declined. “The good news is that we can draw upon more recent and relevant examples than ancient Greece and Rome,” Snyder writes. “The bad news is that the history of modern democracy is also one of decline and fall.”

The actionable steps for defending our democratic institutions make up the bulk of On Tyranny, but Snyder’s big critique is of Americans’ relationship to history. It’s not just that we don’t know any; it’s that we’re dangerously anti-historical.

In the epilogue, Snyder observes that until recently the politics of inevitability – “the sense that history could move in only one direction: toward liberal democracy” – dominated American political thinking. After communism in eastern Europe ended, and the salience of its, and fascism’s and Nazism’s, destruction lessened, the myth of the “end of history” took hold. This misguided belief in the march toward ever-greater progress made us vulnerable, Snyder writes, because it “opened the way for precisely the kinds of regimes we told ourselves could never return.”

Snyder isn’t being particularly shrewd here. Commentators have explicitly endorsed versions of the politics of inevitability. After the Cold War, the political theorist Francis Fukuyama wrote the aptly titled article “The End of History?” (which was later expanded into a book, The End of History and the Last Man) arguing that history may have indeed reached its final chapter. In the “End of History?” he writes the following:

What we may be witnessing is not just the end of the Cold War, or the passing of a particular period of post-war history, but the end of history as such: that is, the end point of mankind’s ideological evolution and the universalization of Western liberal democracy as the final form of human government.

This is a teleological conception of the world – it views history as unfolding according to some preordained end or purpose. Marxism had its own teleology, the inevitable rise of a socialist utopia, based on Karl Marx’s materialist rewriting of Hegelian dialectics. After Marxism took a geopolitical blow in the early 1990s, Fukuyama reclaimed Hegelianism from Marx and argued that mankind’s ideological evolution had reached its true end: Liberal democracy had triumphed, and no alternatives could possibly replace it.

But Snyder isn’t referring just to the prophecies of erudite theorists like Fukuyama. He’s pointing the finger at everyone. When the Soviet Union collapsed, he writes, Americans and other liberals “drew the wrong conclusion: Rather than rejecting teleologies, we imagined that our own story was true.” We fell into a “self-induced intellectual coma” that constrained our imaginations, that barred from consideration a future with anything but liberal democracy.

A more recent way of considering the past is the politics of eternity. It’s historically oriented, but it has a suspect relationship with historical facts, Snyder writes. It yearns for a nonexistent past; it exalts periods that were dreadful in reality. And it views everything through a lens of victimization. “Eternity politicians bring us the past as a vast misty courtyard of illegible monuments to national victimhood, all of them equally distant from the present, all of them equally accessible for manipulation. Every reference to the past seems to involve an attack by some external enemy upon the purity of the nation.”

National populists endorse the politics of eternity. They revere the era in which democracies seemed most threatened and their rivals, the Nazis and the Soviets, seemed unstoppable. Brexit advocates, the National Front in France, the leaders of Russia, Poland, and Hungary, and the current American president, Snyder points out, all want to go back to some past epoch they imagine having been great.

“In the politics of eternity,” Snyder writes, “the seduction by a mythicized past prevents us from thinking about possible futures.” And the emphasis on national victimhood dampens any urge to self-correct:

Since the nation is defined by its inherent virtue rather than by its future potential, politics becomes a discussion of good and evil rather than a discussion of possible solutions to real problems. Since the crisis is permanent, the sense of emergency is always present; planning for the future seems impossible or even disloyal. How can we even think of reform when the enemy is always at the gate?

If the politics of inevitability is like an intellectual coma, the politics of eternity is like hypnosis: We stare at the spinning vortex of cyclical myth until we fall into a trance – and then we do something shocking at someone else’s orders.

The risk of shifting from the politics of inevitability to the politics of eternity is real, Snyder writes. We’re in danger of passing “from a naïve and flawed sort of democratic republic to a confused and cynical sort of fascist oligarchy.” When the myth of inevitable progress is shattered, people will look for another way of making sense of the world, and the smoothest path is from inevitability to eternity. “If you once believed that everything turns out well in the end, you can be persuaded that nothing turns out well in the end.”

The only thing that stands in the way of these anti-historical orientations, Snyder says, is history itself. “To understand one moment is to see the possibility of being the cocreator of another. History permits us to be responsible: not for everything, but for something.”

 

You’re Probably Not as Ethical as You Think You Are

Bounded Ethicality and Ethical Fading

What if someone made you an offer that would benefit you personally but would require you to violate your ethical standards? What if you thought you could get away with a fraudulent act that would help you in your career?

Most of us think we would do the right thing. We tend to think of ourselves as honest and ethical people. And we tend to think that, when confronted with a morally dubious situation, we would stand up for our convictions and do the right thing.

But research in the field of behavioral ethics says otherwise. Contrary to our delusions of impenetrable virtue, we are no saints.

We’re all capable of acting unethically, and we often do so without even realizing it.

In their book Blind Spots: Why We Fail to Do What’s Right and What to Do About It, Max Bazerman and Ann Tenbrunsel highlight the unintentional, but predictable, cognitive processes that result in people acting unethically. They make no claims about what is or is not ethical. Rather, they explore the ethical “blind spots,” rooted in human psychology, that prevent people from acting according to their own ethical standards. The authors are business ethicists and they emphasize the organizational setting, but their insights certainly apply to ethical decision making more generally.

The two most important concepts they introduce in Blind Spots are “bounded ethicality” and “ethical fading.”

Bounded Ethicality is derived from the political scientist Herbert Simon’s theory of bounded rationality – the idea that when people make decisions, they aren’t perfectly rational benefit maximizers, as classical economics suggests. Instead of choosing a course of action that maximizes their benefit, people accept a less than optimal but still good enough solution. They “satisfice” (a combination of “satisfy” and “suffice”), to use Simon’s term.

They do this because they don’t have access to all the relevant information, and even if they did, their minds wouldn’t have the capacity to adequately process it all. Thus, human rationality is bounded by informational and cognitive constraints.
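To make Simon’s contrast concrete, here is a minimal, hypothetical Python sketch of maximizing versus satisficing. It is my own illustration, not an example from Simon or from Blind Spots, and the option names and payoff numbers are invented: a maximizer evaluates every option before choosing, while a satisficer stops at the first option that clears an aspiration threshold.

# Hypothetical illustration of maximizing vs. satisficing; values are invented.

def maximize(options):
    """Classical rationality: examine every option and pick the best payoff."""
    return max(options, key=lambda option: option["payoff"])

def satisfice(options, aspiration_level):
    """Bounded rationality: accept the first option that is 'good enough'."""
    for option in options:
        if option["payoff"] >= aspiration_level:
            return option
    return None  # no option met the aspiration level

options = [
    {"name": "offer A", "payoff": 6},
    {"name": "offer B", "payoff": 9},
    {"name": "offer C", "payoff": 7},
]

print(maximize(options)["name"])      # offer B: best payoff, but requires evaluating everything
print(satisfice(options, 5)["name"])  # offer A: acceptable, and the search stops early

The satisficer settles on “offer A” even though a better option exists, which is exactly the sense in which the choice is good enough rather than optimal.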

Similarly, bounded ethicality refers to the cognitive constraints that limit people’s ability to think and act ethically in certain situations. These constraints blind individuals to the moral implications of their decisions, and they allow them to act in ways that violate the ethical standards that they endorse upon deeper reflection.

So, just as people aren’t rational benefit maximizers, they’re not saintly moral maximizers either.

Check out this video about bounded ethicality from the Ethics Unwrapped program at the University of Texas at Austin:

Ethical Fading is a process that contributes to bounded ethicality. It happens when the ethical implications of a decision are unintentionally disregarded during the decision-making process. When ethical considerations are absent from the decision criteria, it’s easier for people to violate their ethical convictions because they don’t even realize they’re doing so.

For example, a CEO might frame something as just a “business decision” and decide based on what will lead to the highest profit margin. Obviously, the most profitable decision might not be the most ethically defensible one. It may endanger employees, harm the environment, or even be illegal. But these considerations probably won’t come to mind if he’s only looking at the bottom line. And if they’re absent from the decision process, he could make an ethically suspect decision without even realizing it.

Check out this video about ethical fading from the Ethics Unwrapped program at the University of Texas at Austin.

You’re Not a Saint, So What Should You Do?

Nudge yourself toward morality.

Bazerman and Tenbrunsel recommend preparing for decisions in advance. Consider the motivations that are likely to influence you at the time of the decision and develop proactive strategies to reduce their influence. Pre-commitment strategies are highly effective. If someone publicly pre-commits to an ethical action, he’s more likely to follow through than if he doesn’t. Likewise, pre-committing to an intended ethical decision and sharing it with an unbiased and ethical person makes someone more likely to make the ethical decision in the future.

During actual decision making, it is crucial to elevate your abstract ethical values to the forefront of the decision-making process. Bazerman and Tenbrunsel point out that “rather than thinking about the immediate payoff of an unethical choice, thinking about the values and principles that you believe should guide the decision may give the ‘should’ self a fighting chance.” One strategy for inducing this type of reflection, they say, is to think about your eulogy and what you’d want to be written about the values and principles you lived by.

There’s also the “mom litmus test.” When tempted by a potentially unethical choice, ask yourself whether you’d be comfortable telling your mom (or dad or anyone else you truly respect) about the decision. Imagining your mom’s reaction is likely to bring abstract principles to mind, they contend.

Yet another strategy for evoking ethical values is to change the structure of the decision. According to Bazerman and Tenbrunsel, people are more likely to make the ethical choice if they have the chance to evaluate more than one option at a time. In one study, “individuals who evaluated two options at a time – an improvement in air quality (the ‘should’ choice) and a commodity such as a printer (the ‘want’ choice) – were more likely to choose the option that maximized the public good.” When participants evaluated these options independently, however, they were more likely to choose the printer.

In another study, people decided between two political candidates, one of higher integrity and one who promised more jobs. The people who evaluated the candidates side by side were more likely to pick the higher integrity candidate. Those who evaluated them independently were more likely to pick the one who would provide the jobs.

Bazerman and Tenbrunsel say this evidence suggests that reformulating an ethical quandary into a choice between two options, the ethical one and the unethical one, is helpful because it highlights “the fact that by choosing the unethical action, you are not choosing the ethical action.”

What Implications Do Bounded Ethicality and Ethical Fading Have for Moral Responsibility?

Bazerman and Tenbrunsel don’t address this question directly. But the notion that our default mode of ethical decision making in some circumstances is bounded by psychological and situational constraints – influences we’re not consciously aware of that affect our ethical decision-making abilities – seems to be in tension with the idea that we are fully morally responsible for all our actions.

The profit-maximizing CEO, for example, might be seen by his friends and peers as virtuous, caring, and thoughtful. He might care about his community and the environment, and he might genuinely believe that it’s unethical to endanger them. Still, he might unintentionally disregard the moral implications of illegally dumping toxic waste in the town river, harming the environment and putting citizens’ health at risk.

This would be unethical, for sure, but how blameworthy is he if he had yet to read Blind Spots and instead relied on his default psychology to make the decision? If ethical blind spots are constitutive elements of the human psyche, are the unethical actions caused by those blind spots as blameworthy as those that aren’t?

Either way, we can’t be certain that we’d have acted any differently in the same circumstances.

We’ll all fail the saint test at some point, but that doesn’t make us devils.

Learn More About Behavioral Ethics

Blind Spots: Why We Fail to Do What’s Right and What to Do About It

Ethicalsystems.org (Decision Making)

Ethics Unwrapped (Behavioral Ethics)

Reflecting Politics: Image Making and Falsities

Hannah Arendt was a mid-century German thinker who witnessed humanity at its worst. As a consequence, her writings carry a profundity that I have rarely found in the many authors I have read. I could lay out several prophetic examples encountered in her texts. Given the political climate, I will pull from her seminal essay, “Lying in Politics,” which is found in the collection Crises of the Republic.

Arendt laments the opening for “image-makers” to inject themselves into politics. Lobbyists and advertising men share a lack of interest in the substance of actual politics and focus instead on the “image” of politics. The result is a politician whose image is refined to reflect a pious family man even as he votes against his constituents’ interests on the regular. The subterfuge of the Mad Men image consultants has driven us either to accept this political farce at face value or to harbor a deep doubt about the merits of ANY politician.

She anticipated one of the modern political crises: the destruction of a shared and knowable world. This quote speaks directly to the point:

“The point is reached when the audience to which the lies are addressed is forced to disregard altogether the distinguishing line between truth and falsehood in order to be able to survive.” (Crises of the Republic 7)

When we meld image making with disbelief, there is only so much mental capacity left to push back. Our perceptions of reality, our “truths,” can’t be easily parsed. We either accept an image maker’s tale or we distrust the entire world.

Yet modern political discourse has generated another framework for survival. The tribalism of right-wing conservatism has lived within this dichotomous reality. The espousal of lies from these sources protects their audiences from acknowledging the shifts in modern living. Shifting demographics and waning labor prospects have been successfully hidden by political conservatives. Also, the viewers/listeners/constituents are no longer the majority, and they most certainly are being fleeced by the media and politicians – the industries generated by their disregard for truth. We see now that there are no coal jobs to bring back; robots aren’t going to resign and give you a factory job again. The pruned politician weaves this lie into every stump speech, empowers the people who will ensure his (re)election, and hops away in an overly polished SUV. Not a fleck of dirt.