Five Scientific Theories That Tell Us Why Things Are Funny

For some scholars, the study of humor is no laughing matter

If you’re an ordinary adult, you laugh around 20 times a day. And you probably haven’t given much thought to why the things you laugh at are funny. In fact, you might even think that analyzing humor is the best way to destroy it.

That’s what E.B. White thought. He said, “Humor can be dissected, as a frog can, but the thing dies in the process and the innards are discouraging to any but the pure scientific mind.”

He was correct in at least one of those claims. Some scientists are interested in what makes things funny, and they’ve developed some pretty sophisticated explanations. Here are five of the major scientific theories of humor. Read with caution, as this article could kill your sense of humor.

The Relief Theory

The relief theory says that humor and laughter work as a pressure valve for releasing excess or unnecessary energy. Sigmund Freud was a proponent of the relief theory. He believed laughter releases either psychic energy that is normally used, in typical Freudian fashion, to repress feelings, or psychic or emotional energy that was summoned in response to a stimulus but then turned out to be unnecessary.

The Arousal Theory

The arousal theory rejects the relief theory’s idea that humor involves the release of excess or unnecessary energy. Instead, it builds on the idea that the right level of physiological arousal causes subjective pleasure. Low levels of arousal are not enough to induce pleasure, and excessively high levels are unpleasant. But there is a sweet spot that people enjoy. People laugh, according to the theory, when they are aroused to the point of discomfort (a joke setup) and then something (a punchline) causes their arousal level to drop suddenly into the sweet spot.

The Superiority Theory

The superiority theory says that aggression is at the core of all humor. Early theorists claimed humor was intertwined with actual aggression, but Charles Gruner, a contemporary advocate of the perspective, says humor is not real aggression. Rather, it’s a playful form of it rooted in an evolutionary context of competition. People find humor in others’ plights, in asserting their superiority over others, or in simply outwitting someone else, he says.

The Incongruity Theory

The incongruity theory is probably the most popular theory of humor today. It says the perception of some sort of incongruity is necessary for finding something humorous. People laugh, for instance, when they experience something that’s surprising, atypical, or a violation of or departure from the way they think things should be. Consider this joke about two fish in a tank. One says to the other, “You man the guns. I’ll drive.” We expect the fish to be in a fish tank, so their being in a combat vehicle is slightly humorous.

One shortcoming of the incongruity theory is that incongruity alone isn’t enough to explain humor. A fish driving a tank may be funny because it’s incongruous, but some incongruous things aren’t funny, such as tragic accidents.

The Benign Violation Theory

The benign violation theory is the newest theory out there. It incorporates elements from some of the other theories, particularly incongruity and superiority, into a single unifying account. It says people laugh when three things happen. First, there must be a violation of some norm or sense of how the world ought to be. Second, the person must judge the violation as playful, non-serious, or non-threatening. Third, the judgment that something is a violation and the judgment that it’s benign must occur simultaneously.

To get a better grasp of benign violation theory, think about malapropisms. They violate our linguistic norms, but they are not threatening. And they are almost always funny. Now think about sexist jokes. They violate our norms of gender equality, and they are probably funniest to sexists because sexists are most likely to see the violation as benign.

Now that you know some of the most famous theories of humor, keep them to yourself. Don’t be the buzzkill explaining the joke.

You’re Probably Not as Ethical as You Think You Are

Bounded Ethicality and Ethical Fading

What if someone made you an offer that would benefit you personally but would require you to violate your ethical standards? What if you thought you could get away with a fraudulent act that would help you in your career?

Most of us think we would do the right thing. We tend to think of ourselves as honest and ethical people. And we tend to think that, when confronted with a morally dubious situation, we would stand up for our convictions and do the right thing.

But research in the field of behavioral ethics says otherwise. Contrary to our delusions of impenetrable virtue, we are no saints.

We’re all capable of acting unethically, and we often do so without even realizing it.

In their book Blind Spots: Why We Fail to Do What’s Right and What to Do About It, Max Bazerman and Ann Tenbrunsel highlight the unintentional, but predictable, cognitive processes that lead people to act unethically. They make no claims about what is or is not ethical. Rather, they explore the ethical “blind spots,” rooted in human psychology, that prevent people from acting according to their own ethical standards. The authors are business ethicists, and they emphasize the organizational setting, but their insights certainly apply to ethical decision making more generally.

The two most important concepts they introduce in Blind Spots are “bounded ethicality” and “ethical fading.”

Bounded Ethicality is derived from the political scientist Herbert Simon’s theory of bounded rationality – the idea that when people make decisions, they aren’t perfectly rational benefit maximizers, as classical economics suggests. Instead of choosing a course of action that maximizes their benefit, people accept a less than optimal but still good enough solution. They “satisfice” (a combination of “satisfy” and “suffice”), to use Simon’s term.

They do this because they don’t have access to all the relevant information, and even if they did, their minds wouldn’t have the capacity to adequately process it all. Thus, human rationality is bounded by informational and cognitive constraints.

Similarly, bounded ethicality refers to the cognitive constraints that limit people’s ability to think and act ethically in certain situations. These constraints blind individuals to the moral implications of their decisions, and they allow them to act in ways that violate the ethical standards that they endorse upon deeper reflection.

So, just as people aren’t rational benefit maximizers, they’re not saintly moral maximizers either.

Check out this video about bounded ethicality from the Ethics Unwrapped program at the University of Texas at Austin:

Ethical Fading is a process that contributes to bounded ethicality. It happens when the ethical implications of a decision are unintentionally disregarded during the decision-making process. When ethical considerations are absent from the decision criteria, it’s easier for people to violate their ethical convictions because they don’t even realize they’re doing so.

For example, a CEO might frame something as just a “business decision” and decide based on what will lead to the highest profit margin. Obviously, the most profitable decision might not be the most ethically defensible one. It may endanger employees, harm the environment, or even be illegal. But these considerations probably won’t come to mind if he’s only looking at the bottom line. And if they’re absent from the decision process, he could make an ethically suspect decision without even realizing it.

Check out this video about ethical fading from the Ethics Unwrapped program at the University of Texas at Austin.

You’re Not a Saint, So What Should You Do?

Nudge yourself toward morality.

Bazerman and Tenbrunsel recommend preparing for decisions in advance. Consider the motivations that are likely to influence you at the time of the decision and develop proactive strategies to reduce their influence. Pre-commitment strategies are highly effective. If someone publicly pre-commits to an ethical action, he’s more likely to follow through than if he doesn’t. Likewise, pre-committing to an intended ethical decision and sharing it with an unbiased and ethical person makes someone more likely to make the ethical choice in the future.

During actual decision making, it is crucial to bring your abstract ethical values to the forefront of the decision-making process. Bazerman and Tenbrunsel point out that “rather than thinking about the immediate payoff of an unethical choice, thinking about the values and principles that you believe should guide the decision may give the ‘should’ self a fighting chance.” One strategy for inducing this type of reflection, they say, is to think about your eulogy and what you’d want it to say about the values and principles you lived by.

There’s also the “mom litmus test.” When tempted by a potentially unethical choice, ask yourself whether you’d be comfortable telling your mom (or dad or anyone else you truly respect) about the decision. Imagining your mom’s reaction is likely to bring abstract principles to mind, they contend.

Yet another strategy for evoking ethical values is to change the structure of the decision. According to Bazerman and Tenbrunsel, people are more likely to make the ethical choice if they have the chance to evaluate more than one option at a time. In one study, “individuals who evaluated two options at a time – an improvement in air quality (the ‘should’ choice) and a commodity such as a printer (the ‘want’ choice) – were more likely to choose the option that maximized the public good.” When participants evaluated these options independently, however, they were more likely to choose the printer.

In another study, people decided between two political candidates, one of higher integrity and one who promised more jobs. The people who evaluated the candidates side by side were more likely to pick the higher integrity candidate. Those who evaluated them independently were more likely to pick the one who would provide the jobs.

Bazerman and Tenbrunsel say this evidence suggests that reformulating an ethical quandary as a choice between two options, the ethical one and the unethical one, is helpful because it highlights “the fact that by choosing the unethical action, you are not choosing the ethical action.”

What Implications Do Bounded Ethicality and Ethical Fading Have for Moral Responsibility?

Bazerman and Tenbrunsel don’t address this question directly. But the notion that, in some circumstances, our default mode of ethical decision making is bounded by psychological and situational constraints – influences we’re not consciously aware of that affect our ethical decision-making abilities – seems to be in tension with the idea that we are fully morally responsible for all our actions.

The profit-maximizing CEO, for example, might be seen by his friends and peers as virtuous, caring, and thoughtful. He might care about his community and the environment, and he might genuinely believe that it’s unethical to endanger them. Still, he might unintentionally disregard the moral implications of illegally dumping toxic waste in the town river, harming the environment and putting citizens’ health at risk.

This would be unethical, for sure, but how blameworthy is he if he had yet to read Blind Spots and instead relied on his default psychology to make the decision? If ethical blind spots are constitutive elements of the human psyche, are the unethical actions caused by those blind spots as blameworthy as those that aren’t?

Either way, we can’t be certain that we’d have acted any differently in the same circumstances.

We’ll all fail the saint test at some point, but that doesn’t make us devils.

Learn More About Behavioral Ethics

Blind Spots: Why We Fail to Do What’s Right and What to Do About It

Ethicalsystems.org (Decision Making)

Ethics Unwrapped (Behavioral Ethics)