Introduction
In today's polarized world, heated debates rage on about politics, religion, and social issues. People on opposing sides often seem to be speaking different languages, each convinced of their own moral superiority. But what if there were a way to bridge these divides and find common ground?
In "Moral Tribes," Harvard psychologist Joshua Greene explores the evolutionary roots of human morality and offers a framework for resolving conflicts between groups with different moral values. Drawing on philosophy, psychology, and neuroscience, Greene provides fascinating insights into how our minds approach moral decisions and why we struggle to cooperate across tribal lines.
This book summary will explore Greene's key ideas about the origins of morality, the challenges of inter-group cooperation, and potential solutions for overcoming our tribal instincts to make better collective decisions. By understanding the science behind our moral intuitions, we can learn to think more critically about ethics and work towards greater harmony in an increasingly interconnected world.
The Evolution of Morality
Cooperation Within Groups
Greene begins by explaining how human morality evolved as an adaptation to promote cooperation within small groups. Our ancestors lived in tight-knit tribes where working together was essential for survival. Over time, we developed moral emotions and intuitions that encouraged prosocial behavior within our in-group.
These innate moral instincts include:
- Empathy and compassion for others in our group
- A sense of fairness and reciprocity
- Loyalty to the tribe
- Respect for authority and hierarchy
- Notions of purity and sanctity
These moral foundations helped tribes function cohesively and outcompete other groups. As a result, we evolved to have strong intuitions about right and wrong within our immediate social circles.
The Challenges of Modern Morality
While our moral instincts served us well in small-scale societies, they are poorly suited to handle the complex ethical dilemmas of the modern world. Some key challenges include:
- Extending moral consideration to out-groups and strangers
- Balancing individual rights with collective welfare
- Dealing with abstract, large-scale problems like climate change
- Resolving conflicts between groups with different moral values
Greene argues that to address these challenges, we need to move beyond our automatic moral intuitions and engage in more deliberate moral reasoning.
Two Modes of Moral Thinking
Greene proposes that we have two distinct modes of moral cognition:
- Automatic mode: Fast, intuitive, emotional responses
- Manual mode: Slow, deliberate, reasoned analysis
Automatic Mode
Our automatic moral responses are shaped by evolution, culture, and personal experience. They allow us to make quick judgments in familiar situations without taxing our mental resources.
For example, if we see someone in immediate danger, we instinctively feel compelled to help without carefully weighing the pros and cons. This automatic empathy and altruism towards others in our group was adaptive in our ancestral environment.
However, automatic mode can also lead us astray when dealing with novel moral dilemmas. Our gut reactions may be biased or inconsistent when applied to unfamiliar scenarios.
Manual Mode
Manual mode involves consciously reasoning through moral issues, considering different perspectives, and trying to arrive at principled conclusions. It requires more mental effort but allows for more nuanced and impartial judgments.
Greene argues that we should engage manual mode more often, especially when dealing with complex social and political issues. By overriding our intuitive responses, we can expand our circle of moral consideration and find common ground with those who have different values.
The Tragedy of Commonsense Morality
One of Greene's central arguments is that many conflicts arise from what he calls the "tragedy of commonsense morality." This occurs when different groups each follow their own intuitive moral values, leading to clashes and suboptimal outcomes for everyone.
He illustrates this with the example of a controversy that erupted when a Danish newspaper published cartoons depicting the Prophet Muhammad in 2005. From the perspective of the Danish journalists, they were upholding the important value of free speech. But for many Muslims, the cartoons were a grave insult to their religious values.
Both sides were acting based on their intuitive sense of right and wrong. But the result was violence and increased hostility between Western and Muslim societies. Greene argues that in such cases, we need to step back from our automatic moral responses and try to find compromise solutions that respect different values.
The Prisoner's Dilemma and Cooperation
To understand the challenges of cooperation between groups, Greene explores game theory concepts like the famous Prisoner's Dilemma. This thought experiment illustrates why rational self-interest can lead to worse outcomes for everyone.
In the classic setup, two criminals are arrested and separated. Each is given the choice to either stay silent or betray their partner. The payoffs are:
- If both stay silent, they each get 1 year in prison
- If both betray, they each get 2 years
- If one betrays and one stays silent, the betrayer goes free and the silent one gets 3 years
Rationally, each prisoner's best move is to betray, regardless of what the other does. But if both follow this logic, they end up with 2 years each - a worse outcome than if they had cooperated by staying silent.
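The payoff logic above can be sketched in a few lines of Python. This is purely illustrative (the function and variable names are my own, not from the book): it encodes the four outcomes listed above and checks that betrayal is the best response to either move the partner might make.

```python
# Classic Prisoner's Dilemma payoffs (years in prison; lower is better),
# using the numbers from the summary above.
# Each entry: (my_years, partner_years) keyed by (my_move, partner_move).
payoffs = {
    ("silent", "silent"): (1, 1),
    ("silent", "betray"): (3, 0),
    ("betray", "silent"): (0, 3),
    ("betray", "betray"): (2, 2),
}

def best_response(partner_move):
    """Return the move that minimizes my prison time, given the partner's move."""
    return min(["silent", "betray"], key=lambda me: payoffs[(me, partner_move)][0])

# Betraying is the dominant strategy: it is the best response
# no matter what the partner does...
for partner_move in ["silent", "betray"]:
    print(partner_move, "->", best_response(partner_move))

# ...yet mutual betrayal (2 years each) is worse for both
# than mutual silence (1 year each).
print(payoffs[("betray", "betray")], "vs", payoffs[("silent", "silent")])
```

Running the check confirms the paradox: each prisoner's individually rational move is "betray", yet the resulting outcome (2, 2) is strictly worse for both than the cooperative outcome (1, 1).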
This dilemma illustrates why cooperation can be difficult to achieve, even when it would benefit everyone. Greene argues that our moral instincts evolved partly to solve prisoner's dilemma-type situations within tribes. Emotions like loyalty, trust, and reciprocity help us overcome narrow self-interest.
However, these moral instincts don't automatically extend to other groups. This is why we often see conflict and lack of cooperation between different tribes, nations, or ideological camps - even when working together would lead to better results for all.
Utilitarianism as a Common Currency
To resolve conflicts between groups with different values, Greene proposes adopting utilitarianism as a kind of moral "common currency." Utilitarianism is the ethical framework that judges actions based on their consequences, aiming to maximize overall well-being or happiness.
The key advantages of utilitarianism for resolving moral disputes are:
- It provides an impartial standard that doesn't favor any particular group's values
- It focuses on outcomes rather than abstract principles
- It aims to find compromises that benefit the greater good
Greene acknowledges that utilitarianism has some counterintuitive implications and potential pitfalls. For instance, a strict utilitarian calculus might justify sacrificing an innocent person to save a greater number. Most people's moral intuitions recoil at this idea.
However, he argues that utilitarianism is still the best available framework for navigating complex moral tradeoffs, especially at a societal level. By adopting a utilitarian perspective, we can more easily find common ground and mutually beneficial solutions.
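The utilitarian "common currency" idea can be made concrete with a toy calculation. The policies and welfare numbers below are invented purely for illustration; the point is the method: score each option by summing its effects on everyone affected, impartially, and prefer the highest total.

```python
# Toy utilitarian comparison: score each option by summing its welfare
# effects across all affected groups, then pick the highest total.
# The options and numbers are invented for illustration only.
def total_welfare(effects):
    return sum(effects.values())

options = {
    "policy_a": {"group_1": +5, "group_2": -2},  # big win for one group, cost to the other
    "policy_b": {"group_1": +2, "group_2": +2},  # smaller but shared benefit
}

best = max(options, key=lambda name: total_welfare(options[name]))
print(best)  # the compromise wins: +4 total vs policy_a's +3
```

Note how the impartial sum favors the compromise policy even though neither group gets its first choice, which is exactly the kind of mutually acceptable solution Greene has in mind.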
The Footbridge Dilemma
To illustrate how our moral intuitions can conflict with utilitarian reasoning, Greene discusses the famous "footbridge dilemma":
Imagine a runaway trolley is about to kill five people on the tracks ahead. You are standing on a footbridge above the tracks, next to a large stranger. The only way to save the five people is to push the stranger off the bridge into the path of the trolley. This will kill him, but save the five. Should you do it?
Most people say no, they wouldn't push the man - even though from a utilitarian perspective, sacrificing one life to save five produces the best outcome. Greene argues this reveals the tension between our automatic moral intuitions (which recoil at the idea of personally harming someone) and more deliberate moral reasoning.
He suggests that our intuitive aversion to pushing the man comes from evolutionary adaptations that made us reluctant to directly harm others in our group. This was generally adaptive in our ancestral environment. But when facing abstract moral tradeoffs in the modern world, these intuitions can sometimes lead us astray.
Greene doesn't argue that we should always override our moral intuitions in favor of cold utilitarian calculus. But he believes we should be aware of the limitations of our gut reactions and be willing to engage in more careful moral reasoning, especially for large-scale social issues.
Dual-Process Theory of Moral Judgment
Based on neuroscience research, Greene proposes a dual-process theory of how we make moral judgments:
Automatic emotional responses: Quick, intuitive reactions driven by areas of the brain associated with emotion and social cognition.
Controlled cognitive processes: Slower, more deliberate reasoning involving areas associated with abstract thinking and problem-solving.
These two processes often work in tandem, but can also come into conflict. Greene and colleagues conducted brain imaging studies showing that people contemplating personal moral dilemmas (like the footbridge problem) showed more activity in emotional centers of the brain. Impersonal moral problems activated more cognitive areas.
This helps explain why we have such visceral reactions to certain moral scenarios, even if we can't articulate logical reasons for our judgments. It also suggests that by engaging our cognitive processes more actively, we may be able to overcome some of our innate biases.
Factors Influencing Moral Decision-Making
Greene explores various factors that shape our moral intuitions and decision-making:
Physical Distance and Personal Connection
We tend to feel a stronger moral imperative to help those who are physically closer or to whom we have a personal connection. For instance, most people say they would ruin an expensive suit to save a drowning child in front of them. But they may not donate an equivalent amount of money to save multiple children in a far-off country.
This bias likely stems from our evolution in small tribes where we mainly interacted with those nearby. But in today's interconnected world, it can lead to skewed priorities and inconsistent moral reasoning.
Cognitive Load
When our mental resources are taxed, we tend to rely more on automatic, emotional moral judgments rather than careful reasoning. In one study, participants under higher cognitive load (having to remember a long number) were more likely to choose an unhealthy snack over a healthy one.
This suggests that stress, time pressure, and mental fatigue can impair our ability to make sound moral decisions. It's important to create mental space for deliberation on important ethical issues.
Framing Effects
How a moral dilemma is framed can significantly influence our judgments. For instance, people are more likely to approve of an action that saves 80% of a group than one that lets 20% die - even though these are logically equivalent.
Being aware of such framing effects can help us approach moral issues more objectively and consistently.
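The logical equivalence behind the framing effect is simple arithmetic, which a two-line check makes explicit (the group size of 100 is an illustrative assumption):

```python
# Two framings of the same outcome, stated in absolute numbers.
group_size = 100
survivors_if_saved = int(0.80 * group_size)               # "saves 80% of the group"
survivors_if_die = group_size - int(0.20 * group_size)    # "lets 20% die"

print(survivors_if_saved == survivors_if_die)  # True: both leave 80 survivors
```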
Rights, Duties, and Pragmatism
Greene observes that many moral and political debates revolve around competing notions of rights and duties. For instance, in the abortion debate:
- Pro-choice advocates focus on women's rights to bodily autonomy
- Pro-life supporters emphasize the duty to protect unborn life
He argues that framing issues in terms of absolute rights and duties often leads to intractable conflicts. Instead, he proposes taking a more pragmatic, consequentialist approach.
Rather than debating abstract principles, we should ask questions like:
- What would be the real-world effects of banning or allowing abortion?
- How would it impact overall societal well-being?
- Are there compromises that could address concerns on both sides?
By focusing on practical outcomes rather than ideological purity, we're more likely to find workable solutions that most people can accept.
Expanding the Circle of Moral Consideration
One of the great moral achievements of human civilization has been gradually expanding our circle of ethical consideration. Over time, we've extended rights and moral status to wider groups:
- From family to tribe
- From tribe to nation
- From one's own race/ethnicity to all of humanity
- From humans to some animals
Greene argues we should continue this trajectory of moral expansion. By overcoming our tribal instincts and extending empathy to all conscious creatures, we can create a more ethical world.
Some ways to cultivate a more expansive moral perspective:
- Actively try to take the perspective of those unlike yourself
- Learn about the lives and experiences of people from different backgrounds
- Practice extending compassion to strangers and even adversaries
- Consider the welfare of future generations in your decisions
- Factor in the interests of animals and the environment
Overcoming Bias and Tribalism
Greene offers several strategies for counteracting our innate biases and tribal instincts:
Acknowledge Your Ignorance
On complex issues, most people hold strong opinions despite not really understanding the nuances. Force yourself to justify your views in detail. If you struggle, admit your ignorance and be more open to other perspectives.
Seek Out Diverse Viewpoints
Actively expose yourself to ideas and arguments from those you disagree with. Try to understand their reasoning and motivations rather than dismissing them.
Use the "Cognitive Speed Bump"
When faced with a charged issue, pause before reacting. Take a few deep breaths and try to engage your deliberative mental processes rather than going with your gut reaction.
Find Common Ground
Look for shared values and goals, even with those who seem to hold opposing views. This can be a starting point for productive dialogue.
Focus on Outcomes, Not Ideology
Instead of debating abstract principles, discuss the practical effects of different policies or actions. This makes it easier to find pragmatic compromises.
Applying Moral Philosophy to Real-World Issues
In the final sections, Greene explores how we can apply moral reasoning to thorny real-world problems. He emphasizes that there are rarely easy answers, but that adopting a more impartial, outcome-focused approach can help.
Some examples of applying utilitarian thinking to contemporary debates:
Climate Change
Instead of arguing about whether humans have a "right" to exploit nature or a "duty" to protect it, we should focus on the consequences of climate change for humanity and other species. This points towards taking strong action to reduce emissions and mitigate impacts.
Economic Inequality
Rather than debating abstract notions of what people "deserve," we should examine the effects of inequality on overall societal well-being. Some inequality may incentivize innovation, but extreme disparities likely reduce aggregate welfare.
Immigration
Moving beyond rhetoric about national sovereignty or immigrant rights, we should look at the net effects of immigration policies on both native and immigrant populations. This may point towards more open borders, with some regulations to manage integration.
Animal Welfare
Extending moral consideration to animals suggests we should work to reduce factory farming and other sources of animal suffering. This doesn't require believing animals have the same moral status as humans, just that their welfare matters to some degree.
Conclusion: A Metamorality for the Modern World
Greene concludes by advocating for what he calls a "metamorality" - a higher-level moral framework for adjudicating between competing value systems. He proposes a form of utilitarianism as the best candidate for this role.
Key takeaways:
Our innate moral instincts evolved for small-scale tribal life and are poorly suited to modern global challenges.
We need to engage our capacity for moral reasoning more actively, especially on complex social and political issues.
Adopting an impartial, consequence-focused ethical framework can help resolve conflicts between groups with different values.
We should work to expand our circle of moral consideration beyond our immediate tribe to all of humanity and even other species.
By understanding the psychology of morality, we can learn to overcome our biases and make more ethical decisions.
Greene acknowledges that creating a truly universal moral system is an immense challenge. But he argues that by combining the best of philosophy, psychology, and neuroscience, we can work towards a more cooperative and ethically sophisticated global society.
The path forward requires us to think critically about our moral intuitions, engage in good-faith dialogue across tribal lines, and always strive to consider the bigger picture. While we may never achieve perfect moral harmony, Greene's ideas offer a promising approach to navigating the ethical complexities of our interconnected world.