Introduction
Imagine you're hiking through a beautiful forest on a sunny spring day. As you walk along the trail, you accidentally drop a glass bottle, and it shatters. You know that if you leave the broken glass there, a child will eventually cut herself badly on it. Does it matter when this injury might occur? Whether it's a week, a decade, or even a century from now, the outcome is the same: a hurt child. This simple thought experiment illustrates a fundamental idea: future people count. They are real people who will experience pain, joy, and dreams, just like us. The only difference is that they don't exist yet.
This concept forms the basis of longtermism, a philosophy that argues that future people deserve our consideration and effort. In "What We Owe the Future," William MacAskill explores this idea and its implications for how we should approach our responsibilities to future generations.
Why Future People Matter
Consider this scenario: What if you knew you would have to live through the full lives of every person in the future, from their birth to their death, regardless of how good or bad those lives might be? Wouldn't you want us, in the present, to take actions that would improve the quality of those future lives? Of course, you would.
MacAskill argues that we have both the obligation and the ability to improve the lives of future people. The sheer number of potential future individuals is staggering. If humanity survives until Earth becomes uninhabitable in hundreds of millions of years, there could be a million future people for every one person alive today.
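The "million future people for every one person alive today" claim can be checked with a back-of-envelope calculation. The lifespan and population figures below are illustrative assumptions for the sketch, not figures from the book:

```python
# Rough check of the claim that there could be a million future people
# for every one person alive today, under illustrative assumptions.
years_remaining = 500_000_000   # assumed: Earth habitable for hundreds of millions of years
lifespan = 80                   # assumed average human lifespan in years
population = 8_000_000_000      # assumed steady future population per generation
alive_today = 8_000_000_000     # roughly the current world population

# Number of (non-overlapping) future generations times people per generation
future_people = years_remaining / lifespan * population
ratio = future_people / alive_today
print(f"{ratio:,.0f} future people per person alive today")  # → 6,250,000
```

Even with conservative inputs, the ratio comes out in the millions, which is the point of the comparison: the future population dwarfs the present one.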
All of these lives could either flourish or suffer, and our actions today have a significant influence on that outcome. History has shown that we have the power to improve life expectancy, reduce poverty, increase literacy, and influence various other positive trends. However, we also have the capacity to create terrible outcomes, as evidenced by the totalitarian regimes of the 20th century.
One of our most crucial responsibilities is to avoid causing our own extinction, ensuring that there will be future people at all. This theme recurs throughout the book as MacAskill explores various existential risks and potential solutions.
The Non-Inevitability of Moral Progress
Many people assume that moral progress is inevitable, but MacAskill challenges this notion by examining the history of slavery. Today, we consider slavery abhorrent and unacceptable. However, it was practiced in most cultures, in most places, throughout most of history. It was economically profitable, had persisted for ages, and was defended by influential people. So why and how was it abolished?
MacAskill argues that abolition wasn't inevitable but rather the result of specific events and factors. For example, the activism of a small group of Quakers in the 18th and 19th centuries played a crucial role. They formed the first organization in history to conduct an abolition campaign, inspiring a generation of influential British abolitionists.
This example demonstrates that moral beliefs can change dramatically, but such changes are not guaranteed. From a longtermist perspective, the ability to influence societal values is incredibly important.
Values can be highly persistent, as evidenced by the enduring influence of ancient religious texts like the Bible and the Quran. Because of this persistence, we must be cautious about value lock-in: any event that causes a single value system to persist for an extremely long time. If value lock-in were to occur on a global scale, the future's quality would largely depend on which values became locked in.
Fortunately, our current moral landscape is still malleable, like molten glass. Different moral views can compete and influence the final shape. However, technological advances could potentially end this flexibility, as we'll explore next.
The Potential Impact of Artificial General Intelligence
One technology that poses a significant risk of value lock-in is artificial general intelligence (AGI): a system that can learn and perform a broad range of tasks at least as well as humans. AGI is concerning for two main reasons:
- It could potentially accelerate technological and economic growth dramatically.
- It is potentially immortal, as software can be easily replicated and is difficult to destroy completely.
With AGI, a person, group, or country could create intelligent, productive agents with goals closely aligned to their own. These AGIs could then act on and advance those goals indefinitely. They could even be programmed to achieve a very specific future or to emulate someone's brain structure perpetually.
This technology could potentially be used to eliminate competing moral views, a scenario that has historical precedents in religious crusades and ideological purges.
While the timeline for AGI development is uncertain, even if it's centuries away, we should still be concerned about value lock-in. Whatever happens in the intervening time could affect which values eventually become locked in. If one value system becomes globally dominant, there would be little pressure for it to change over time, potentially persisting for thousands of years or even indefinitely with AGI.
To mitigate this risk, MacAskill suggests aiming for a morally exploratory world – one where better norms and institutions are more likely to prevail over time. This approach would allow us to converge on the best possible society gradually.
He also advocates for political experimentalism, such as developing charter cities: autonomous communities operating under different laws from their surrounding countries. These could serve as testing grounds for various value systems, helping us empirically determine which sets of values lead to the best outcomes for society.
Existential Risks and Civilizational Collapse
MacAskill discusses several potential existential risks that could lead to human extinction or civilizational collapse. One example he uses to illustrate our ability to mitigate such risks is the Spaceguard initiative.
In 1994, Comet Shoemaker-Levy 9 crashed into Jupiter with devastating force. This event brought public attention to the potential threat of asteroid impacts, leading to increased advocacy among scientists. As a result, Congress launched the Spaceguard initiative in 1998, tasking NASA with finding 90% of all near-Earth asteroids and comets larger than one kilometer within a decade. The initiative was successful, reducing our risk of being hit by an asteroid by a factor of 100.
This example demonstrates that humanity can effectively address existential threats when we take them seriously. However, there are currently much greater risks than asteroids that require our attention.
One significant risk is the potential for an engineered pandemic – an outbreak of a disease designed using biotechnology. Engineered pathogens could combine dangerous characteristics, such as the lethality of Ebola with the contagiousness of measles. Additional risks come from the potential for easy replication by individuals and the often lax safety standards in biotech laboratories.
MacAskill cites estimates suggesting a 0.6% to 3% probability of an extinction-level engineered pandemic occurring this century. However, he emphasizes that we should be concerned not only about extinction but also about scenarios where civilization collapses but doesn't completely die out.
In a collapse scenario, our ability to recover would depend on various factors. Even if most people died, physical infrastructure, machines, libraries, and digital archives containing our knowledge would likely remain usable. However, the depletion of easily accessible fossil fuel reserves could seriously hamper our ability to reindustrialize.
Historically, fossil fuels have been critical for industrialization. As we deplete these resources, it becomes increasingly difficult to recover from a potential collapse. While renewable energy sources like solar and wind farms might provide some electricity, they degrade over decades and require advanced international supply chains to replace. Moreover, they can't provide the high-temperature heat needed for producing essential materials like cement, steel, brick, and glass.
Safeguarding the Future
Given the various existential risks and potential negative outcomes for civilization, MacAskill offers guidance on how individuals can help safeguard the future. He provides three general rules of thumb:
- Take actions that are robustly good or that you're confident are beneficial. For example, promoting innovation in clean technology helps keep fossil fuels in the ground, mitigates climate change, advances technological progress, and reduces deaths from air pollution.
- Increase the number of options available to you. Certain career paths, like pursuing a Ph.D. in economics or statistics, open up more opportunities than others.
- Keep learning more. Continuously build your knowledge about different causes and important issues, both individually and as a society.
When choosing a specific problem to focus on, MacAskill emphasizes the importance of prioritization. While many people choose causes close to their hearts, these may not have the highest global impact. He suggests focusing on high-impact areas such as value lock-in, AGI, biotechnology, climate change, and technological stagnation.
Once you've chosen a problem, MacAskill recommends several high-impact actions:
- Donating money to highly effective charities. For example, a one-time donation of $3,000 to the Clean Air Task Force could potentially reduce global carbon dioxide emissions by 3,000 tons per year – far more than the impact of going vegetarian for a lifetime.
- Engaging in political activism, including voting and campaigning for important causes.
- Spreading good ideas among family and friends through discussion, which can increase political participation and motivate people to work on important issues.
- Having children. While children do produce carbon emissions, they also contribute to society, innovate, and advocate for political change. Moreover, having children helps reduce the risk of technological stagnation, which is crucial for developing the tools we need to address existential threats.
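The scale behind the donation comparison above can be sketched with a rough calculation. The $3,000-for-3,000-tons figure comes from the summary itself; the vegetarianism numbers are illustrative assumptions, not figures from the book:

```python
# Rough comparison of the two climate actions mentioned above.
donation_tons_per_year = 3_000   # from the summary: $3,000 to the Clean Air Task Force

veg_tons_per_year = 0.8          # assumed annual CO2 saving from a vegetarian diet (tons)
veg_years = 60                   # assumed years of adult vegetarianism
veg_lifetime_tons = veg_tons_per_year * veg_years   # ~48 tons over a lifetime

# One year of the donation's effect vs an entire lifetime of vegetarianism
print(donation_tons_per_year / veg_lifetime_tons)   # → 62.5
```

Under these assumptions, a single year of the donation's effect exceeds a lifetime of dietary change by more than an order of magnitude, which is what makes strategic giving such a high-leverage action.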
MacAskill emphasizes that one person can make a difference. Every social and political movement throughout history resulted from combinations of individual efforts. By taking action, we can help steer the future toward a better trajectory for all future people yet to be born.
The Importance of Career Choice
One of the most impactful decisions an individual can make is their choice of career. The average person spends about 80,000 hours at work throughout their lifetime, yet many find their jobs unfulfilling and without much impact. MacAskill suggests treating career choices like scientific hypotheses:
- Spend significant time researching options
- Make an educated guess about the best longer-term path
- Try it for a couple of years
- Update your hypothesis based on your experience
- Repeat as needed
This iterative approach allows you to continually move toward the best option for both yourself and the world. Given the potential impact of your career on addressing long-term challenges, it's crucial to approach this decision thoughtfully and systematically.
Conclusion
"What We Owe the Future" presents a compelling case for longtermism – the idea that we have a moral obligation to consider and act on behalf of future generations. MacAskill argues that our actions today can have an enormous impact on the lives of countless future individuals, and that we have both the responsibility and the ability to shape a better future.
The book explores various existential risks, from engineered pandemics to the potential dangers of artificial general intelligence, and emphasizes the importance of safeguarding against these threats. It also highlights the non-inevitability of moral progress and the need to actively work towards creating a more ethical and sustainable world.
MacAskill provides practical advice for individuals who want to make a difference, suggesting high-impact actions such as strategic charitable giving, political activism, and careful career choice. He emphasizes that while the challenges we face are significant, each person has the power to contribute meaningfully to shaping a better future.
Ultimately, "What We Owe the Future" is a call to action, urging readers to think beyond their immediate circumstances and consider the long-term consequences of their choices. By adopting a longtermist perspective, we can work together to create a flourishing future for generations to come.