What if the very apps meant to connect us are instead the tools dividing us?
Social Media Taps Into Addictive Psychology
Social media platforms like Facebook and Twitter are strategically designed to exploit human psychology. By offering inconsistent rewards such as likes and comments, they create an addictive cycle akin to gambling. This method, called intermittent variable reinforcement, gives users dopamine hits, keeping them constantly engaged. People often find themselves spending hours scrolling, caught in a feedback loop of seeking social validation.
This addictiveness isn’t accidental. Sean Parker, Facebook’s first president, openly admitted that these apps were built to consume as much of users' time and attention as possible. Our innate need for connection and approval fuels this engagement. Social media feeds into our desire to define our identities and receive affirmation, which might seem harmless at first glance.
However, these tools can aggravate division. For instance, BuzzFeed-style identity lists like "31 Things Only People from a Small Town Will Understand" may seem lighthearted but reinforce an "us versus them" mindset. This dynamic can lead to conflicts, both online and in real life, as groups solidify against one another.
Examples
- Facebook and Instagram users often check their posts obsessively to see how many likes they've earned.
- Slot machines in casinos employ similar reinforcement techniques to keep players hooked.
- BuzzFeed articles reinforce social identity by appealing to niche group dynamics.
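To see why intermittent variable reinforcement is so sticky, it can help to picture it as a tiny simulation in the spirit of the slot-machine comparison above. The sketch below is purely illustrative: the probabilities, payoffs, and function names are invented for this summary and are not any platform's real notification logic.

```python
import random

# Illustrative sketch only: a toy model of intermittent variable reinforcement.
# Probabilities and payoffs are invented, not any platform's actual behavior.

def check_feed(reward_probability: float = 0.3) -> int:
    """Simulate one 'pull': opening the app to see whether new likes arrived."""
    # Like a slot machine, the payoff is unpredictable: occasionally a burst
    # of likes, usually nothing. The uncertainty itself drives re-checking.
    return random.randint(1, 20) if random.random() < reward_probability else 0

def simulate_session(checks: int = 10) -> None:
    for i in range(1, checks + 1):
        likes = check_feed()
        outcome = f"{likes} new likes!" if likes else "nothing new"
        print(f"check {i:2d}: {outcome}")

if __name__ == "__main__":
    simulate_session()
```

Running it a few times shows the pattern Fisher describes: most checks pay nothing, but the occasional jackpot keeps the next check feeling worthwhile.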
Expanded Social Circles Breed Conflict
Social media breaks through natural social limits, with unintended consequences. Anthropologist Robin Dunbar proposed that humans can maintain only about 150 meaningful relationships, a ceiling often called Dunbar's number, shaped by our evolution in small groups. To drive engagement, platforms like Facebook and Twitter expanded our network exposure by promoting interaction with “weak ties,” or friends of friends.
While this added connections, it also created complications: larger networks brought more hostility and misunderstanding. Studies of primates such as rhesus monkeys suggest why. In bigger groups, these animals focus more on hierarchy and control, and aggression rises. Humans facing the digital equivalent appear to be similarly affected.
Meanwhile, algorithms pushed users toward ever more polarizing and extreme content. A parent might join a mainstream parenting group and be algorithmically nudged toward anti-vaccine conspiracies, a drift sketched after the examples below. This overexposure strained users' social cognition, contributing to distrust and radicalization in online spaces.
Examples
- The average Facebook user in 2013 had 130 friends, but algorithms encouraged engaging with broader acquaintances.
- Rhesus monkey studies revealed heightened aggression in larger group sizes.
- Facebook’s algorithm once led standard parenting group members toward anti-vax conspiracies.
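One way to picture the nudge Fisher describes is a recommender that always offers something a step more extreme than what the user already engages with, because edgier content tends to hold attention longer. The sketch below is a toy illustration under that assumption; the group names, "extremity" scores, and the rule itself are invented, not drawn from any platform's actual system.

```python
# Illustrative sketch only: a toy model of recommendation drift toward extremes.
# Group names and extremity scores are hypothetical.

groups = [
    {"name": "Everyday Parenting Tips",   "extremity": 0.10},
    {"name": "Natural Remedies for Kids", "extremity": 0.40},
    {"name": "Question the Doctors",      "extremity": 0.70},
    {"name": "Vaccine Truth Underground", "extremity": 0.95},
]

def recommend_next(current_extremity: float) -> dict:
    """Suggest the group just slightly more extreme than the user's current one."""
    candidates = [g for g in groups if g["extremity"] > current_extremity]
    return min(candidates, key=lambda g: g["extremity"]) if candidates else groups[-1]

# A parent who joins a mainstream group gets nudged one step at a time.
position = 0.10
for _ in range(3):
    nxt = recommend_next(position)
    print("Recommended:", nxt["name"])
    position = nxt["extremity"]
```

Each step looks small and reasonable on its own, which is exactly what makes the cumulative drift so hard to notice.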
Outrage as a Driving Force
Social media amplifies moral outrage, making it the currency of online interactions. Historically, small human groups relied on outrage to enforce social norms, ensuring accountability and maintaining harmony. These instincts now manifest digitally but at a far larger scale, with public shaming and viral posts replacing village gossip.
Platforms reward outrage by promoting polarizing content, boosting it to wider audiences. In 2020, a Central Park confrontation became a global sensation after a video of a woman falsely accusing a birdwatcher of threatening her went viral. While the internet demanded harsh punishment, the birdwatcher she had accused later questioned the fallout, saying the backlash against her felt excessive.
The mechanism for amplifying outrage is simple: it drives engagement. Users feel validated when joining collective outrage, receiving likes and comments for their responses. However, this dynamic fuels division rather than resolution, leading to disproportionate consequences and escalating animosity.
Examples
- The Central Park birdwatching incident reached 40 million views and set off repercussions that many came to see as excessive.
- Outrageous posts receive higher virality due to platform algorithms.
- The dopamine hit from expressing outrage mirrors our evolutionary instinct to enforce order.
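The reward structure Fisher describes can be pictured as a scoring rule that weights the interactions outrage generates, such as comments and shares, more heavily than quiet approval. The sketch below is a hypothetical illustration only; the weights and the "outrage score" are invented and are not the actual ranking formula of any platform.

```python
from dataclasses import dataclass

# Illustrative sketch only: a toy ranking rule showing why outrage spreads.
# Weights and the outrage_score field are invented for this example.

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int
    outrage_score: float  # hypothetical 0-1 measure of moral-emotional language

def engagement_rank(post: Post) -> float:
    # Comments and shares (arguments, pile-ons) count for more than likes,
    # and outrage-laden wording multiplies the score, so the angriest posts
    # get pushed to the widest audience.
    base = post.likes + 3 * post.comments + 5 * post.shares
    return base * (1 + 2 * post.outrage_score)

posts = [
    Post("Nice sunset tonight", likes=120, comments=4, shares=2, outrage_score=0.05),
    Post("You won't BELIEVE what this person did", likes=80, comments=90, shares=60, outrage_score=0.9),
]

for p in sorted(posts, key=engagement_rank, reverse=True):
    print(f"{engagement_rank(p):8.1f}  {p.text}")
```

Under a rule like this, the angrier post wins even with fewer likes, which is the feedback loop between validation and escalation described above.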
The Algorithm Magnifies Misinformation
Algorithms are powerful tools that shape user experiences, often for the worse. Designed to maximize engagement, these systems prioritize content tailored to evoke strong emotions, including fear, anger, or fascination. This makes them the perfect conduits for spreading misinformation and divisive ideas.
During the COVID-19 pandemic, misinformation, such as false claims about vaccines and the virus's origins, spread far and wide. Platforms like Facebook did little to stop the tide until pressure mounted. Worse still, conspiracy theorists like Alex Jones and groups spreading fake election narratives thrived, further dividing public opinion.
The risks extend beyond entertainment or annoyance. In Myanmar, Facebook was accused of enabling hate speech that contributed to genocide against the Rohingya minority. These extreme real-world effects highlight how the structure of algorithm-driven platforms can spiral out of control.
Examples
- Facebook employees staged a walkout over inaction on Donald Trump’s inflammatory posts.
- YouTube’s algorithms promoted Alex Jones’s conspiracy content.
- Facebook’s failure to act in Myanmar exacerbated ethnic violence and genocide.
January 6th: A Movement Made by Social Media
The storming of the U.S. Capitol on January 6, 2021, exemplified how misinformation can manifest in violent action. Many rioters were motivated by conspiracy theories about election fraud—content they consumed on Facebook, Twitter, and YouTube. They believed they were acting patriotically because of what they read online.
Social media posts rallied thousands for the event, echoing Donald Trump’s tweets like "Big protest in D.C. on January 6th. Be there, will be wild!" Once there, participants used the platforms to share updates, further fueling the chaos. Tragically, the event led to injuries and deaths, as well as severe damage to trust in the democratic process.
The Capitol siege revealed not just the danger of these platforms but also their power. Platforms temporarily banned Trump afterward, demonstrating they could act. But by then, the damage—wrought through years of unchecked falsehoods—was done.
Examples
- QAnon believers shared election misinformation, deepening radical beliefs.
- Ashli Babbitt, a Capitol rioter, was wearing a Trump flag as a cape when she was killed.
- Trump’s rallying tweets directly mobilized participants for the January 6th event.
Profit Over People
Social media giants prioritize money over user safety, as whistleblowers and internal documents have shown. Frances Haugen, a former Facebook employee, revealed damning insights into the company’s practices in 2021. She argued that Facebook’s refusal to modify its algorithms showed the company consistently chose profit over safety.
Haugen presented documents that showed Facebook knew about the harms caused by misinformation and hate speech on its platform. Still, executives chose not to intervene, fearing that changes would cut user engagement and, consequently, revenue. This grim reality points to an unsustainable model focused purely on growth.
Though some clamored for reform, meaningful changes to these platforms remain elusive. The companies seem unwilling to adopt solutions that might hamper their bottom line, even if it means reducing harm.
Examples
- Frances Haugen accused Facebook of ignoring vaccine misinformation in favor of ad revenue.
- Internal Facebook reports flagged rising hate speech but spurred no action.
- Haugen's whistleblower interview on "60 Minutes" exposed systemic negligence.
The Algorithm Is the Real Danger
Algorithms decide what content dominates our feeds, shaping how users perceive the world. This system encourages the spread of sensational and divisive content, turning social platforms into outrage machines. With algorithms at the center of every click, they exert immense influence on behavior.
Some tech critics argue that turning off these algorithms could reduce their manipulative power. Without them, viral controversies and misinformation might lose momentum. However, the platforms remain reluctant, knowing this move would decrease user engagement.
While algorithms aren’t the sole problem, they’re undeniably central. Small tweaks or policy changes won't fix the issue entirely. A larger reset may be required to realign social media with healthier human dynamics.
Examples
- Facebook algorithms promoted posts that led users into extreme advocacy groups.
- Outrage-driven content received higher visibility due to algorithmic preferences.
- Critics argue disabling algorithms would slow the spread of harmful materials.
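What "turning off the algorithm" would mean in practice is roughly a fallback to reverse-chronological ordering. The toy comparison below, with invented posts and an invented engagement metric, illustrates the difference critics have in mind; it is not a description of any platform's real feed.

```python
from datetime import datetime, timedelta

# Illustrative sketch only: contrasting an engagement-ranked feed with a
# chronological one. Posts and engagement numbers are invented.

now = datetime(2021, 1, 1, 12, 0)
posts = [
    {"text": "Local bake sale this weekend",           "engagement": 15,   "posted": now - timedelta(minutes=5)},
    {"text": "Furious thread about a viral controversy", "engagement": 4200, "posted": now - timedelta(hours=9)},
    {"text": "Photos from a friend's hike",             "engagement": 40,   "posted": now - timedelta(hours=1)},
]

# Engagement ranking: the nine-hour-old outrage thread still tops the feed.
ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)

# "Turning the algorithm off": newest first, however provocative a post is.
chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)

print("ranked:       ", [p["text"] for p in ranked])
print("chronological:", [p["text"] for p in chronological])
```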
Could We Turn It All Off?
The book raises the question: Should we step away from social media entirely? Max Fisher explores this idea, likening toxic platforms to HAL, the sentient computer in "2001: A Space Odyssey," whose malfunction forced humans to shut it down for good.
While removing social media entirely might seem drastic, experts suggest scaling it back substantially. Loosening how tightly it is woven into people's lives could lead to a calmer and more authentic digital experience. Fisher points out that this might mean losing some entertaining content but could create space for societal healing.
Ultimately, the book suggests, social media is a tool we’ve lost control over. Taking drastic action might be the only way to reverse its damage.
Examples
- Frances Haugen has suggested changes that would rid platforms of engagement-driven algorithms.
- Comparisons to HAL's shutdown in "2001: A Space Odyssey" highlight the need for radical decisions.
- Reduced internet use might lead to less exposure to polarized content.
Takeaways
- Limit your time on social media by scheduling specific periods for checking platforms and sticking to them.
- Diversify your news consumption by relying on trustworthy, non-algorithmic sources instead of social feeds.
- Engage critically online rather than sharing emotionally reactive posts, and verify content before amplifying it.