
Carl T. Bergstrom & Jevin D. West

Calling Bullshit

12 min read · 4.1 (4,591 ratings)

Why is it that in a world flooded with information, we still fall for so much bullshit? This book helps equip you with the tools to recognize and combat it.

1. The Everlasting Presence of Falsehoods

Bullshit is not a new phenomenon. Its existence traces back to ancient times, when Plato accused the Sophists of valuing argument-winning over truth-seeking. Today, it pervades modern media, scientific communities, and everyday conversations. Ignoring it comes with consequences. A poignant example is Andrew Wakefield's 1998 study falsely linking vaccines to autism, which sparked the anti-vax movement that persists today and has fueled outbreaks of preventable diseases like measles.

The environment for such falsehoods has significantly evolved. Social media amplifies the reach of these fabrications, allowing misinformation to spread globally in seconds. Coupled with sensational reporting by hyper-partisan news outlets, lies often gain more traction than truths, wreaking societal havoc.

Adding image manipulation and fake news factories to the mix deepens this crisis. For example, after the Boston Marathon bombing in 2013, false reports surfacing online claimed a child from the Sandy Hook tragedy had died during the race. Shared tens of thousands of times, the untruth was hard to retract. This highlights the urgency of vigilance and action.

Examples

  • Wakefield’s discredited vaccine study led to measles resurgences globally.
  • Social media's rapid dissemination of the Boston Marathon hoax.
  • Plato’s criticism of the Sophists sowed seeds of suspicion toward persuasive rhetoric.

2. Bullshitters Value Persuasion Over Truth

Bullshit thrives because certain individuals prioritize persuasion, not evidence. While liars know they are lying, bullshitters often don’t care if their arguments align with reality as long as they sway opinions. They mask their points with complexity, hiding their lack of substance.

Bruno Latour’s concept of “black boxes” illustrates how opaque processes enable manipulation. Take the 2016 study claiming that criminals’ facial features differ from non-criminals’. Its algorithm produced flashy results, but the data had a key flaw: the criminals’ images were government ID photos while the non-criminals’ were promotional headshots, so the two sets differed systematically in facial expression. The algorithm learned the photo source, not criminality.

Often, the manipulation isn't intentional. Researchers sometimes choose evidence that validates their beliefs, overlooking inconsistencies. This tendency often leads to oversights and unsupported conclusions, further enabling bullshit to flourish unchecked.

Examples

  • The criminal-face study tied subtle facial features to criminality but relied on mismatched data sets.
  • Misleading marketing often overwhelms audiences with elegant but hollow graphics.
  • Black box studies obscure their methods, making their validity harder to critique.

3. “Correlation Does Not Imply Causation”

Scientific observations frequently uncover correlations, but a correlation doesn’t automatically indicate cause-and-effect. Misinterpreting such findings leads to false conclusions, often amplified by media in search of catchy headlines.

For instance, Zillow reported a link between rising house prices and declining birth rates among women in their late twenties. Later scrutiny found no direct cause-and-effect relationship; shared socioeconomic factors, such as couples in expensive cities postponing parenthood, likely influenced both trends.

Correlations often occur by coincidence. Consider the stark rise in organic food sales mirrored by increasing autism diagnoses. The data present similar trajectories, but suggesting organic food contributes to autism would be absurd. Misread narratives like these fuel the bullshit machine.
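The organic-food trap is easy to reproduce: any two quantities that merely trend upward over the same years will correlate strongly, whatever the domain. A minimal sketch with made-up yearly figures (not real sales or diagnosis data):

```python
# Two made-up series that both simply grow over time:
organic_sales = [10, 14, 19, 27, 38, 52, 70, 91]        # illustrative units
diagnoses     = [110, 130, 160, 200, 240, 300, 360, 430]  # illustrative counts

def pearson(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r = pearson(organic_sales, diagnoses)
print(f"correlation: {r:.3f}")  # very close to 1.0, yet neither causes the other
```

A correlation near 1.0 here reflects nothing but the shared upward trend over time, which is exactly why trending time series are fertile ground for spurious claims.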

Examples

  • Press misconstrued Zillow’s housing and fertility correlation as cause and effect.
  • Organic food sales and autism rates show statistical alignment devoid of causality.
  • College kissing studies linked self-esteem with early romance without clarifying external influences.

4. Numbers Can Easily Be Manipulated

Statistics often carry persuasive weight, appearing objective and trustworthy. However, the way they are framed can mislead audiences. Carl Bergstrom experienced this firsthand when hot cocoa packaging bragged about being 99.9% caffeine-free – a technically true figure that sounds far more meaningful than it is, since by weight even regular coffee is more than 99.9% caffeine-free.

Specific numbers provoke exaggerated reactions. For example, Breitbart described 2,139 DACA (Deferred Action for Childhood Arrivals) recipients as having criminal records. Taken alone, this number stoked fear, even though only a minuscule proportion of DACA recipients were implicated compared to the broader U.S. population.

Another tactic conflates relative percentage changes with absolute percentage-point differences. An alcohol study claiming that daily consumption raises health risk by half a percentage point may alarm readers, but if the baseline risk for abstainers is only about 1%, the absolute number of people affected is tiny, even though the relative increase sounds dramatic.
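The relative-versus-absolute distinction is easy to make concrete. A short sketch using the round, illustrative numbers above (a 1% baseline risk rising to 1.5%) shows how the same data supports two very different headlines:

```python
baseline = 0.010    # illustrative 1% baseline risk for abstainers
with_drink = 0.015  # illustrative 1.5% risk for daily drinkers

absolute_increase = with_drink - baseline               # percentage points
relative_increase = (with_drink - baseline) / baseline  # percent change

print(f"absolute: +{absolute_increase:.1%} points")  # +0.5% points
print(f"relative: +{relative_increase:.0%}")         # +50%
```

"Risk up 50%" and "risk up half a percentage point" describe the identical result; only the framing differs.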

Examples

  • Cocoa marketing used an inflated percentage figure to sell a basic fact.
  • Breitbart sensationalized low-level crimes among DACA recipients.
  • Percentages often mislead when not contextualized with raw, absolute values.

5. Selection Bias Warps Perceptions

Sampling errors frequently distort data interpretation. Selection bias occurs when a sample does not accurately represent the population. For example, a survey of Dutch heights would mislead if it inadvertently oversampled basketball players, falsely inflating the national average.

Car insurance companies exploit this principle by advertising average savings of $500 for customers who switch. But people only switch when a quote actually saves them money, so the advertised average reflects only those who saved; the majority who wouldn't benefit never enter the sample, skewing perceptions.
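The savings claim can be simulated with a toy model. Assuming, purely for illustration, that quoted savings are symmetric around zero (so switching is a wash on average), the "average savings among switchers" still comes out to hundreds of dollars:

```python
import random

random.seed(0)

# Hypothetical model: each shopper's quoted "savings" from switching is
# drawn symmetrically around zero -- on average, switching is a wash.
quotes = [random.gauss(0, 400) for _ in range(100_000)]

# People only switch when the quote actually saves them money.
switchers = [q for q in quotes if q > 0]

avg_all = sum(quotes) / len(quotes)
avg_switchers = sum(switchers) / len(switchers)

print(f"average over everyone:  ${avg_all:,.0f}")        # near $0
print(f"average over switchers: ${avg_switchers:,.0f}")  # hundreds of dollars
```

The advertised figure is true for the self-selected group of switchers and meaningless for everyone else, which is the essence of selection bias.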

This issue spills into clinical studies as well. If patients drop out of drug trials because of side effects, their exclusion reshapes the statistical outcomes. The surviving data can understate harms and overstate safety, misleading trusting audiences.

Examples

  • Height surveys risk distortion by including professional athletes.
  • Insurance savings statistics rely on selective user data to give false impressions.
  • Dropouts in medical trials lead to incomplete drug safety evaluations.

6. Big Data Isn’t Always Reliable

Advancements like big data and machine learning promise objectivity, yet they often fail because of flawed inputs. Without reliable underlying data, algorithms learn incorrect patterns and reach nonsensical conclusions. Machine-learning systems excel at finding correlations but cannot tell meaningful patterns from coincidence.

For instance, Google's Flu Trends algorithm correlated "high school basketball" searches with flu prevalence, as both peaked in winter. While mathematically striking, such links held no logical medical basis. Unsurprisingly, its predictions grew inconsistent over time.

Similarly, an X-ray scanning AI successfully flagged certain heart issues, but it did so by recognizing incidental text annotations left by one specific machine rather than the scans themselves. Tested on data from other machines, its effectiveness vanished. This highlights the continued importance of human expertise.

Examples

  • Google Flu Trends falsely linked flu spread with unrelated internet search trends.
  • Chest X-ray detection algorithms tied diseases to specific imaging devices by accident.
  • Machine-learning criminal face studies linked flawed details unrelated to personality traits.

7. Science Isn’t Free from Flaws

While science relies on self-correction over time, it isn’t immune to error. Researchers prioritize publishing positive results to boost influence, ignoring less exciting failures. This creates a publication bias, where discoveries seem far more impressive than reality.

Statistical manipulation, like p-hacking, compounds these issues. Researchers tweak experimental parameters until their results cross the conventional p < 0.05 significance threshold, making chance findings look like real effects and misleading readers and fellow scientists alike.
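Why the p < 0.05 threshold is so easy to game can be seen by brute force: test enough pure noise and "significant" results appear by chance alone. A toy simulation (the hand-rolled t-statistic and the 2.0 cutoff are rough illustrations, not a rigorous test):

```python
import random
import statistics

random.seed(1)

def two_sample_t(xs, ys):
    """Plain two-sample t statistic for equal-size groups."""
    n = len(xs)
    mean_diff = statistics.fmean(xs) - statistics.fmean(ys)
    se = ((statistics.variance(xs) + statistics.variance(ys)) / n) ** 0.5
    return mean_diff / se

# Run 200 "experiments" where, by construction, there is no real effect.
hits = 0
for _ in range(200):
    a = [random.gauss(0, 1) for _ in range(30)]
    b = [random.gauss(0, 1) for _ in range(30)]
    # |t| > 2.0 roughly corresponds to p < 0.05 at this sample size;
    # with no real effect, about 5% of runs still clear the bar by chance.
    if abs(two_sample_t(a, b)) > 2.0:
        hits += 1

print(f"'significant' results from pure noise: {hits}/200")
```

A researcher who quietly runs many comparisons and reports only the "hits" is harvesting exactly these chance results.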

These issues don’t stay confined to labs. Less reputable journals readily accept dubious studies for a fee. To avoid falling victim to bad science, note the reputation of the journals reporting extraordinary claims.

Examples

  • P-hacking massages statistical results until they clear the p < 0.05 bar, making them look legitimate.
  • Predatory journals publish attention-grabbing claims without proper peer review.
  • Publication bias suppresses negative results that contradict headline findings.

8. Skepticism Remains Your Strongest Tool

To identify bullshit, approach information skeptically. Whenever a figure seems implausible, run a mental “Fermi estimate,” a rough back-of-the-envelope calculation of scale. For example, if someone cites an outlandish number of people named John Smith, estimate it yourself by multiplying the UK population by the fraction of people surnamed Smith and the fraction with the first name John.
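A Fermi estimate is just a chain of rough multiplications, where each factor only needs to be right to within a factor of a few. A sketch with illustrative figures (the 1% fractions are assumptions for the sake of the exercise, not census data):

```python
# All inputs are rough, illustrative figures -- the point of a Fermi
# estimate is order-of-magnitude accuracy, not precision.
uk_population = 67_000_000   # ~67 million people
frac_surname_smith = 0.01    # assume ~1% of people are surnamed Smith
frac_firstname_john = 0.01   # assume ~1% of people are named John

john_smiths = uk_population * frac_surname_smith * frac_firstname_john
print(f"rough estimate: ~{john_smiths:,.0f} John Smiths")  # ~6,700
```

If someone claims a figure ten or a hundred times larger, the estimate tells you one of their implied assumptions must be badly wrong.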

Avoid falling prey to confirmation bias, wherein individuals validate pre-existing opinions with selective “evidence.” Actively question sources that resonate too conveniently with personal beliefs by investigating objectivity behind shared articles.

Lastly, remain cautious about online reliability. Tweets, Facebook shares, unverified studies—all represent dubious foundations when critically dissected. Share conscientiously rather than joining the disinformation cycle.

Examples

  • A quick Fermi estimate debunks exaggerated claims about the number of John Smiths.
  • Questioning stories that flatter your existing beliefs guards against confirmation bias.
  • Verifying claims through neutral sources filters out social media’s emotional pull.

Takeaways

  1. Always double-check correlations: When you hear that one event causes another, think deeply about other possible explanations and refrain from immediate conclusions.
  2. Be precise with data interpretation: Scrutinize percentages and data framing, looking beyond sensational presentation to understand actual implications.
  3. Call out misinformation calmly: While standing against falsehoods, use accurate resources and adopt polite language to sway others effectively rather than alienating them.
