Artificial intelligence is neither artificial nor intelligent; it is grounded in the raw, complex, and often exploitative realities of our physical world.
1. The Illusion of AI's Intelligence
AI may seem like a marvel of modern intelligence, but its operations are far from independent thought. It is not capable of the reasoning or understanding that defines human cognition. Instead, it relies on training over massive datasets and on rules predetermined by humans.
The story of Clever Hans, a horse that appeared to solve math problems but was merely responding to human cues, reflects our tendency to misinterpret signals as intelligence. Similarly, when we see AI performing tasks, we might attribute human-like reasoning, overlooking the human biases and goals embedded in its programming. This highlights the risks of overestimating AI's capabilities.
AI's seeming "intelligence" is nothing more than pattern recognition within predefined limits. This bounded functionality means that while AI excels in controlled environments, it struggles with fluidity, adaptability, and rich context—the hallmarks of human intelligence. Misunderstanding this has consequences for how we apply and trust AI in critical contexts, like medicine or criminal justice.
Examples
- Clever Hans responded to human body language rather than solving problems.
- Language models generate text by predicting statistically likely next words from patterns in extensive datasets, but they lack true understanding (a minimal sketch follows this list).
- AI often amplifies existing biases present in the data, misrepresenting reality.
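To make "pattern recognition within predefined limits" concrete, here is a minimal sketch in Python. It is purely illustrative, not drawn from the book or from any real system: a bigram generator that produces plausible-looking text solely from word-pair counts, with nothing resembling understanding behind it.

```python
# Purely illustrative toy, not any production system: a bigram "language model"
# that generates text from word-pair counts alone. It recombines patterns it
# has seen; it has no access to meaning.
import random
from collections import defaultdict

corpus = (
    "the horse taps its hoof the trainer nods the crowd applauds "
    "the horse watches the trainer the crowd watches the horse"
).split()

# Count how often each word follows each other word in the training text.
bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def generate(start, length=8):
    """Extend `start` by repeatedly sampling a statistically likely next word."""
    word, output = start, [start]
    for _ in range(length):
        followers = bigram_counts.get(word)
        if not followers:          # the model has never seen this word: dead end
            break
        words, counts = zip(*followers.items())
        word = random.choices(words, weights=counts)[0]
        output.append(word)
    return " ".join(output)

print(generate("the"))      # fluent-looking, but only a replay of observed pairs
print(generate("justice"))  # outside its training patterns, it produces nothing
```

Real language models operate at vastly larger scale, but the underlying move is the same in kind: predicting statistically likely continuations from training data, which is why fluent output is so easily mistaken for reasoning.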
2. AI's Hidden Extraction Network
Behind the polished facades of AI lies a physical infrastructure that extracts minerals, labor, and resources from our planet at great human and environmental expense.
Places like Silver Peak, Nevada, where lithium is mined for the batteries in modern devices, form part of the physical backbone of AI's energy-intensive operations. This global network also relies on rare earth metals from Inner Mongolia and tin from Indonesia. These activities disrupt ecosystems, displace communities, and often subject workers to hazardous conditions. Such impacts rarely enter the conversation about AI's benefits.
The parallels between today's AI industry and past resource booms reveal repeating patterns of exploitation. Like gold and silver extraction during the 19th century, today's AI draws on vast supplies of labor and materials hidden from sight. This underscores a pattern of growth that prioritizes wealth for a few at the expense of many.
Examples
- The Nevada lithium mines that fuel batteries for AI-dependent devices.
- Unregulated labor in the countries that mine AI's raw materials.
- Data centers consuming more electricity than some entire nations, exacerbating climate concerns.
3. The Environmental Costs of AI
AI's growing dependence on resource extraction collides with the environmental crisis, making it an unsustainable force.
Data centers integral to AI consume vast amounts of energy, much of which still comes from fossil fuels. The environmental toll includes soaring energy demands and intensified mining to source materials for cloud infrastructure and device batteries. As AI expands, its environmental footprint grows faster, despite tech companies' clean-energy pledges.
The AI industry continues to operate on what could be termed a "delusion of green technology." Despite claims of sustainability, today's AI systems rely on finite resources and leave lasting ecological damage. AI's rise appears set to widen the gap between promises of progress and environmental reality.
Examples
- Many data centers run on coal-heavy grids despite the clean-energy imagery in their advertising.
- Lithium mining devastates habitats, introducing toxic runoff to local water supplies.
- The carbon emissions of data centers resemble those of mid-sized industrialized nations.
4. Digital Labor and Inequality
At the heart of AI is a dependence on human labor that amplifies social inequalities even as it promises automation.
The data labeling required to train AI relies heavily on global networks of low-wage workers, with individuals in poorer countries doing repetitive online tasks such as categorizing photos or transcribing speech. Similarly, the manufacturing workforce behind AI—from assembling devices to producing chips—operates in harsh environments under exploitative conditions.
By concentrating profits in wealthy tech hubs while distributing labor across underprivileged regions, AI widens the gap between tech elites and marginalized workers. This raises questions about the ethics of the so-called "progress" AI delivers.
Examples
- Gig workers in Southeast Asia label images for machine learning at subsistence wages.
- Device factory workers, particularly in Asia, endure dangerous working conditions.
- AI enriches corporate monopolies while leaving low-wage contributors behind.
5. The Data Gold Rush
AI's hunger for data has fueled a new era of extraction—this time, targeting human activity and expressions.
Massive datasets used to train AI often originate from unethical practices like scraping public images, text, or videos without consent. This approach mirrors resource extraction, treating human expression as raw material to exploit. In the unregulated rush to collect data, tech companies scrape blogs, media, and social platforms, erasing the contexts in which that material was created.
By disconnecting data from its origins, the tech industry creates systems rife with errors and biases. This leads to flawed AI applications in areas like policing, where biased training data can harm marginalized communities; the sketch after the examples below illustrates that feedback loop.
Examples
- ImageNet's database scraped millions of photos without user consent.
- IBM's early language models relied on documents like legal transcripts, disregarding privacy.
- AI in predictive policing reproduces systemic racial biases embedded in its datasets.
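As a hypothetical sketch (the districts, rates, and numbers below are invented for illustration; this is not code from the book or from any deployed system), the following Python loop shows the feedback the summary points to: if patrols are allocated according to past recorded incidents, and those records themselves reflect where patrols were previously sent, an initial skew in the data gets read back as ground truth.

```python
# Hypothetical feedback loop, with invented numbers: a toy "predictive" patrol
# allocator that treats its own past deployment records as evidence of demand.
import random

random.seed(0)

districts = ["A", "B"]
true_rate = {"A": 0.10, "B": 0.10}   # identical underlying incident rates
recorded = {"A": 30, "B": 10}        # but district A was historically over-patrolled

def allocate_patrols(records, total=10):
    """Assign patrols in proportion to past records: more records, more patrols."""
    total_records = sum(records.values())
    return {d: round(total * records[d] / total_records) for d in districts}

for year in range(5):
    patrols = allocate_patrols(recorded)
    for d in districts:
        # Incidents can only be recorded where patrols are actually present,
        # so observation tracks patrol placement rather than the true rate.
        observed = sum(random.random() < true_rate[d] for _ in range(patrols[d] * 100))
        recorded[d] += observed
    print(year, patrols, recorded)

# Although both districts have identical true rates, the initial skew never
# corrects itself: district A keeps drawing most of the patrols, and each pass
# writes that imbalance deeper into the "data" the allocator learns from.
```

This is what the summary means by biased training data having harmful effects: the system does not discover reality, it entrenches the history of how its data was collected.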
6. The Politics of Classification
AI classifications, far from being neutral, reflect the biases of their creators, reinforcing stereotypes and systemic inequalities.
The process of categorizing images or identifying traits relies on historical and cultural assumptions. Training data, for instance, often includes offensive labels that harm marginalized groups. Rather than capturing nuance, AI systems built on such categories proliferate stereotypes.
The reliance on fixed categories mirrors pseudoscientific endeavors like 19th-century phrenology and craniometry, which claimed to read character from skull measurements. These outdated views persist in modern AI, encoding oppressive systems into technology under the guise of objective algorithms.
Examples
- ImageNet categories label people with derogatory and biased terms like "kleptomaniac."
- Datasets often reflect a Eurocentric perspective, marginalizing non-Western identities.
- Facial-recognition systems show markedly higher error rates for darker-skinned individuals (a toy illustration follows this list).
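As another hypothetical illustration (the score distributions and group labels are invented; this is not a real benchmark or a description of any specific product), the short Python sketch below shows one mechanism behind such disparities: a decision threshold tuned only on a well-represented group performs noticeably worse on an under-represented group whose data looks different.

```python
# Invented numbers for illustration only: a match-score threshold is calibrated
# on the over-represented group A, then applied unchanged to group B, whose
# score distributions differ. The result is a much higher error rate for B.
import random

random.seed(1)

def scores(mean, n):
    """Draw n one-dimensional match scores around a group-specific mean."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Calibration data comes almost entirely from group A.
cal_match, cal_non_match = scores(2.0, 900), scores(-2.0, 900)
threshold = (sum(cal_match) / len(cal_match) + sum(cal_non_match) / len(cal_non_match)) / 2

def error_rate(match, non_match, t):
    """Fraction of matches rejected plus non-matches accepted at threshold t."""
    mistakes = sum(s <= t for s in match) + sum(s > t for s in non_match)
    return mistakes / (len(match) + len(non_match))

# Group B's scores are shifted and overlap more, but the same threshold is used.
group_a = (scores(2.0, 500), scores(-2.0, 500))
group_b = (scores(1.0, 500), scores(-0.5, 500))

print("error rate, group A:", round(error_rate(*group_a, threshold), 3))
print("error rate, group B:", round(error_rate(*group_b, threshold), 3))
```

The disparity here comes entirely from whose data shaped the threshold, which echoes the point above about datasets reflecting a narrow, Eurocentric perspective.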
7. AI and the Surveillance Economy
AI thrives on surveillance, which intrudes into personal lives, eroding privacy while enriching corporations.
To power its functions, AI collects enormous amounts of personal data from phone use, social media interactions, and surveillance cameras. This invasive practice turns human lives into commodities while benefiting private tech companies. Over time, this data extraction infrastructure expands its scope, reaching into public spaces and private homes.
Data collection has given corporations unprecedented insight into behavior trends, enabling manipulative tactics in advertising and politics. In the process, individuals lose rights over their digital identities.
Examples
- Social media algorithms track user preferences and exploit these for targeted advertisements.
- Smart home devices collect sensitive information from users for corporate profit.
- Governments adopt biased facial recognition technologies for widespread surveillance.
8. AI's Role in Reinforcing Power Structures
Far from neutral, AI empowers already dominant groups, centralizing wealth and influence while marginalizing others.
Tech giants dominate AI development, securing most of the profit while wielding immense influence over how AI enters society. These companies' power shapes markets, laws, and cultural norms, often in ways that deepen systemic biases. AI's benefits are distributed unequally, mirroring and magnifying existing hierarchies.
The unchecked authority of tech monopolies must be questioned, as these companies often bypass ethical considerations and regulatory constraints in pursuit of profit.
Examples
- Social media platforms tune their algorithms to maximize corporate gains rather than serve the public interest.
- Tech lobbying ensures that the industry operates with little oversight.
- AI-driven hiring tools reinforce workplace discrimination learned from biased training data.
9. Rethinking Progress
The author suggests that placing the AI industry under scrutiny opens pathways to redefine progress altogether.
Rather than tying technological advances to extraction, exploitation, and surveillance, society can pursue frameworks that prioritize ethics, sustainability, and social equity. Systems can be built to serve public welfare instead of corporate profits if accountability and transparency standards are actually enforced.
In envisioning alternatives, the book challenges readers to rethink the role of computation within the broader human experience. Technology must align with equitable principles to avoid becoming a tool of harm.
Examples
- Renewable energy-powered data centers reduce the strain on the environment.
- Developing ethical guidelines centered on community values for new AI tools.
- Greater representation from marginalized groups within AI decision-making processes.
Takeaways
- Demand transparency from tech companies about their data collection practices, algorithms, and supply chains.
- Advocate for policies that regulate AI's environmental impact and labor ethics, ensuring technologies are developed sustainably and fairly.
- Cultivate critical thinking about the biases and systems embedded in AI to make ethical decisions about its use in society.