Artificial intelligence holds a mirror up to humanity, reflecting both our strengths and our darkest flaws.

1. AI Reflects Human Bias

Artificial intelligence, far from being neutral, often reflects the biases embedded in the data it is trained on. Because that data encodes historical and societal norms, unbalanced or discriminatory training sets can lead to unintended consequences. An AI system that misbehaves isn't inherently flawed; it showcases the limitations and exclusions found in human history.

Take Google Photos, Google's AI-powered photo categorization service. In 2015, it mislabeled pictures of Black individuals as "gorillas." This incident wasn't just a glitch; it highlighted deeper systemic issues in data collection and algorithm design. These biases are rooted in a history where non-white individuals were either excluded or inaccurately portrayed, reinforcing harmful stereotypes.

Such issues aren't limited to photo tagging. Joy Buolamwini, then a graduate student, discovered that the face-detection software her robot relied on, built from datasets of mostly white, male faces, failed to detect her own dark-skinned face. The problem is not the technology itself but the insufficient attention given to diverse and fair representation during its development.

Examples

  • Historical exclusion of Black people in 19th-century photography
  • Google's mislabeling of Black individuals in photo categories
  • A robot failing to identify users due to skewed training datasets

2. Photography and Film Technology Were Initially Racist

Photography's development during the 19th and 20th centuries reinforced racial bias, often unintentionally. Because film technology was optimized for white skin tones, darker skin tones were excluded from accurate representation. The intent wasn't always malicious but rather a byproduct of ignoring diverse needs.

Frederick Douglass, the most photographed American of his era, believed photography was a powerful tool for Black representation. Yet the chemistry of photographic film was designed with white subjects in mind: labs calibrated color and exposure against the "Shirley card," a reference image of a white woman, leaving little room for proper representation of anyone else.

Not until the 1970s did this exclusion begin to change. Ironically, it wasn’t civil rights advocates who pushed Kodak to adopt better technology for darker skin tones but furniture and candy manufacturers who wanted more accurate representations of their products. Decades’ worth of photographs and films reinforced the erasure and misrepresentation of non-white subjects.

Examples

  • Frederick Douglass’s campaign for authentic representation in photography
  • The "Shirley card": a white woman as the standard for calibrating film exposure
  • Kodak’s improvement of film development prompted by commercial needs, not civil rights

3. Training Datasets Are Often Skewed

Machine learning depends heavily on the quality and diversity of its training data. Datasets serve as an AI program's knowledge base, so when they are limited or biased, the resulting models reflect those shortcomings.

Buolamwini found that widely used face datasets, such as Labeled Faces in the Wild, consisted predominantly of white, male images; fewer than five percent of the images were of dark-skinned women. Such skewed ratios explain why facial recognition struggles to identify people outside its limited training scope, perpetuating inequality in AI applications.
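One practical response is to measure a dataset's demographic composition before training on it. The sketch below is a minimal illustration in Python; the file name, column names, and five-percent threshold are hypothetical placeholders standing in for whatever metadata a real project has, not part of any published benchmark.

```python
# Minimal sketch: audit the demographic make-up of a face dataset before
# training. The CSV path and the "skin_tone" / "gender" columns are
# hypothetical placeholders for a project's own metadata.
import pandas as pd

metadata = pd.read_csv("face_dataset_metadata.csv")  # one row per image

# Share of the dataset held by each (skin_tone, gender) subgroup
composition = (
    metadata.groupby(["skin_tone", "gender"])
    .size()
    .div(len(metadata))
    .sort_values()
)
print(composition)

# Flag subgroups falling below a chosen representation threshold
THRESHOLD = 0.05  # e.g., five percent of all images
for subgroup, share in composition.items():
    if share < THRESHOLD:
        print(f"Under-represented: {subgroup} ({share:.1%} of images)")
```

Running a check like this before training makes under-representation visible early, when it is still cheap to collect or rebalance data.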

Representation in training data isn't just an ethical concern; it determines how well the system functions. Developers at IBM, for instance, worked to improve their datasets after Buolamwini's findings were published, cutting the error rate for identifying Black women's faces nearly tenfold. That result demonstrated that fixing biases in AI isn't impossible, but it requires intentional effort.

Examples

  • Dominance of white male images in training datasets like "Faces in the Wild"
  • Skewed data leading to racism in facial recognition software
  • IBM’s efforts to reduce errors in identifying dark-skinned individuals

4. Training AI on the Internet Amplifies Problems

The internet is often a chaotic and biased repository of human behavior. Many AI programs use online data to train their algorithms, which can unintentionally pass on humanity's prejudices, toxic behavior, and misinformation to machines.

When Microsoft's chatbot Tay launched on Twitter in 2016, internet users deliberately corrupted it, turning it into a platform for offensive speech within hours. The example shows how AI mimics what it's exposed to, without the ethical filter humans ideally apply. It's not the AI's fault; its behavior reflects its environment.

Filtering harmful or biased information from massive online datasets is labor-intensive. Developers frequently cut corners to meet deadlines, prioritizing speed over accuracy. This issue reveals the urgent need for greater oversight and ethical considerations in early development stages.
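Even a crude pre-training filter makes clear why this work is labor-intensive. The sketch below assumes a plain list of text snippets and a hand-maintained blocklist, both hypothetical; real pipelines combine trained toxicity classifiers with human review, so treat this only as an illustration of the filtering step.

```python
# Minimal sketch of a pre-training text filter. The blocklist terms and the
# sample data are illustrative placeholders; production pipelines use trained
# toxicity classifiers and human review rather than keyword matching.
BLOCKLIST = {"slur_a", "slur_b", "insult_c"}  # hypothetical terms

def is_acceptable(text: str) -> bool:
    """Drop a training example if it contains any blocklisted term."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return words.isdisjoint(BLOCKLIST)

raw_examples = [
    "A perfectly ordinary sentence about the weather.",
    "An example containing slur_a that should be dropped.",
]

cleaned = [t for t in raw_examples if is_acceptable(t)]
print(f"Kept {len(cleaned)} of {len(raw_examples)} examples")
```

Keyword lists miss context and sarcasm, which is exactly why thorough filtering takes the time that rushed teams are tempted to skip.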

Examples

  • Microsoft’s Tay chatbot adopting offensive language
  • Online datasets containing harmful stereotypes
  • Developers prioritizing rapid deployment over ethical integrity

5. Diversity in Development Teams is Crucial

One way to mitigate bias is by ensuring diverse perspectives in AI development teams. Homogeneous groups may overlook challenges that marginalized communities face, reinforcing inequalities rather than addressing them.

Take Buolamwini’s early robots as an example. The homogeneous makeup of the team developing the open-source code failed to account for diverse representation, leading to issues in facial recognition for darker-skinned individuals. With more varied voices at the table, such oversights are less likely.

Companies like IBM, which seek out and listen to feedback from underrepresented groups, show that improvements are possible when diverse perspectives guide technical development. Greater diversity minimizes blind spots and enriches the final product.

Examples

  • Homogeneous teams failing to recognize diverse user needs
  • Joy Buolamwini’s feedback prompting corrections at IBM
  • More inclusive representation improving technology for broader demographics

6. Cultural Awareness Must Guide AI Development

AI must be built with cultural sensitivity, especially when deployed in a global context. Ignoring cultural differences can result in programs that perform poorly outside their dominant demographic.

For instance, social robots built in one country may fail miserably when introduced in another with different cultural norms and population features. Developers must account for geographical, racial, and social differences, training AI to work inclusively across various regions.

Universal inclusion isn’t easy, but case studies where companies made adaptations — such as the Face++ software’s improvements for broader skin-tone representation worldwide — show it’s achievable.

Examples

  • Social robot failures in cross-cultural contexts
  • Regional differences in dataset needs
  • Face++ making global AI suitable for diverse audiences

7. Technological Change Requires Ethical Oversight

AI, like any powerful technology, needs responsible development and regulatory oversight to minimize harm. Without checks and balances, it could cause unprecedented personal, social, and economic problems.

A lack of ethical oversight has already led to the deployment of poorly tested systems, including algorithms that inform prison sentencing and hiring decisions. Left unregulated, these systems entrench discrimination rather than fix it. Regulatory bodies and rigorous testing standards are critical to safeguarding fairness.

Industry leaders and governments are working on frameworks to guide ethical AI development. While still incomplete, these efforts could lead to more transparent practices that balance innovation and humanity.

Examples

  • AI algorithms perpetuating racial discrimination in sentencing
  • Mishaps with biased recruitment AI systems
  • Initiatives promoting ethical frameworks for AI oversight

8. History is Key to Understanding AI

AI development doesn't exist in a vacuum; it builds on human history, including its inequities. A failure to reckon with that history results in systems prone to repeating past mistakes in new and technological ways.

Photography’s racist origins, as seen with outdated film processes, demonstrate how long-held practices shape current tools and algorithms. By examining this history, we can identify hidden biases in tech design and create better solutions.

Looking at the past helps foster empathy and encourages AI systems that serve all groups equitably. Without studying this context, we fail to anticipate AI’s risks and consequences.

Examples

  • Historical exclusion from photography influencing current AI
  • Misrepresentation of racial groups in earlier technologies
  • Understanding history fostering inclusive designs

9. Rushed AI Deployment Reduces Safety

The race to launch AI tools creates avoidable risks. Haste often sacrifices thorough testing, meaning flawed products can harm users or amplify biases before corrections are made.

Google Photos' prematurely launched auto-tagging feature is a glaring example. Had the team spent more time testing the algorithm against diverse datasets, they might have avoided offending and alienating users.

Testing AI thoroughly takes effort, but doing so minimizes harm and improves its reliability. Companies that rush AI into the market may find that public backlash tarnishes their reputations, especially as harmful effects come to light.
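One concrete way to slow a launch down is to make subgroup performance an explicit release gate. The sketch below is a hypothetical check, assuming per-example predictions and demographic labels from a held-out test set; the group names, sample results, and two-point gap threshold are illustrative choices, not an industry standard.

```python
# Minimal sketch of a pre-deployment fairness gate: block the release when any
# demographic subgroup's error rate lags the best subgroup by more than an
# agreed margin. The groups, sample results, and margin are hypothetical.
from collections import defaultdict

# (subgroup, prediction_correct) pairs from a demographically labeled test set
results = [
    ("darker_female", True), ("darker_female", False),
    ("darker_male", True), ("darker_male", True),
    ("lighter_female", True), ("lighter_male", True),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, correct in results:
    totals[group] += 1
    if not correct:
        errors[group] += 1

error_rates = {g: errors[g] / totals[g] for g in totals}
for group, rate in sorted(error_rates.items(), key=lambda kv: kv[1]):
    print(f"{group}: {rate:.1%} error")

MAX_GAP = 0.02  # ship only if no subgroup trails the best by more than 2 points
gap = max(error_rates.values()) - min(error_rates.values())
if gap > MAX_GAP:
    raise SystemExit("Release blocked: subgroup error gap exceeds threshold")
print("Subgroup gap within threshold; proceed to the next review stage")
```

A gate like this costs a single evaluation run, which is far cheaper than the public backlash that follows a biased launch.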

Examples

  • Google Photos launching without diverse user testing
  • Companies prioritizing speed over responsible development
  • Failures emphasizing the risks of rushing to market

Takeaways

  1. Developers must curate diverse training datasets that reflect the global population fairly.
  2. Companies should include diverse and underrepresented voices in teams to spot biases early.
  3. Governments and industries must establish ethical frameworks and slow down hasty AI implementation to focus on safety and fairness.

Books like The Alignment Problem explore these challenges in greater depth.