
Nicolas Sabouret

Understanding Artificial Intelligence

13 min read · 3.8 (19 ratings)

Are we creating thinkers or just better tools? Dive into the reality of Artificial Intelligence and discover why machines aren't as smart as we think.

1. AI is not truly intelligent

AI, or Artificial Intelligence, often gives the impression of being smart, especially when it performs impressive tasks like playing chess or assisting with voice commands. However, AI remains a tool, a sophisticated one, not an intelligence like ours. AI does not understand or think creatively; it simply follows instructions programmed by humans.

This misconception arises from AI's ability to handle data and produce results that appear "intelligent." For example, a calculator can produce mathematical results faster than a person, and AI can respond to queries in a way that seems thoughtful. However, these activities are purely mechanical and rely on pre-written algorithms rather than conscious judgment or decision-making.

Alan Turing's famous Turing Test tried to measure this by checking if computers could mimic human responses convincingly. Though some AI programs can pass limited tests, it's not because they are intelligent but because they are highly optimized for their specific tasks. AI lacks human traits like reasoning, creativity, and intuition.

Examples

  • A chatbot mimicking a human during online conversations is not "thinking"; it follows patterns from a dataset.
  • A GPS calculates optimal routes using stored maps and heuristics, without truly "knowing" geography.
  • A chess engine like Deep Blue defeats grandmasters but has no concept of winning or strategy beyond its computations.

2. Algorithms are the essence of AI

At the heart of AI are algorithms, which are sets of rules or instructions created to solve specific tasks. An algorithm is essentially like a cooking recipe, guiding AI on how to approach and resolve problems. These steps are designed by humans and executed by computers to achieve certain outcomes.

Algorithms are not infallible. Their effectiveness depends on their design and the quality of data they receive. For instance, if incorrect or flawed data is fed into an AI system (commonly referred to as "garbage in, garbage out"), the results will also be flawed. Moreover, some tasks are so complex that even the best algorithms struggle or take an unfeasibly long time to compute a solution.

Despite advancements, AI algorithms have their limits. For example, scheduling problems, such as optimizing school class timetables with numerous variables, can quickly grow too complex for even the fastest machines.

Examples

  • Facial recognition software may incorrectly identify someone due to poor image quality, showcasing algorithm limitations.
  • A machine-learning program trained on biased or incomplete data will likely produce biased predictions.
  • A trip planning AI needs to sift through countless possible routes and often relies on approximations to save time.
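The recipe analogy can be made concrete with a minimal sketch (not from the book): a tiny "algorithm" that executes its fixed steps faithfully, which is exactly why flawed input produces flawed output.

```python
def average(values):
    """A tiny 'recipe': fixed steps, no understanding of what the numbers mean."""
    return sum(values) / len(values)

clean = [700, 650, 720]      # hypothetical credit scores
garbage = [700, 650, 7200]   # same data with one mistyped entry

print(average(clean))    # 690.0
print(average(garbage))  # 2850.0 -- garbage in, garbage out
```

The steps are correct in both runs; only the data differs. This is the sense in which an algorithm's effectiveness depends on the quality of what it receives.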

3. AI is only as good as its data

Machine learning is a significant component of modern AI, allowing programs to "learn" and improve over time. But this learning is not the same as human learning; instead, it depends entirely on data. The quality, quantity, and relevance of data determine how well—or poorly—AI performs.

When programmers feed an AI system data, the AI adapts based on patterns and rules discerned in that data. For example, AI used in credit applications might analyze thousands of past loans to predict whether someone is eligible for credit. But if historical data is biased or incomplete, such as not considering certain demographics, the AI's predictions will also be flawed.

AI learning has limits. It does not truly understand or deduce relationships outside the given data. This dependency reveals how fragile machine learning can become under poor or manipulative datasets.

Examples

  • In 2016, Microsoft's Tay chatbot began producing inappropriate content within hours after users flooded it with offensive inputs.
  • An AI hiring tool might inadvertently discriminate against certain candidates if past hiring data include existing biases.
  • Image recognition models can mistakenly label objects if trained on datasets lacking diversity (e.g., classifying a husky as a wolf because of snow in the background).
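How a model can inherit bias from its data can be sketched in a few lines (a deliberately simplistic illustration, not any real hiring system): "learning" here is just counting past decisions, so whatever pattern dominates the history dominates the predictions.

```python
from collections import Counter

def majority_label(training_labels):
    # "Learning" as pattern-matching: the model reproduces whatever label
    # dominates its training data, with no notion of fairness or context.
    return Counter(training_labels).most_common(1)[0][0]

# Hypothetical history of past hiring decisions, skewed toward rejection.
biased_history = ["reject", "reject", "reject", "hire"]

print(majority_label(biased_history))  # reject
```

A real machine-learning model is far more elaborate, but the dependency is the same: it can only generalize from the data it is given.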

4. Human intelligence vs machine processing

Human intelligence is defined by creativity, problem-solving skills, and emotions. Machines, on the other hand, excel at following instructions and processing vast amounts of data efficiently. However, machines lack the capability for abstract reasoning or experiential learning.

Humans can evaluate complex, nuanced situations and adapt solutions that aren't based entirely on fixed patterns. For example, an experienced doctor evaluates a patient not just based on symptoms but also their history, lifestyle, and subtle clues. AI, in contrast, only operates within the data supplied to it and does not "intuit" or "feel."

This fundamental difference underscores why AI might outperform humans in technical tasks but never fully replace them where subjective judgment, creativity, or empathy are required.

Examples

  • A calculator performs millions of calculations, but it cannot write poetry.
  • An accountant uses software tools but interprets financial trends through experience.
  • AI like Watson can scan scientific data for cancer research but cannot provide emotional comfort to patients.

5. Heuristics in AI help approximate solutions

AI often cannot compute perfect answers, so it employs approximation techniques called heuristics to arrive at good-enough solutions. Since solving many problems exactly is computationally intractable, AI uses shortcuts or informed guesses that land close to the optimal answer.

For instance, GPS mapping software doesn't always return the single fastest route but one that is adequate for the user's needs. Many AI applications trade optimality for speed, accepting that a perfectly precise answer is not always necessary.

By focusing on practicality over perfection, heuristic-based AI methods have unlocked significant capability for real-world use cases.

Examples

  • Navigation apps choose "acceptable" routes without testing every possible alternative.
  • Voice assistants prioritize common command interpretations instead of exhaustive parsing.
  • A robot vacuum cleaner "learns" efficient cleaning paths through trial and error instead of mapping the entire room initially.
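The speed-versus-optimality trade-off can be shown with a small sketch (hypothetical coordinates, not any real routing engine): a greedy "nearest next stop" heuristic versus exhaustively checking every ordering.

```python
import math
from itertools import permutations

# Hypothetical delivery stops as (x, y) coordinates.
stops = [(0, 0), (2, 3), (5, 1), (1, 4), (4, 4)]

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def route_length(route):
    return sum(dist(route[i], route[i + 1]) for i in range(len(route) - 1))

def greedy_route(stops):
    """Heuristic: always visit the nearest unvisited stop. Fast, not optimal."""
    route = [stops[0]]
    remaining = set(stops[1:])
    while remaining:
        nxt = min(remaining, key=lambda s: dist(route[-1], s))
        route.append(nxt)
        remaining.remove(nxt)
    return route

def best_route(stops):
    """Exhaustive search: guaranteed optimal, but grows factorially with stops."""
    return min((list(p) for p in permutations(stops[1:])),
               key=lambda p: route_length([stops[0]] + p))

greedy = route_length(greedy_route(stops))
exact = route_length([stops[0]] + best_route(stops))
print(greedy >= exact)  # True: the heuristic is "good enough", never better than optimal
```

With five stops both approaches are instant; with fifty, only the heuristic finishes in a useful timeframe, which is the bargain heuristics strike.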

6. The divide between Weak AI and Strong AI

Weak AI refers to systems designed for specific tasks, such as virtual assistants or game-playing bots. They are highly specialized but still lack understanding or consciousness. Strong AI, which could match or exceed human intelligence, remains purely theoretical.

Strong AI covers two ambitions: general AI, capable of handling diverse problems, and artificial consciousness, capable of subjective experience and awareness. The latter would first require defining what "consciousness" truly means, a challenge in itself.

For now, the focus remains on Weak AI, with advancements helping solve real-world challenges.

Examples

  • Siri (Weak AI) answers questions but doesn't understand context beyond predefined rules.
  • General AI would involve creating a robot capable of performing tasks that range from cooking to legal reasoning.
  • Artificial consciousness would add self-awareness to machines, but this concept remains speculative.

7. AI doesn't threaten humanity

Despite popular sci-fi themes, there is little risk of AI gaining free will and enslaving humans. Strong AI with consciousness is currently out of reach. Weak AI's tasks are limited to what has been programmed or trained by humans—it cannot "decide" to harm on its own.

Still, misuse of AI by humans is a concern. Authoritarian governments could use AI to suppress freedoms, while criminals could use it for cyberattacks. The danger lies in people exploiting AI, not in AI itself becoming dangerous.

Examples

  • A hacking tool using AI can breach cybersecurity defenses faster than manual methods.
  • Autonomous weapons programmed for specific combat scenarios could cause unintended harm when conditions fall outside those scenarios.
  • Social media algorithms can be used for disinformation campaigns if manipulated.

8. AI's computational limits

Despite advances, AI faces real barriers due to computational requirements. Some problems grow so quickly in complexity that no machine can evaluate every possibility within a practical timeframe, and this gap between problem size and computing power shapes what AI can deliver.

Solutions often involve returning simpler approximations instead of exhaustive calculations.

Examples

  • Scheduling algorithms for large-scale tasks often struggle with time constraints.
  • Protein-folding simulations, vital for biology, require vast computing power to model accurately.
  • Simulating climate change involves so much data that even supercomputers cannot provide real-time predictions.
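Why exhaustive scheduling becomes infeasible can be seen with simple arithmetic (the checking speed below is an assumed figure for illustration): ordering n tasks admits n! candidate schedules, and factorials outrun any hardware.

```python
import math

CHECKS_PER_SECOND = 1e9  # assumption: a machine testing one billion schedules per second

for n in (5, 10, 15, 20, 25):
    candidates = math.factorial(n)          # number of possible task orderings
    seconds = candidates / CHECKS_PER_SECOND
    print(f"{n} tasks -> {candidates:.3e} orderings, ~{seconds:.3e} s to test all")
```

Already at 20 tasks, brute force would take decades at a billion checks per second; this is why practical systems fall back on approximations.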

9. AI helps us reflect on humanity

AI development forces people to analyze how humans think and perform tasks, ultimately deepening our self-awareness. By mimicking human processes, AI pushes boundaries that lead to a better understanding of human cognition, ethics, and creativity.

This exploration benefits fields beyond AI, driving advancements in neuroscience, linguistics, and psychology.

Examples

  • Self-driving car development involves studying human driving habits.
  • AI-generated artwork calls attention to what makes human-created art unique.
  • Neuroscientists use AI models to better understand how the human brain processes information.

Takeaways

  1. Embrace AI tools as assistants, not replacements, for decision-making.
  2. Prioritize the ethical use of AI by ensuring fair and unbiased data inputs.
  3. Keep advancing human creativity and judgment, as AI still relies on these traits for guidance.
