The biggest myth of so-called “artificial intelligence”

Apple just exposed the biggest myth of so-called “artificial intelligence”, or “advanced statistics” as it should be called!

Published in June 2025 by Apple, the paper titled “The Illusion of Thinking: Understanding the Strengths and Limitations of Reasoning Models via the Lens of Problem Complexity” shows that the most advanced “AI” models, such as Claude, DeepSeek and GPT, do not, in fact, think!

They only follow patterns they’ve seen before. When given a new and more difficult problem, they basically “freeze”, even if you give them the step-by-step solution.

Apple created a set of brand-new logic puzzle tests (like Tower of Hanoi and river crossing challenges), avoiding benchmarks that models had already been trained on.
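Why puzzles like Tower of Hanoi? Because their difficulty can be dialed up precisely. As a rough illustration (this is a minimal sketch of the classic puzzle, not Apple's actual test harness), here is how the optimal solution explodes in length as you add disks:

```python
def hanoi(n, source="A", target="C", spare="B"):
    """Return the full move list for moving n disks from source to target.

    This is the kind of "step-by-step solution" that, per the paper,
    did not help the models even when it was handed to them.
    """
    if n == 0:
        return []
    return (hanoi(n - 1, source, spare, target)   # clear n-1 disks out of the way
            + [(source, target)]                  # move the largest disk
            + hanoi(n - 1, spare, target, source))  # stack the n-1 disks back on top

# The optimal solution doubles with every extra disk (2^n - 1 moves),
# so one puzzle family spans easy, medium, and hard regimes.
for n in (3, 7, 10):
    print(n, len(hanoi(n)))
```

The exponential growth in required moves is what lets researchers separate pattern recall (short, familiar instances) from sustained reasoning (long, novel instances) within a single task.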

And what happened???

  • Easy problems: even simple models did well.
  • Medium problems: the models that “think” performed a bit better.
  • Hard problems: ALL failed.

Even giving the models more computing power didn’t help…

Apple’s conclusion is simple:

  • These models are not truly intelligent; they are just good at memorization.
  • When facing a new challenge, they can’t truly reason.
  • In other words, we are still far from AI that thinks like a human being…

But most importantly: What does this mean for the future of AI?

The industry is lost in hype and frenzy, betting that AI will replace humans: mass layoffs, billions of dollars wagered, and even environmental damage as nuclear plants return to meet the energy demand. Yet Apple calls what these models do “false reasoning”.

We’re stacking more data, more chips and more hype on systems that still don’t understand what they’re doing.

As Linus Torvalds, creator of Linux, said:

“There’s nothing intelligent about it.”

And this changes everything:

  • We’re not getting closer to AGI (artificial general intelligence).
  • We’re seeing the limits of memorization, not the birth of artificial consciousness.

This research doesn’t discredit AI. But it brings an essential warning: we need to stop confusing “advanced statistics” with “intelligence”.

Apple may be late in the AI race, but perhaps it’s right to question where this race is going. And perhaps they’re avoiding boarding the technological Titanic of the 21st century.

In one of my articles, published in early 2024 and inspired by the thinking of Noam Chomsky, I had already argued that LLMs (large language models) neither understand language nor reason; they simply operate on word statistics and sequence probabilities.

Chomsky has always criticized the idea that models based on big data represent a real form of cognition.

And Apple’s research now confirms this criticism with concrete data.

Ighor Toth
