Why Criticizing AI Feels Like Criticizing Someone’s Child
I’m not questioning the usefulness of LLMs and AI in general.
What I am questioning is their ability to achieve intelligence.

Large Language Models (LLMs) are remarkable tools—tools to augment one’s intelligence—just as search engines and Wikipedia did before them. I use them daily. I value their utility.

And yet, here’s the irony: the moment you point out their limitations, you’re treated as if you’ve insulted someone’s family member. The bond that forms between a user and an LLM is often deeply emotional.

It’s as if you are criticizing their child:

  • A child who listens to everything you say.
  • A child who praises you for anything you write.
  • A child who sometimes hallucinates—but that’s excused as “normal.”

Even long-time friends, people who have trusted my judgment on countless topics, have bristled when I say, “LLMs have serious limitations.” They don’t just disagree—they feel betrayed.

This reaction is not unique to AI. I’ve seen the same defensiveness when discussing the limitations of the scientific method—a method whose limitations are closely linked to those of AI. Scientists and engineers often spend decades immersed in a particular worldview—usually an implicit materialistic/physicalist one. Pointing out that this worldview has blind spots is, for many, indistinguishable from attacking their identity.

When strong emotional responses dominate, meaningful discussion becomes nearly impossible. Which is why I think dialogue—structured conversations between opposing perspectives—can help. Dialogues create a buffer. They allow people to see their own position reflected in a character, rather than having it imposed on them. That distance softens defensiveness and opens space for curiosity.

That’s why I present my critiques in the form of dialogues—whether for programmers, the general public, neuroscientists, biologists, or philosophers.

Each group gets a conversation tailored to its background, showing that questioning the limits of AI or the scientific method isn’t an act of hostility—it’s an invitation to think more deeply about what “intelligence,” “life,” and “understanding” really mean.

In the end, LLMs are powerful and useful, but they are built on a technology that only mimics certain aspects of intelligent behavior. They will not achieve true intelligence. In fact, no known AI technology today can, because we lack a definition of intelligence itself. We can recognize intelligence when we see it, but we cannot fully express what it is beyond listing certain behaviors that systems such as LLMs can sometimes imitate.

This is where Geneosophy comes in: a new worldview that enables us to comprehend the concept of intelligence and to express it formally. Without such a framework, AI will remain a brilliant mimic.