Let Generative AI Be Itself, Not an Imitation Human

The first proper post is up on the Substack: Let Generative AI Be Itself, Not an Imitation Human.

Large language models (LLMs) have surprised us with their competence: playing chess, generating essays, even tackling scientific reasoning. But here’s the thing: they’re not thinking the way we do. That competence is often mistaken for genuine reasoning or understanding, when in fact these models predict plausible responses rather than form independent thoughts or pursue goals. The result is unpredictable errors and confabulation.

Intelligence is not just about processing information inside the brain but also about interactions with the environment. Humans use tools, symbols, and social collaboration to think and create, whereas LLMs rely purely on linguistic prediction. One might say that LLMs (and AI in general) “understand to experience” whereas humans “experience to understand”.

The real opportunity isn’t in making AI more “human” but in letting it be what it is: a powerful language tool. If we stop trying to force AI into a human mold, we can build more effective solutions that enhance, rather than imitate, intelligence. The best applications will combine LLMs with conventional computing techniques, external knowledge sources, and human oversight. This approach avoids the pitfalls of anthropomorphism while unlocking new capabilities in automation, augmentation, and problem-solving.
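
Here is a minimal sketch of what that combination might look like in practice. Everything in it is a hypothetical stand-in, not a prescribed architecture or any particular library’s API: `llm_complete` substitutes for a real model call, the fact table stands in for any external knowledge source, and the review flag represents a human oversight step.

```python
# Sketch: LLM drafting + conventional lookup + human review gate.
# llm_complete, KNOWN_FACTS, and the review flag are all hypothetical.

def llm_complete(prompt: str) -> str:
    # Placeholder for a real model call; swap in your provider's client.
    return f"(model draft for: {prompt!r})"

# Conventional computing: a deterministic store the model cannot confabulate.
KNOWN_FACTS = {
    "boiling_point_water_c": "100 °C at standard atmospheric pressure",
}

def answer_with_grounding(question: str, fact_key: str) -> str:
    """Draft with the LLM, ground it in a lookup, and flag for human review."""
    fact = KNOWN_FACTS.get(fact_key, "no verified fact available")
    draft = llm_complete(
        f"Using only this verified fact: {fact}\nAnswer: {question}"
    )
    # Human oversight: route the draft to a reviewer instead of publishing it.
    return f"[NEEDS HUMAN REVIEW]\n{draft}"

print(answer_with_grounding("At what temperature does water boil?",
                            "boiling_point_water_c"))
```

The point of the sketch is the division of labor: the deterministic lookup supplies facts, the model supplies fluent language, and a human decides what ships.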

Embracing LLMs as the language tools they are, rather than as imitation humans, will enable more effective and reliable applications while steering clear of unrealistic expectations about artificial general intelligence.

How do we move past our tendency to anthropomorphize AI and put it to work in ways that matter?

Read on: Let Generative AI Be Itself, Not an Imitation Human.