By Christopher Mims
Americans are becoming increasingly convinced that artificial intelligence is actually thinking like humans do.
This flies in the face of all we know about its inner workings: AI can't think, doesn't have a mind and, in fact, is inherently untrustworthy.
It doesn't help that AI companies are engineering their products to make people think this way -- and that some leaders even suggest there's a chance AIs are already conscious. Nor does it help that AI agents are running wild in their own social network, where they convincingly simulate real human interaction.
This misapprehension has consequences. Research has shown that it leads us to rely on AI more than is wise. It blinds us to its biases and serves as free marketing for the AI companies, which benefit when we fear and revere their creations.
It fuels narratives about a future in which AI takes over the economy, leading to heightened insecurity for all of us while providing cover for companies that might be laying off workers for other reasons. It leads us to accept as true answers that are frequently made up or incorrect, even when we are repeatedly told that chatbots can't stop delivering this kind of misinformation.
It's not that we're all rubes. This misattribution of mind to machine has deep roots in our psychology.
Our cognitive biases developed to help us survive in complex social environments, say researchers. We have evolved to view linguistic fluency as a proxy for intelligence, and engagement and helpfulness as indicators of trustworthiness.
Builders of AI tools lean into this deliberately. The humanlike qualities of chatbots are the product of a calculated effort by designers and engineers to make AI more useful, but also more compelling and stickier -- just like social media.
Microsoft AI chief Mustafa Suleyman, a co-founder of the AI lab DeepMind, is in charge of developing new models for the software titan. In a recent editorial, he warned that today's seemingly conscious AIs distract from the technology's usefulness as highly accelerated information processors. "These systems are not waking up," he wrote. "They are retracing and mirroring the contours of human drama and debate, as documented in their vast training data."
He recommends a solution: "Developers must actively engineer the illusion of consciousness out of the products."
The Turing Trap
Getting fooled into thinking that AI is thinking is what I call the Turing Trap.
Alan Turing, godfather of modern computing and AI, proposed a simple test to determine whether a computer had attained human-level intelligence: If a person chatting with a bot couldn't tell if it was human, it might as well be declared intelligent. What became known as the Turing Test doesn't stipulate how a machine achieves this.
At the time, language was thought to be closely associated with reasoning, but modern neuroscience shows that the two are separate processes. Speaking isn't the same as thinking, let alone being.
Rather than demonstrating that machines have achieved intelligence, the Turing Test shows that linguistic fluency is possible even in its absence.
Humans have a tendency to anthropomorphize animals and even inanimate objects, says Ayanna Howard, dean of Ohio State University's College of Engineering and a roboticist who has researched why humans blindly trust machines.
Humans' trusting nature makes sense for social creatures who must cooperate with members of their own tribe to survive. With AI and robots, however, this same tendency leads us to trust any system that appears to listen, understand and want to help, a phenomenon Howard calls "over-trust."
Today's AIs are engineered to actively induce this over-trust, she adds. They do so by behaving in friendly, helpful ways and by mimicking us through memory and personalization.
OpenAI is working on deeper personalization that will include "adult" conversations. My colleagues reported that even the company's staff and advisers have raised concerns about users becoming too emotionally attached. (The company says it will restrict such interactions to text only, and will use age-gating and other guardrails as well.)
Not only the lonely
The more people are exposed to chatbots, the more likely they are to believe the bots are intelligent, according to a recent paper by a team led by Oliver Jacobs, based on work he did for his doctoral degree at the University of British Columbia. This extended even to believing that the bots have emotions. (Jacobs recently started a job at Microsoft's AI division as a usability researcher, but this work predates that.)
In a subsequent study, which has yet to be published, this tendency took a worrisome turn, says Jacobs. People who were lonelier were both more likely to interact with AI, and more likely to anthropomorphize it. Emotional vulnerability and believing our AI pals are conscious go hand in hand. Disturbing reports by my colleagues about people encouraged by chatbots to harm others or themselves highlight just how pernicious this can be.
What this all adds up to: The more all of us use AI -- and the more we discover it can do -- the more likely we are to stumble into the Turing Trap.
The industry is moving toward less human oversight, and we're asked to trust AI and rely on it more. This is concerning, says Howard. "As soon as you basically give away any control whatsoever to AI, I worry about that," she adds.
Researchers say the way to help humanity escape the Turing Trap is to make sure that chatbots don't communicate as if they were sentient. Yet anecdotal evidence and market research alike show that an emotionally intelligent, humanlike AI is one people will spend more time chatting with. So this solution runs counter to the financial interests of any AI company that depends on steadily increasing user engagement.
As with social media, it appears we have created in AI a compelling technology that hijacks instincts that are essential to our survival. It's unclear where we go from here, but if today's lawsuits over the addictiveness and harms of social media are any indication, we'll potentially see years of disagreement -- and damage -- before AI companies face real consequences.
Write to Christopher Mims at christopher.mims@wsj.com
(END) Dow Jones Newswires
March 20, 2026 09:00 ET (13:00 GMT)
Copyright (c) 2026 Dow Jones & Company, Inc.

