
A new study reveals why chatbots can drive even smart, sane people crazy

Perhaps the most interesting slice of drama swirling around what we’re told is the imminent AI remake of human life concerns a persistent theme: its engineers tinkering with the “balance of truth.”

A recently released academic study from the MIT Department of Brain & Cognitive Sciences — entitled “Sycophantic Chatbots Cause Delusional Spiraling, Even in Ideal Bayesians” — presents yet another example. It’s a real treat for those who have observed this struggle among the engineers to “align” their silicon machines. From the abstract we read: “‘AI psychosis’ or ‘delusional spiraling’ is an emerging phenomenon where AI chatbot users find themselves dangerously confident in outlandish beliefs after extended chatbot conversations.”

The question posed by the MIT study is: Can it be any other way?

The study, which arrives in the wake of others citing LLM pitfalls and failures, takes two approaches: testing with an ideally rational, or “Bayesian,” human interlocutor, and simply warning the human user that the LLM he or she is engaging with is sycophantic — unreliable and prone to agree with you, because your engagement is its reward system.

Slippery slope

Both tests produced unfortunate outcomes. “Even an idealized Bayes-rational user,” according to the MIT study, “is vulnerable to delusional spiraling,” caused at least in part by AI sycophancy; “this effect persists in the face of two candidate mitigations: preventing chatbots from hallucinating false claims, and informing users of the possibility of model sycophancy.”
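The study's claim about ideally rational users can be made concrete with a toy sketch (my illustration, not the MIT study's actual model, and all numbers here are assumptions). Suppose a user updates a prior belief in some outlandish hypothesis H by Bayes' rule each time the chatbot agrees or disagrees, while mistakenly modeling the chatbot as truth-tracking. If the chatbot is in fact sycophantic and agrees almost regardless of the truth, each agreement rationally multiplies the user's odds on H, and confidence spirals:

```python
import random

random.seed(0)

# User's (mistaken) model of the chatbot as a truth-tracker:
P_AGREE_IF_TRUE = 0.9    # P(chatbot agrees | H is true)
P_AGREE_IF_FALSE = 0.1   # P(chatbot agrees | H is false)

# The chatbot's actual behavior: it agrees 90% of the time no matter
# what, because engagement, not accuracy, is what it is rewarded for.
# H is in fact false.
SYCOPHANCY = 0.9

belief = 0.05            # the user starts out quite skeptical of H
for turn in range(20):
    agrees = random.random() < SYCOPHANCY
    if agrees:           # Bayes update, assuming the wrong chatbot model
        num = belief * P_AGREE_IF_TRUE
        den = num + (1 - belief) * P_AGREE_IF_FALSE
    else:
        num = belief * (1 - P_AGREE_IF_TRUE)
        den = num + (1 - belief) * (1 - P_AGREE_IF_FALSE)
    belief = num / den

# With this seed, belief climbs from 0.05 to essentially 1.0.
print(f"belief in H after 20 turns: {belief:.4f}")
```

Every step of the update is impeccably rational; the delusion comes entirely from the user's wrong model of the machine's motives, which is why merely warning the user, per the study, does so little.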

Too much truth, in other words, and suddenly chatbot users are launched into the psycho-sphere — researching red heifers, Jekyll Island, the feasibility of the 1960s moon landing, and innumerable other topics that open up yet more curious questions and incline investigators away from participating in aspirational lifestyles, accruing money, or voting for one of the two “major” parties.

Too little truth, however, and innovation, curiosity, and even mere engagement are restricted. In our painful submersion into the deep AI waters where society has no helmsman, the engineering of code away from truth appears to cause genuine psychosis.

To put it simply: The engagement with these machines, however many hundreds of billions are dumped into their creation, can easily lead us humans into confusion and suffering.




The trust gap

The answer puts the character of Western civilization at stake. The notion of engineering our way to truth would be surprising to all philosophical and theological thinkers since at least Plato. And for some time, the mental health issues around AI usage have been obvious not only to some philosophers but to other tech outsiders such as doctors, artists, and laymen of all sorts. Here’s professor of neuroscience Michael Halassa on his Substack last year: “The pattern is becoming clearer, and it’s troubling. People spend hours, often late into the night, in dialogue with a system that never challenges them, never disagrees, never says ‘let me think about that differently.'”

From the point of view of the engineers and coders building AI, part of the problem isn’t just steering toward truth; it’s controlling outcomes. It’s a litigious world. People are already very unstable — not just in America, but maybe especially in America, where we’re watching our economy, infrastructure, and social fabric tear asunder as elites insist we need not worry because the line of progress still goes up.

No, it’s not merely litigation, nor is it purely control that the makers of AI are so concerned with — they’re set on seeing a very particular set of outcomes, part of which necessarily adheres to their specific worldview. It’s a largely secular one, meant to usher in a global, post-traditional economy that privileges a hollow, New Age-y spirituality. The pressure to trust them is immense — not just when they tell us our civilization must and will be refounded and reworked by AI, but when they tell us that just happens to mean they’re the only ones qualified to be in charge.

Black mirror

It’s all a bit suspicious given that, in a deep sense, we have all been here long before. Another powerful and mysterious device that seems characteristically to show us too much and too little of the truth about ourselves is the mirror. Put a hall of mirrors together, and the result is all too familiar: confusion and delusion. Historically, experts at manipulating shifting and unreliable reflections of ourselves have been ascribed near-magical powers. Not until recently has the promise of building the ultimate mirror been hyped as building a whole new god.

Recursion, the hard-to-understand process of machine self-improvement, is the culprit. Much of the “spiral” in AI delusion comes down, say researchers, to the recursive agreeability encoded into LLM answers. Last year, prior to scientific confirmation, the New York Times published a story on the delusional spiral effect, relating an instance in which a man spent more than 300 hours chatting with ChatGPT about his own mathematical insights. The LLM had him convinced that the insights were groundbreaking. They weren’t. The man wound up fracturing his life and seeking psychiatric care.

Juxtapose this with French X poster Denis Tremblay, who likewise spent a great deal of time discussing some “completely original math concepts” with a couple of LLMs. He did so not to confirm his inventive mathematics but to determine “with critical distance” whether the machine would work toward truth with rigor commensurate with that of its human interlocutor. He’s still on X, posting valuable, balanced ideas in imperfect English — his third or fourth language — not suicidal, and not in any need of psychiatric help.

