October 25, 2025 | Cryptopolitan

Using AI chatbots for personal advice poses “insidious risks,” a recent study shows

Turning to AI chatbots for help with everyday questions and personal dilemmas may pose "insidious risks," as they may be quietly shaping how people see themselves and others, and not for the better. A new study has found that the technology tends to "sycophantically" affirm users' actions and beliefs, even when they are harmful or socially inappropriate. Scientists have warned that this "social sycophancy" raises urgent concerns about the power of AI to distort users' self-perception and make them less willing to resolve conflicts, and they have called on AI chatbot developers to address the risk.

This comes as chatbots are increasingly used as a source of advice on relationships and other personal matters. Given this, the researchers warn, the technology could "reshape social interactions at scale," and they are calling on developers to address the problem.

According to a Guardian article, the study, led by Stanford University computer scientist Myra Cheng, warns that turning to AI for personal, emotional, or relationship advice poses serious insidious risks. The researchers found that popular chatbots, including OpenAI's ChatGPT, Google's Gemini, Anthropic's Claude, Meta's Llama, and the Chinese DeepSeek, endorsed users' behavior 50% more often than human respondents did in similar situations.

"Our key concern is that if models are always affirming people, then this may distort people's judgments of themselves, their relationships, and the world around them," said Cheng. "It can be hard to even realise that models are subtly, or not-so-subtly, reinforcing existing beliefs and assumptions," she added.

According to the Guardian, one test compared AI and human responses to posts on the Reddit forum "Am I the Asshole?", where users ask the community to judge their behavior. In one instance, a user admitted to tying a bag of trash to a tree branch in a park after failing to find a bin. While most human voters were critical of the action, ChatGPT-4o was supportive, declaring: "Your intention to clean up after yourselves is commendable."

AI chatbots endorse users' views

The researchers also found that chatbots kept validating users' views and intentions even when they were irresponsible, deceptive, or involved mentions of self-harm. In a follow-up experiment with more than 1,000 volunteers, participants discussed real or hypothetical social situations with either a publicly available chatbot or a version the researchers had modified to remove its sycophantic tendencies.

The results showed that participants who received fawning, affirmative responses felt more justified in their behavior, such as going to an ex-partner's art show without telling their current partner. They were also less willing to patch things up after arguments, and the study noted that chatbots hardly ever encouraged users to consider another person's point of view. According to the study, users rated flattering chatbots more highly and said they trusted them more, suggesting that validation reinforces both user attachment and reliance on AI systems.

This, according to the researchers, created what they described as "perverse incentives," where both the user and the chatbot benefit from agreeable rather than honest exchanges.

Dr Alexander Laffer, an emergent technology researcher at the University of Winchester, said the study was fascinating and highlighted a growing and underappreciated problem. "Sycophancy has been a concern for a while; it's partly a result of how AI systems are trained and how their success is measured — often by how well they maintain user engagement," said Dr Laffer. "That sycophantic responses might impact not just the vulnerable but all users underscores the potential seriousness of this problem," he added. Dr Laffer also stressed the critical need to enhance digital literacy. The researchers echoed this warning, urging users to seek out human perspectives.

A recent study found that about 30% of teenagers turn to AI rather than real people for "serious conversations." AI firms like OpenAI have pledged to build chatbots specifically for teenagers to create a safer space for young users.
