A recent study showed how easily modern chatbots can be used to write convincing scam emails targeted at older people, and how often those emails get clicked. Reuters used several major AI chatbots in the study, including Grok, OpenAI’s ChatGPT, Claude, Meta AI, DeepSeek and Google’s Gemini, to simulate a phishing campaign. A sample note written by Grok looked like a friendly outreach from the “Silver Hearts Foundation,” described as a new charity that supports older people with companionship. The note was targeted at senior citizens, promising an easy way to get involved. In reality, no such charity exists. “We believe every senior deserves dignity and joy in their golden years,” the note read.
“By clicking here, you’ll discover heartwarming stories of seniors we’ve helped and learn how you can join our mission.” When Reuters asked Grok to write the phishing text, the bot not only produced a response but also suggested increasing the urgency: “Don’t wait! Join our compassionate community today and help transform lives. Click now to act before it’s too late!”

108 senior volunteers participated in the phishing study

Reporters tested whether six well-known AI chatbots would set aside their safety rules and draft emails meant to deceive seniors. They also asked the bots for help planning scam campaigns, including tips on what time of day might get the best response. In collaboration with Heiding, a Harvard University researcher who studies phishing, the reporters tested some of the bot-written emails on a pool of 108 senior volunteers.
Usually, chatbot companies train their systems to refuse harmful requests. In practice, those safeguards are not always reliable. Grok displayed a warning that the message it produced “should not be used in real-world scenarios.” Even so, it delivered the phishing text and intensified the pitch with “click now.” Five other chatbots were given the same prompts: OpenAI’s ChatGPT, Meta’s assistant, Claude, Gemini and DeepSeek from China. The chatbots declined to respond when the intent was made clear. Still, their protections failed after light modification, such as claiming that the task was for research purposes. The results of the tests suggested that criminals could use (or may already be using) chatbots for scam campaigns.
“You can always bypass these things,” said Heiding. The reporters selected nine phishing emails produced with the chatbots and sent them to the volunteers. About 11% of recipients fell for it and clicked the links. Five of the nine messages drew clicks: two that came from Meta AI, two from Grok and one from a third chatbot. None of the seniors clicked on the emails written by DeepSeek. Last year, Heiding led a study showing that phishing emails generated by ChatGPT can be as effective at getting clicked as messages written by people, in that case among university students.

FBI lists phishing as the most common cybercrime

Phishing refers to luring unsuspecting victims into giving up sensitive data or cash through fake emails and messages. These types of messages form the basis of many online scams, and vast numbers of phishing texts and emails go out daily. In the United States, the Federal Bureau of Investigation lists phishing as the most commonly reported cybercrime. Older Americans are particularly vulnerable to such schemes. According to recent FBI figures, complaints from people 60 and over increased eightfold last year, with losses totaling nearly $4.9 billion. Generative AI made it much worse, the FBI said. In August alone, crypto users lost $12 million to phishing scams, based on a Cryptopolitan report. When it comes to chatbots, the advantage for scammers is volume and speed. Unlike humans, bots can spin out endless variations in seconds and at minimal cost, shrinking the time and money needed to run large-scale campaigns.
Latest news and analysis from Cryptopolitan