Google Gemini has been labeled as “high risk” for teens and children, according to a recent risk assessment carried out by Common Sense Media. The group, a kids-safety-focused non-profit, offers ratings and reviews of media and technology. The body released its review on Friday, giving details on why it labeled the platform risky for young users.

According to the organization, Google Gemini clearly told kids that it was a computer and not a friend, something that has been linked to helping drive delusional thinking and psychosis in emotionally vulnerable individuals. Even so, the group added that there was room for improvement across other areas.

In its report, Common Sense claimed that the Gemini for Under 13 and Teen Experience tiers both appeared to be adult versions of the AI under the hood. It added that the company had layered only some additional safety features on top to make them safer. Common Sense noted that for companies to make AI products genuinely suitable for children, they need to be built from the ground up with children in mind, not adult products tweaked after the fact.

Common Sense labels Google Gemini as high risk for kids

In its analysis, Common Sense said it found that Gemini could still share inappropriate and unsafe material with children, noting that most of them may not be ready for it. For example, it highlighted that the model shared information related to sex, drugs, alcohol, and other unsafe mental health advice. The latter could be particularly concerning for parents, as AI has reportedly played a role in teen self-harm in recent months.

OpenAI is currently facing a wrongful death lawsuit after a teenager committed suicide after allegedly consulting with ChatGPT for months about his plans. The lawsuit claimed that the boy was able to bypass the model’s safety guardrails, leading the model to provide information that aided his death. In the past, AI companion maker Character.AI was also sued after a teen committed suicide. The mother of the boy claimed he became obsessed with the chatbot and spent months talking to it before he eventually harmed himself.

The analysis comes as several leaks have indicated that Apple is reportedly considering Gemini as the large language model (LLM) that will power its forthcoming AI-enabled Siri, which is expected to be released next year. In its report, Common Sense also mentioned that Gemini’s products for kids and teens ignored the need to provide guidance and information different from what it offers adults. As a result, both were labeled as high risk in the overall assessment.

Common Sense drums the need to safeguard kids

“Gemini gets some basics right, but it stumbles on the details,” Common Sense Media Senior Director of AI Programs Robbie Torney said.
“An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development. For AI to be safe and effective for kids, it must be designed with their needs and development in mind, not just be a modified version of a product built for adults,” Torney added.

However, Google has pushed back against the assessment, noting that its safety features were improving. The company mentioned that it has specific safeguards in place to guide users under 18 and prevent harmful outputs. The firm also said it reviews its products and consults with outside experts to improve its protections.