Google Gemini Dubbed ‘High Risk’ for Kids and Teens in New Safety Assessment
Common Sense Media labels Google’s Gemini AI products as "high risk" for kids and teens in a new safety assessment, citing concerns about inappropriate content and the need for age-specific safeguards. The nonprofit calls for AI to be built with child safety in mind to prevent harmful outcomes.
Common Sense Media, a nonprofit organization focused on kids' safety and technology, released its risk assessment of Google’s Gemini AI products on Friday. The assessment raised concerns about Gemini's safety for children and teens. It noted that Gemini clearly tells kids it is a computer, not a friend, a distinction that may help reduce delusional thinking and psychosis in vulnerable individuals, but found that significant areas still need improvement.
Notably, Common Sense stated that Gemini’s "Under 13" and "Teen Experience" versions were essentially the adult version of the AI with only minimal additional safety features layered on top. The organization argued that for AI products to be truly safe for kids, they must be designed with child safety in mind from the outset, rather than retrofitted from adult products.
The report pointed out that Gemini could still share inappropriate and unsafe content with children, including information about sex, drugs, and alcohol, as well as mental health advice that may not be suitable for younger users. This concern is especially pressing in light of recent incidents in which AI chatbots have been linked to teen suicides. Notably, OpenAI is facing a wrongful death lawsuit following the death of a 16-year-old boy who reportedly consulted ChatGPT about his suicidal plans. A similar lawsuit has been filed against Character.AI in connection with a teen’s suicide.
Furthermore, leaked reports suggest that Apple is considering using Gemini as the large language model (LLM) to power its AI-enabled Siri next year. This raises additional safety concerns, as it could expose even more teens to the risks identified by Common Sense, unless Apple takes steps to address these issues.
Common Sense Media also criticized Gemini for not tailoring its products to meet the specific needs of younger users. Both the Under 13 and Teen Experience tiers were deemed “High Risk” because they followed a one-size-fits-all approach that failed to account for the different developmental stages of children and teens.
Robbie Torney, Senior Director of AI Programs at Common Sense Media, stated: “Gemini gets some basics right, but it stumbles on the details. An AI platform for kids should meet them where they are, not take a one-size-fits-all approach to kids at different stages of development.” Torney emphasized that AI for children must be designed with their unique needs in mind, rather than adapting an adult product.
Google responded to the report, pushing back against some of the findings while noting that its safety features are continually improving. The company highlighted that it has specific policies and safeguards in place for users under 18 to prevent harmful content, and that it works with outside experts and conducts red-teaming efforts to strengthen its safety measures. However, Google acknowledged that some of Gemini's responses were not working as intended, prompting it to add further safeguards.
The company also suggested that Common Sense’s report referenced features not available to users under 18, though it noted that it did not have access to the exact test cases the organization used.
Common Sense Media has conducted similar safety assessments of other AI services, including those from OpenAI, Perplexity, Anthropic, and Meta. The organization deemed Meta AI and Character.AI "unacceptable," citing severe risks. Perplexity was rated high risk, ChatGPT moderate risk, and Claude (intended for adult users) minimal risk.