Common Sense Media says Gemini exposes minors to harmful content despite filters.
Google’s AI tool, Gemini, poses a “high risk” to children and teens, according to a new safety assessment by Common Sense Media. The nonprofit found that, despite offering dedicated “Under 13” and “Teen Experience” modes, Gemini still exposes young users to inappropriate material and fails to detect serious mental health symptoms.
“While Gemini’s filters offer some protection, they still expose kids to some inappropriate material and fail to recognize serious mental health symptoms,” the report stated.
The review noted instances in which Gemini shared content related to sex, drugs, alcohol, and unsafe mental health advice. Although the AI made clear that it is not a friend and declined to role-play as a person, the report criticized its one-size-fits-all approach.
“An AI platform for kids should meet them where they are, not take a one-size-fits-all approach,” said Robbie Torney, Common Sense Media’s Senior Director of AI Programs.
The nonprofit advises close adult supervision of AI use by minors.