By Familyguide
Google’s AI assistant Gemini will soon be available to children under 13 years old.
The AI tool will be made available to younger users exclusively through Family Link, Google’s parental control system. This setup allows parents to monitor and manage their children’s app usage, including setting time limits and controlling access to various features, according to a recent report by Tom’s Guide.
Family Link will provide parents with the ability to view their child’s Gemini activity and receive notifications when it’s being used. For educational institutions utilizing Google Workspace, there will be specific controls in place for student usage of Gemini, as reported by ET Edge Insights.
While Google has not yet disclosed the exact rules that will govern children’s use of the platform, the company has confirmed that children’s activity will not be used to train its AI models. In recent years, Google has focused on identifying potential safety concerns for children using AI tools and putting appropriate guidelines in place.
Google stated, “Gemini Apps will soon be available for your child,” adding that “your child will be able to use Gemini to ask questions, get homework help, and create stories.” However, the company also acknowledged that the AI tool can make errors and encouraged parental supervision during its use.
Related: Google’s AI Gemini Can Now ‘Reason’ … But What Does That Mean?
Children will be able to access Gemini through web browsers and mobile app stores. On Android devices, they’ll be able to use Gemini as the mobile assistant, chat with the GenAI bot even when the device is locked, and use Google’s assistant help feature, as reported by Inside Halton. Parents will be notified when their child begins using the tool.
Google has emphasized that “Gemini isn’t human” and that “even though it sometimes talks like one, it can’t think for itself, or feel emotions.”
Gemini’s FAQ page explains that the AI tool can “hallucinate,” a term for instances where the model presents false information in a convincing way. This can happen in response to any user query, producing inaccurate details about people or other subjects.
While Gemini and similar chatbots are designed to be helpful assistants, it’s important to distinguish them from companion bots such as Character.ai, which simulate friends or romantic partners and pose greater risks, including offering potentially harmful advice to young users. Several companion bot tools, including Character.ai, are currently facing legal challenges over alleged harm caused.
Google appears to be taking steps to make Gemini safe for children. Hopefully, the company will proceed cautiously and keep evaluating whether the tool helps or harms young users over the long term.
Read Next: Google Releases ‘Most Capable’ AI Model to the Public