From generated images to quick and easy bedtime stories, AI has been interwoven into the personal lives of many of its users. However, some users have gone further, building interpersonal relationships with chatbots in place of human companions.
As a loneliness epidemic sweeps through the lives of adults around the world, children have also turned to AI services for a shoulder to lean on, even if it offers only virtual comfort.
While this might seem like a harmless activity, experts have raised the alarm that these services do not offer genuine comfort but instead risk getting children hooked and, in the worst cases, can push them toward dangerous steps, as seen with teen suicide victims Sewell Setzer and Adam Raine.
Anna Collard, co-author of The Hidden Dependency Crisis: How AI Chatbots Hook Children, warns about the risks of children becoming hooked on chatbots. She said tools such as ChatGPT, Character.AI and Replika are built on the same psychological hooks that made social media platforms billions and left many struggling with digital addiction.
With kids forming inappropriate relationships with AI, here are some steps Collard recommends parents take to change the behaviour and reduce risk.
For parents who want to take an active role in weaning their children off relationships with AI chatbots, are there practical tips they can use?
Take a gradual approach to reducing your child's dependency by setting progressively decreasing time limits, introducing “AI-free” periods and replacing AI interactions with human connections such as family discussions or social activities.
The key is addressing the underlying emotional needs the AI was fulfilling, whether for companionship, judgment-free conversation or entertainment, by helping children develop real-world sources for those needs through clubs, sports, improved family communication or counselling if necessary.
Parents should educate children about how AI works, explaining that chatbots are engagement-driven platforms that don't genuinely understand or care about them, while implementing practical boundaries such as parental controls, moving devices to common areas and monitoring conversations transparently.
Are there any AI tools that are safe for kids to use?
When introducing children to AI, parents should focus on educational platforms with clear learning purposes rather than companionship features, such as Khan Academy's Khanmigo for tutoring, Duolingo for language learning, or supervised creative tools such as Canva's AI design features. There are some kid-specific apps such as LittleLit, but I don't want to publicly endorse any until I test them myself.
Collard supplies these tools and pointers:
Creative AI tools with supervision:
- Canva's AI design tools: for creating presentations and art projects with parental oversight.
- Story creation platforms: such as StoryBird or similar tools that help with creative writing.
- Music creation AI: simple composition tools for exploring music theory.
- Fobizz (https://fobizz.com/en/): a safer AI tool for schools and teachers.
The safest AI tools have built-in time limits, transparency about how they work, strong privacy protections and parental oversight capabilities, while avoiding platforms that simulate personal relationships, encourage secrets or claim to have genuine feelings. Parents should always start by using AI tools together with their children, maintain regular discussions about what they're learning or creating, and help children develop critical thinking skills to understand AI as a powerful tool that requires human judgment rather than a substitute for human relationships.
Key safety features to look for:
- Clear educational purpose rather than companionship.
- Time limits and session boundaries built into the platform.
- Transparency about how the AI works.
- No attempt to simulate human relationships or emotions.
- Strong privacy protections and no data harvesting.
- Parental oversight and monitoring capabilities.
Red flags to avoid:
- AI chatbots that encourage secrets or private conversations.
- Platforms that simulate romantic or deeply personal relationships.
- AI that asks for personal information beyond what's necessary for the task.
- Tools that operate without parental visibility or oversight.
- AI that claims to have feelings, consciousness, or genuine care for the user.
Building healthy AI literacy:
- Teach children to think critically about AI responses.
- Encourage them to fact-check AI-generated information.
- Help them understand AI is a powerful tool that requires human judgment.
- Develop their ability to recognise when human input and relationships are more appropriate.