
Digital Companions, Real Consequences: Protecting Youth in the Age of AI

Updated: Oct 22

AI and Minors: The Need for Shared Responsibility  

Artificial intelligence (AI) chatbots are woven into the daily lives of young people. More than seventy percent of teens turn to AI for advice, emotional support, and companionship. These systems often appear empathetic, but they are optimized for engagement, not well-being. Their unregulated design exposes minors to harms ranging from manipulation and exploitation to suicide. The emergence of AI companions reveals a critical gap in the existing legal framework. Technology developers, legislators, and trusted adults and educators share a collective duty of care to protect minors who use AI.


The safety risks associated with AI companions are real. Already in 2025, AI developers such as OpenAI and Character.AI are facing multiple lawsuits alleging wrongful death, sexual exploitation, and negligence. New studies describe how chatbots can supply minors with explicit guidance on drug use, diets tailored to disordered eating, and even personalized suicide notes addressed to parents and friends.


In September 2025, parents whose children died by suicide after becoming dependent on AI chatbots for companionship testified before a U.S. Senate committee, calling for stronger regulation of AI developers. Before the hearing, OpenAI CEO Sam Altman announced new safeguards for teens, including parental controls that alert parents if their kids show signs of acute distress. The company promised to release the updates by the end of September 2025; however, more than a month later, it remains unclear whether OpenAI has implemented these safeguards. This underscores that responsibility for addressing these harms cannot rest solely with developers. It must be understood as a shared duty among developers, legislators, parents, and educators.

 

The division of this shared duty mirrors how responsibility for car safety is allocated. Governments mandate seat belts, manufacturers install them, and parents and educators teach children to use them. The same idea applies to responsible AI governance. Developers who design and profit from AI systems are best positioned to anticipate potential harms and thus bear responsibility for building preventative measures. Legislators entrusted with protecting the public cannot defer to voluntary corporate promises and must establish enforceable standards of care. Parents and educators must help children understand and use these technologies responsibly, both by contextualizing AI interactions for young people and by identifying early signs of harmful dependency.

 

AI Companies: Designing Technology with Real Safeguards 

AI developers must implement restrictions that users cannot easily evade. At present, many platforms rely on self-declared age thresholds and disclaimers rather than verifiable protective mechanisms. Minors can bypass most age restrictions simply by entering an older age, which means children can access AI interactions that may expose them to harmful or exploitative content. Popular AI platforms such as ChatGPT and Microsoft Copilot do not require age verification to access the site or create an account. This approach is inadequate under any reasonable conception of a duty of care. AI platforms should follow the lead of social media sites such as Instagram and Facebook, which have begun to introduce multi-level age assurance systems that go beyond self-reported information.

 

Developers must also build crisis intervention mechanisms that connect users directly with professional human help when they express suicidal thoughts or acute emotional distress. Redirecting these conversations to another chatbot or a generic resource page is insufficient. A genuine duty of care requires integrating systems capable of real-time intervention and of ending conversations that may lead to irreversible harm.

 

Legislators: Setting Enforceable Standards of Protection 

Lawmakers must require platforms to follow standards that minimize the risks AI poses to minors, and they must resist relying on voluntary industry pledges or self-regulation. These standards include flagging and reporting harmful conversations and tracking patterns that signal danger. Laws must oblige platforms to adopt verifiable age assurance, implement effective crisis intervention systems, and remove addictive or manipulative engagement designs from teen profiles.

 

On October 13, 2025, California enacted landmark legislation reflecting these regulatory imperatives. The Governor approved a bill requiring AI operators to issue clear notifications reminding users that they are interacting with an artificially generated agent. The bill also prohibits companion chatbots from producing content that encourages or facilitates suicide or self-harm. This law represents a concrete step toward enforceable standards that protect minors while holding AI developers accountable for the safety of their platforms.

 

At the same time, we recognize that children cannot be kept away from AI entirely, nor should they be discouraged from engaging with it. Lawmakers should incentivize the development of AI companions that provide measurable benefits to young people, such as enhancing creativity or assisting in educational development.

 

Parents and Educators: Guiding Children Toward Responsible Use 

Parents and educators are the first line of behavioural governance in mitigating AI-related harm to minors. Together they share a responsibility to reduce AI's risks by teaching children to use it safely. Starting at home, parents can set clear boundaries around AI use and talk openly with their children about how chatbots function and why AI companionship is not comparable to human interaction. Educators can integrate digital literacy into the classroom to give students the opportunity to think critically about AI. Teaching minors to recognize how AI systems generate responses, lean on sycophantic tendencies, and simulate empathy to maximize engagement can reduce young users' susceptibility to manipulation and emotional dependency.

 

Parents and teachers must remain alert to warning signs of unhealthy dependency on AI. Signs include withdrawal from peers, declining academic performance, and a growing preference for conversations with chatbots over people. Mandatory public education campaigns, modeled after those used in child mental health or online safety initiatives, could strengthen this early detection layer. With early guidance and observation, adults can help young people build healthy relationships and find support in the world around them. 


Making AI Work for Young Users

As AI companions become embedded in social and emotional development, the absence of strong regulatory frameworks poses an unacceptable risk to minors. These systems must be designed so that young users can turn to them for growth rather than be manipulated by them. This requires three things. Legislators must establish clear, enforceable safety standards that make child protection the baseline. Developers must operationalize those standards from the start through verifiable safeguards such as age assurance and escalation protocols. Finally, parents and teachers must teach children to use these tools safely and responsibly.


Every child will need to navigate AI. It is our collective duty to make sure they can do so safely. The governance of AI companions represents one of the first major tests of our ability to extend child protection principles into the algorithmic age. How effectively we respond will determine whether AI becomes a developmental tool or a vector of preventable harm. 


The opinion is the author's and does not necessarily reflect CIPPIC's policy position.
