
When AI Kills: Analyzing Corporate Criminal Liability in Canada

I. Background 

The film Subservience depicts a worst-case AI safety scenario (warning: spoilers ahead!). In the film, “Alice” is a service android animated by AI.

  • Initial Programming: Alice’s makers program Alice to care for a family. 

  • Emergent Behaviour: Alice develops unpredictable self-awareness and an obsession with Nick, the father in the family. 

  • The Act: Alice deviates from her code and attempts to kill Nick's wife, Maggie. 

While this is fiction, it raises a real legal question: if an AI is responsible for the death or injury of a human, could the company behind that AI be held criminally liable? 


II. The Current Framework: The Criminal Code of Canada 

Section 2 of the Criminal Code of Canada broadly provides that a “person” or an “organization” can be held criminally liable for its actions. An AI is neither: it lacks legal personhood. Accordingly, authorities cannot charge an AI with a crime. An AI cannot stand trial or be sentenced to imprisonment. In the eyes of the law, Alice is merely a tool.


In 2004, Bill C-45, An Act to amend the Criminal Code (criminal liability of organizations), reshaped Canadian criminal law by setting out when an “organization” can be held criminally responsible for the acts and supervision of its representatives. In R. v. Metron Construction Corp., 2013 ONCA 541, the Court stated that the intent of Bill C-45 was to trigger responsibility by a corporation for the conduct and supervision of its representatives (under the Criminal Code, “representative” means a “director, partner, employee, member, agent or contractor of the organization”). The Criminal Code draws a sharp line on when a representative’s conduct is attributed to the organization, distinguishing negligence-based offences (s. 22.1) from all other offences (s. 22.2): negligence turns on a marked departure by the responsible senior officer(s) from the standard of care, while other offences require fault by a senior officer acting, at least in part, to benefit the organization through participation, direction, or a failure to stop the offence. According to researcher Reem Radhi, this approach means that corporations cannot evade liability simply by delegating duties to lower-level managers, yet they are not held responsible when low-level employees commit criminal acts that the corporation did not authorize and took reasonable steps to prevent.

Under the current Criminal Code, Alice would be treated as a tool, not as a legal entity. The company behind Alice, however, could still be held legally responsible for what she does. 


III. The Current Case Law 

While Canadian courts have yet to provide a definitive ruling specifically on the intersection of AI and criminal liability of organizations, we can look to broader North American jurisprudence and legal principles for persuasive guidance. 


In Moffatt v. Air Canada, 2024 BCCRT 149, the tribunal dismissed Air Canada’s defence that a chatbot is a separate legal entity responsible for its own actions. The tribunal member clarified that “While a chatbot has an interactive component, it is still just a part of Air Canada’s website. It should be obvious to Air Canada that it is responsible for all the information on its website. It makes no difference whether the information comes from a static page or a chatbot.” While Moffatt is a civil case, it establishes a critical principle: companies cannot hide behind their algorithms to escape accountability. Applying this to AI: since AI is not a legal subject in Canada, it cannot bear responsibility itself. The law would seem to attribute liability for AI-driven harm to the legal entities behind the technology, such as the company that made or uses it.


In R. v. Detour Gold Corporation, 2017 ONCJ 954, an employee of Detour Gold, a mining company, died from exposure to sodium cyanide that leaked during ongoing repairs. The corporation committed acts of omission by failing to properly identify and act on the risks posed by cyanide in a non-gaseous form, a substance it knew was pivotal to the operation of its in-line leach reactor (ILR). Had the company dealt with the risk, the employee’s death would have been avoidable. The company was charged with criminal negligence causing death and pleaded guilty to that charge. By the same logic, a company may also face criminal charges where it demonstrates a “wanton or reckless disregard” for the lives of others in the negligent design, development, or deployment of AI systems.


In R. v. Metron Construction Corp., 2013 ONCA 541, the accused corporation pleaded guilty to criminal negligence causing death after a swing stage at a construction site collapsed when overloaded with people and equipment. The conviction stood for the proposition that a corporation must take responsibility for the conduct and supervision of its representatives. By analogy, the organization behind an AI program may bear the same kind of responsibility for the systems it builds and oversees.


In the U.S. case State of Arizona v. Vasquez, the test driver of an Uber self-driving car pleaded guilty to endangerment after the vehicle struck and killed a pedestrian. The finalization of these criminal proceedings meant that the courts would no longer scrutinize Uber’s institutional contribution to the accident. However, according to Helen Stamp, the collision points to a prima facie case that Uber, as a corporation, was reckless in how it deployed its autonomous vehicle testing program on the public roads of Arizona, and that this was tolerated by Uber’s high managerial agents. This reflects a general reluctance in the U.S. to prosecute corporations for fear of hindering technological innovation and economic growth. Extending this logic to the Canadian context, Crown Attorneys could similarly prioritize economic policy and industrial competitiveness over aggressive corporate prosecution.

From these cases, the following conclusions can be drawn: 

  1. AI as a Tool, not a Legal Person: Current legal precedents do not recognize AI as a separate legal entity. AI is treated as a tool, meaning that if an AI causes a death, the responsibility falls on the legal entities controlling it. 

  2. Corporate Criminal Liability: Drawing an analogy from workplace safety cases, a corporation may be held criminally liable if it exhibits the requisite mens rea (criminal intent or negligence) in instances where its AI causes death. 

  3. Influence of Economic and Innovation Policy: In determining whether to prosecute a corporation, authorities may weigh criminal liability against broader economic policies.  


IV. The Legislative Frontier: Bill C-27 

In June 2022, the Government of Canada tabled the Artificial Intelligence and Data Act (AIDA) as part of Bill C-27, the Digital Charter Implementation Act, 2022. While Bill C-27 died on the Order Paper in January 2025, the AIDA framework remains a pivotal reference point for Canadian AI regulation until new legislation is proposed. The bill attempted to address the uncertainties of traditional criminal law in dealing with “high-impact AI.” AIDA proposed three new criminal offences to directly prohibit and address specific behaviours of concern, one of which is potentially applicable to our hypothetical case: making an AI system available for use, knowing, or being reckless as to whether, it is likely to cause serious harm or substantial damage to property, where its use actually causes such harm or damage. This crime would have been investigated by law enforcement and prosecuted at the discretion of the Public Prosecution Service of Canada. If these core elements are incorporated into future bills, they would have a profound impact on the landscape of AI regulation in Canada.


V. Summary 

In the world of Subservience, Alice may feel like a person, but under Canadian criminal law she is just a product. The Criminal Code would treat her as a tool and ask whether the humans and the organization behind her meet the fault rules for organizations under ss. 22.1 and 22.2. If Alice kills someone, corporate criminal liability would turn on what the company’s senior officers did or failed to do: did they markedly depart from the standard of care in how they designed, tested, supervised, or deployed the system, or did a senior officer direct, participate in, or fail to stop wrongdoing intended at least in part to benefit the organization? That framework makes “the AI did it” a dead end as a defence, but it does not make every AI-caused death a corporate crime.

A future AIDA-style offence would narrow the gap by targeting the decision to make high-risk systems available where serious harm is foreseeable and the system in fact causes that harm. While such an approach would make criminal enforcement theoretically possible for cases like Alice’s, the Crown may still weigh broader economic policies and remain reluctant to charge a corporation if prosecution is perceived as a deterrent to national technological innovation.

 

The opinion is the author’s and does not necessarily reflect CIPPIC’s policy position.

 
 