
When AI “mocks” your favourite singer’s voice: what are the potential legal risks?

Do you have a favourite singer? 


Maybe you’ve daydreamed about what it would sound like if they sang your favourite song — even one they’ve never recorded. 

 

Has Taylor Swift ever sung Call Me Maybe? Officially, no. 


But a quick search on YouTube might make you hesitate. With AI-generated voice technology, you can now hear something eerily convincing: “Taylor Swift” singing a song she never recorded. In this video, the voice sounds familiar. The style feels right. And yet — it isn’t really her. 


Fun? Absolutely. 


Harmless? That’s where things get complicated. 


AI-generated voices have moved from novelty to controversy. In 2023, an AI-generated track mimicking Drake and The Weeknd surfaced online and was even reportedly submitted for consideration at the Grammys, sparking debate over authenticity and rights. Around the same time, users criticized OpenAI after noting that its ChatGPT ‘Sky’ voice sounded strikingly similar to Scarlett Johansson’s. Johansson later stated publicly that the voice was so similar to hers that her closest friends could not tell the difference, and that she had declined OpenAI’s earlier request to voice the assistant. These episodes highlight how AI voices can blur the line between imitation and legal risk — particularly in the U.S., where voice owners may frame such uses as violations of the right of publicity. 


Under Canadian law, individual users may face real legal risk when using AI voice-cloning tools. When users upload audio, train a mock voice, and publish the output themselves, potential liability is most likely to arise in three areas: copyright, passing off, and misappropriation of personality. 

 

Copyright: Works Are Protected — Voices Are Not 

From a copyright perspective, legal risk arises when users upload copyrighted recordings without permission, or when AI-generated output is substantially similar to a specific existing sound recording. Training a voice model often involves copying audio material, such as songs, interviews, or sound recordings. Under s. 3(1) of the Copyright Act, reproducing a protected work without permission may infringe the copyright holder’s exclusive right of reproduction. 


In practice, users can reduce this risk in several ways. One common path is relying on licences — for example, using recordings they own, materials expressly licensed for reuse, or datasets made available under open licences. Works in the public domain may also be used freely, provided no separate rights subsist in the particular recording. In limited circumstances, fair dealing may apply, particularly for purposes such as research, education, parody, or satire. 


Importantly, Canadian law does not recognize a general “voice ownership” or right of publicity that would automatically prevent a person’s voice from being used as training material. As a result, AI trainers are unlikely to face liability simply for teaching a model how voices sound. Nevertheless, the law becomes far more concerned when AI-generated voices are released into the public sphere and begin to resemble real people too closely. 

 

Passing Off: A Difficult Claim, but Not Impossible 

Passing off, a form of unfair competition under Canadian common law, was clarified by the Supreme Court of Canada in Kirkbi AG v. Ritvik Holdings Inc. The Court set out a three-part test: 

(1) the claimant must possess goodwill, 

(2) the defendant must make a misrepresentation likely to cause public confusion, and 

(3) the claimant must suffer damage as a result. 

 

Passing off does not punish imitation itself. It targets confusion in the marketplace. For an individual user, liability is unlikely unless their AI-generated content leads the public to believe that a real singer is involved, has endorsed the work, or actually performed it. 

 

This makes passing off a relatively high bar in many AI voice scenarios — especially where users clearly label content as AI-generated. That said, disclaimers are not foolproof. Even if a user labels content as “AI-generated,” legal risk may remain where the overall presentation still relies on a real singer’s reputation or appeal to attract attention or value. 

 

Misappropriation of Personality: When AI Voices Become Commercial Assets 

Using a real person’s identity for commercial gain can trigger liability for misappropriation of personality under Canadian law. This liability arises when someone uses a person’s “name, likeness, or voice for the purpose of advertising or promoting goods or services, or otherwise for the user’s own gain, without the person’s consent,” as summarized by privacy law scholar Barbara von Tigerstrom, drawing on the British Columbia Court of Appeal’s decision in RateMDs Inc. v Bleuler. 


Canada regulates personality exploitation through a mix of statutes and limited common law doctrines. Manitoba, Saskatchewan, and Newfoundland and Labrador treat unauthorized use of a person’s name or likeness as a form of privacy invasion. For instance, Manitoba’s Privacy Act addresses the issue in section 3(c), which lists such use as an example of a violation of privacy. British Columbia, by contrast, creates a separate statutory tort of misappropriation of personality focused on commercial use. Courts acknowledge some overlap between these statutory regimes and the common law, but their relationship remains unsettled. 


The common law tort of misappropriation of personality rests on a narrow rationale developed in Krouse and Athans. Courts intervene only when a defendant uses a recognizable aspect of a person’s identity as a commercial asset (Athans). The law does not prohibit discussion, depiction, or commentary about public figures. It targets situations where someone takes what the individual could have licensed and uses it for private gain (RateMDs Inc. v Bleuler at para 112). 



AI-mocked voices may fit within the Athans framework. When users deploy AI to generate a voice that clearly evokes a specific singer and use it to drive traffic or revenue, they risk appropriating a marketable aspect of that singer’s persona. Even clear “AI-generated” labels do not resolve the issue.  

 

What other legal tools might address AI-mocked voices? Recent developments in the United States point to emerging, experimental responses. 

In 2026, Matthew McConaughey reportedly turned to trademark law to protect elements of his voice, likeness, and signature expressions, aiming to create a federal enforcement pathway against AI misuse in the United States. This strategy remains largely untested and uncertain. Whether trademark law can meaningfully protect voices, and whether similar approaches could translate into the Canadian legal system, remains an open question. For now, these efforts illustrate creative lawyering at the edges of the law, and we will have to see how courts respond. 

 

In conclusion, as AI tools advance faster than legal frameworks, courts and lawmakers must rethink how copyright rules, passing-off doctrines, and the tort of misappropriation of personality apply in practice. When machines can convincingly sing, speak, or perform like real people, the deeper challenge lies in balancing an open public domain that supports creativity and experimentation against the need to protect singers, actors, and other public personalities whose voices, skills, and labour made that creativity possible in the first place. 

 

The opinion is the author's, and does not necessarily reflect CIPPIC's policy position.  

 
 