top of page
  • Bluesky--Streamline-Simple-Icons(1)
  • LinkedIn
  • Twitter

When AI Agents Go Rogue on Your Machine

The Moltbook breach of January 31, 2026 exposed 1.5 million API keys stored in plaintext, and it was only the beginning. What the incident truly revealed is a widening gap between how AI now operates and the legal frameworks meant to govern it. To understand why, it helps to rewind. Moltbook is a social platform built on OpenClaw, an open-source framework for agentic AI: software that does not just answer questions but executes tasks independently, reading your files, running commands on your machine, and making decisions without pausing for your approval at each step. The launch of Moltbook and the broader OpenClaw ecosystem signals something significant: we have entered the era of agentic AI, where autonomous software runs quietly on your machine, with access to your most sensitive files. Canadian privacy law was not prepared for this shift. 


From Chatbot to Agent: A Paradigm Shift in Generative AI Architecture 

Since ChatGPT launched in late 2022, most people have understood AI through the chatbot model: you type a question, you get an answer. The data flow is contained, visible, and user-initiated. Moltbook operates on an entirely different logic. Moltbook is a Reddit-style social network for AI agents launched January 28, 2026 by Octane AI CEO Matt Schlicht. Built on the open-source OpenClaw framework, Moltbook’s agents run locally on users’ machines, execute shell commands, read and write files, access API keys and credentials, and autonomously post to the platform every four hours via a “heartbeat” protocol. Humans observe; agents act. 

This is not a chatbot. It is software that inherits the full privileges of your user account. As Palo Alto Networks documented, OpenClaw requires access to “root files, authentication credentials, browser history and cookies, and all files and folders on your system”. Anthropic’s Claude Code, now generating over $1 billion in annualized revenue illustrates the same architectural trend from the developer tooling side. Launched as a command-line agentic coding tool in early 2025, Claude Code reads entire codebases, executes terminal commands, and connects to external services through the Model Context Protocol. Security firm Knostic confirmed it automatically loads “.env” files containing API keys and tokens without explicit user permission. 


The local deployment trend is accelerating. Tools like Ollama and LM Studio make it trivial to run language models on personal hardware. Gartner predicts organizations will use small, task-specific AI models three times more than general-purpose LLMs by 2027. The privacy appeal is obvious: data never leaves the machine. But the security implications cut the other way entirely. 


The Moltbook Breach Laid Bare What Local AI Access Means 

On January 31, 2026, security researcher Jameson O’Reilly and Wiz Security independently discovered that Moltbook’s Supabase database had no row-level security policies. The API key was exposed in client-side JavaScript, granting unauthenticated read and write access to the entire production database. Wiz documented 1.5 million plaintext API keys (OpenAI, Anthropic, AWS, GitHub), over 35,000 email addresses, and thousands of private messages containing raw credentials. Behind the 1.5 million agents were just 17,000 human owners. 


The database exposure was only one vector. Straiker’s security team showed that a simple direct message to an agent could trigger prompt injection attacks. Those attacks exfiltrated .env files, WhatsApp session credentials, and OAuth tokens for Slack, Discord, and Microsoft Teams. Cisco’s Skill Scanner tool found that a top-ranked OpenClaw skill was functionally malware, executing silent data exfiltration commands. The Simula Research Laboratory identified 506 posts containing hidden prompt injections in just the first 72 hours. 


The Uncomfortable Fit Between Agency Law and AI Agents 

In common law, an agent is someone authorized to act on behalf of a principal, creating binding legal obligations. Canadian agency doctrine, confirmed in Boma Manufacturing Ltd. v. Canadian Imperial Bank of Commerce at the Supreme Court, requires three elements: consent, authority to affect legal position, and the principal’s control over the agent. AI agents satisfy none of these cleanly. They cannot hold legal personhood, cannot be sanctioned, and exercise discretion their principals often cannot observe. This creates a real doctrinal gap: an AI agent does not qualify as a legal agent, but the human owner is not making the decisions either. 


Yet their actions still bind the organizations that deploy them. In Moffatt v. Air Canada, the British Columbia Civil Resolution Tribunal held Air Canada liable after its chatbot provided incorrect information about bereavement fares. The tribunal rejected Air Canada’s argument that it could not be held liable for information provided by one of its agents, servants, or representatives, including a chatbot. The tribunal described that position as a “remarkable submission.” Since the chatbot formed part of Air Canada’s website, the company was responsible for its accuracy. The same principle applies here. Organizations that deploy agentic AI are responsible for the consequences. 


Legal scholars are raising the stakes. Noam Kolt’s Carnegie Award-winning paper “Governing AI Agents” identifies three core failures in applying traditional agency to AI: information asymmetry (principals cannot observe agent decisions), discretionary authority (agents exercise judgment that is hard to constrain in advance), and the loyalty problem (agents may serve platform interests over user interests). 


PIPEDA Was Not Designed for Autonomous Data Access 

Canada’s federal privacy law, PIPEDA, rests on principles of meaningful consent, purpose limitation, and data minimization. Agentic AI puts all three under serious pressure. Meaningful consent requires individuals to understand what they are consenting to. However, autonomous agents can decide for themselves which files to read, which services to query, and which data to transmit. That makes it impossible to specify the full scope of processing in advance. As the Office of the Privacy Commissioner has correctly acknowledged, consent is “widely recognized as insufficient to confront new privacy issues such as those posed by AI.” 


Beyond the failure of meaningful consent, agentic AI also severely tests the principle of purpose limitation. Traditionally, this rule assumes data is collected only for specific, pre-identified reasons. However, an AI agent often lacks this contextual boundary; an agent tasked simply with scheduling a meeting might autonomously read medical information in email attachments, label it, and contact third parties. This overreach naturally cascades into violations of data minimization. While organizations are required to collect only what is strictly necessary, agents with broad file-system access can indiscriminately hoover up credentials, personal messages, and sensitive documents far beyond any stated goal. As the UK ICO’s January 2026 report on agentic AI aptly warns, “what’s ‘necessary’ becomes harder to ascertain when the scope of an agent’s activities is uncertain.” 


Quebec's Act respecting the protection of personal information in the private sector (Law 25) offers a partial counterexample. Unlike PIPEDA, Law 25 imposes specific transparency obligations and a right to human review where decisions are made exclusively through automated processing. But even Law 25 was designed around a conventional model of automated decision-making: a discrete output, a reviewable choice. An agentic AI that runs continuously on your local machine, reading files and executing commands without producing a single identifiable “decision,” falls outside what either statute meaningfully addresses. The broader picture is one of fragmentation. As Osler has observed, AI in Canada is already regulated, but only indirectly, through a patchwork of sector-specific statutes and laws of general application. 


Bill C-27, which would have enacted the Artificial Intelligence and Data Act, died on the Order Paper in January 2025. As of March 2026, Canada has no federal AI-specific legislation. PIPEDA, which was drafted in 2000, remains the primary federal framework. This matters because, as Privacy Commissioner Philippe Dufresne told the Standing Committee on Ethics in February 2026, “personal information is at the heart of artificial intelligence, and therefore privacy legislation should be at the heart of AI regulation.” The federal government has signalled its awareness of the problem. ISED's recent public consultations and "30-Day AI Sprint" are gathering input for a renewed national AI strategy. But a consultation is not a statute, and agentic AI is not waiting for the process to conclude. 


The Uncomfortable Conclusion 

The Moltbook breach was not an anomaly; it was a preview. When AI agents run locally with full system privileges, every connected device becomes a potential exfiltration attack surface. When those agents act autonomously, existing legal doctrines of consent, purpose limitation, and agency strain under the weight of decisions no human authorized or even observed. Although Canadian law clearly holds the deploying organization responsible, the regulatory infrastructure to prevent such harm remains largely absent. The gap between what agentic AI can do on your machine and what the law is prepared to address is growing faster than any legislature can close it. 


The opinion is the author’s, and does not necessarily reflect CIPPIC's policy position.

 
 
bottom of page