By Tendai Keith Guvamombe
The Canadian government is intensifying its efforts to regulate the digital landscape, moving toward stricter oversight of artificial intelligence (AI) chatbots and expansive online safety measures. These initiatives, advanced primarily through Bill C-27 and the Online Harms Act (Bill C-63), seek to balance rapid technological innovation with the protection of citizens. However, as the legislative framework takes shape in 2026, it has ignited a fierce national debate over potential government overreach and the erosion of fundamental privacy rights.
At the heart of the AI debate is the Artificial Intelligence and Data Act (AIDA), part of Bill C-27, which aims to establish clear rules for “high-impact” AI systems. The act emphasizes human oversight, transparency, and fairness, requiring developers to proactively mitigate risks of bias or discriminatory outcomes.
In May 2026, Canadian privacy regulators concluded a landmark investigation into OpenAI’s ChatGPT, finding that early training methods were non-compliant with federal laws due to the overcollection of personal data without valid consent. While OpenAI has since implemented improved protections, the case has served as a catalyst for officials to demand that AI regulation be firmly anchored in modernized privacy laws.
Parallel to AI oversight, the government has introduced the Online Harms Act (Bill C-63) to combat content such as child exploitation, hate speech, and non-consensual deepfakes. This bill proposes the creation of a Digital Safety Commission with the power to force platforms to remove harmful material within 24 hours.
Advocates argue these measures are essential to a safer internet, particularly for children, but critics warn of significant privacy infringements. Specific concerns have been raised about provisions that could allow "remote access" to platform data for compliance monitoring, which many view as a form of state-mediated surveillance.
The most contentious development involves proposals to apply the Online Harms Act directly to AI chatbots. Legal experts warn that treating private chatbot interactions like public social media posts would require dismantling core privacy safeguards.
Unlike social media posts, which are publicly amplified, chatbot prompts are typically one-to-one exchanges. Critics argue that forcing companies to proactively monitor these private conversations for harmful content would effectively turn AI providers into agents of law enforcement, creating a chilling effect on lawful expression and fundamentally altering the expectation of digital privacy in Canada.
