Does AI Belong in Canada’s Courtrooms? The Legal Sector Faces a Crucial Crossroads

The Case That Sparked a National Conversation

In December 2023, a British Columbia courtroom faced a startling revelation: two legal citations submitted in a child custody case were entirely fabricated—generated by ChatGPT. The case, Zhang v. Chen, drew national attention when opposing counsel Fraser MacLean discovered that the submitting lawyer had relied on the AI-generated case law without verifying that the cited decisions actually existed.

The incident led to a Law Society of British Columbia (LSBC) investigation, a judicial reprimand, and a wider debate about the reliability and oversight of AI in Canadian courtrooms.


A Growing Pattern: AI Missteps Across Canada’s Legal System

Since Zhang v. Chen, similar issues have surfaced across the country, including:

  • B.C. Human Rights Tribunal
  • Canadian Intellectual Property Office – Trademarks Opposition Board
  • B.C. Civil Resolution Tribunal

While some provinces like Alberta and Quebec require human-in-the-loop confirmation of AI-generated content, most Canadian courts lack enforceable disclosure policies. A Federal Court notice in December 2023 advised litigants to label AI-generated content—but out of over 250,000 filings in 2024, only a handful included such disclosures.

This inconsistency raises concerns about transparency, due diligence, and the potential for AI-generated misinformation to enter the judicial process unnoticed.


Experts Warn: Automation Bias, Privacy, and Deepfakes

AI can streamline legal tasks—but without proper safeguards, it can introduce new threats.

  • Automation Bias: Lawyers may place undue trust in AI outputs without verification, undermining accuracy and professional responsibility.
  • Privacy Concerns: AI-generated documents risk accidental exposure of solicitor-client privilege if content is mishandled.
  • Deepfakes in Court: Digital forgeries could compromise the authenticity of evidence and testimony.

As Professor Katie Szilagyi of the University of Manitoba warns, “Verifying and disclosing AI use is essential to protect client confidentiality and maintain the credibility of the legal process.”

UBC’s Benjamin Perrin echoes this caution, noting that AI could expand access but also amplify systemic flaws if used recklessly. His top concern: the rising threat of deepfakes in the courtroom.


Courts Are Cautious—But Are Lawyers Listening?

While Canada’s courts advocate for transparency and human oversight, law firms are adopting AI with minimal regulation. Professor Szilagyi reports that AI is regularly used for legal memos and research—often without disclosure.

Interestingly, self-represented litigants appear more open about their AI use than law firms. That contrast is prompting regulators to act. Ontario, for example, is drafting policies that may soon require:

  • Mandatory AI disclosure
  • Verification of AI-generated legal content
  • Auditable documentation of AI use in filings

The Canadian Judicial Council and Chief Justice Richard Wagner have emphasized that while AI can assist, judges must remain the final decision-makers, and courtroom use of AI must stay within ethical and legal boundaries.


The Verdict: AI Requires Oversight—Not a Ban

Legal professionals largely agree: AI shouldn’t be banned, but it must be regulated. Fraser MacLean himself now advocates for:

  • Clear usage policies
  • Lawyer training on AI risks
  • Strict content verification protocols

As Justice David Masuhara—who presided over Zhang v. Chen—put it:

“Filing fraudulent citations in court is abuse of process. Left unchecked, this will lead to a miscarriage of justice.”



Responsible AI Use Starts with the Right Tools

The risks of using open AI platforms without legal vetting are clear. That is why purpose-built platforms like Case Polaris matter: by providing legally vetted, secure, and privacy-focused AI tools, they offer firms a trusted path to efficiency without compromising integrity.

As Canada faces a crucial turning point in legal tech, the legal profession must balance innovation with responsibility. With the right tools, policies, and human oversight, AI can support—not threaten—justice.


📚 Source:
The Canadian Press, March 29, 2025
