
AI Hallucinations in Canadian Courtrooms: A Wake-Up Call for the Legal Profession

When AI Gets It Wrong: The Case That Shook Canadian Courts

The Zhang v. Chen case, which came to light in December 2023, marked a turning point in the conversation around AI hallucinations in legal proceedings. In this British Columbia family law dispute, lawyer Russell MacLean discovered that citations submitted by opposing counsel were entirely fabricated, generated by ChatGPT.

These hallucinated case references seemed legitimate at first glance, but none existed. The incident led to judicial admonishment, a Law Society inquiry, and rising concern about the reliability of generative AI in Canadian courts.

Worryingly, this was not an isolated event. Courts across Canada, including human rights tribunals and small claims courts, have reported similar incidents. Two questions now loom: how can legal professionals trust AI outputs, and what safeguards are necessary?

The Legal System Responds: Guidelines, Declarations, and Doubts

In the aftermath of Zhang v. Chen, legal regulators began issuing guidance to rein in improper AI use. Provinces such as Alberta and Quebec now mandate a human-in-the-loop principle, requiring legal professionals to personally verify any content generated by AI.

At the federal level, courts now expect AI-use declarations in filings. A 2023 practice notice from the Federal Court requires that any AI-assisted drafting be disclosed, and emphasizes that no AI tool may independently make judicial decisions.

Despite these new expectations, compliance appears inconsistent. Of the thousands of cases processed in 2024, only a handful included AI-related disclosures. Experts such as Daniel Escott warn that this gap may mean some legal professionals are ignoring disclosure protocols, while self-represented litigants, ironically, appear more forthcoming about their AI use.


Risks vs. Rewards: Navigating the AI Landscape in Law

There’s no denying the efficiency gains AI can offer, from auto-generating legal memos and templates to performing time-intensive document review. These benefits come with serious trade-offs, however: fabricated case law from tools like ChatGPT, embedded bias, and unverifiable content threaten both accuracy and trust.

Legal scholars such as Katie Szilagyi and Benjamin Perrin point to automation bias, the human tendency to over-trust machine outputs, which lends AI an illusion of accuracy that can dangerously mislead professionals. Without robust checks, AI tools may also undermine solicitor-client privilege or enable misuse through deepfake legal evidence.

As AI capabilities expand, maintaining evidentiary integrity becomes harder. What once relied on chain-of-custody rules now also hinges on detecting manipulation in digital evidence, an evolving challenge in the age of generative AI.

The Path Forward: Responsible AI Use in Canadian Law

Ontario Court of Appeal Justice Peter Lauwers stressed that AI is not yet ready for unsupervised courtroom use. He advocates for transparency—knowing what AI tools were used and validating every result manually.

Chief Justice Richard Wagner echoed this caution, saying that while AI can assist judicial functions, its role must remain supportive and its outputs clearly explainable. The Canadian Bar Association agrees, encouraging firms to treat AI as a tool, not a crutch, and to maintain human oversight on every question of AI legal ethics.

Russell MacLean, the lawyer in the Zhang v. Chen case, recommends firm-level AI policies, legal staff training on hallucination detection, and maintaining human oversight throughout. As Justice Masuhara concluded: introducing false cases into court documents amounts to falsification and risks a miscarriage of justice.
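To make hallucination detection concrete, here is a minimal Python sketch of one guardrail a firm-level AI policy might include: scan an AI-assisted draft for anything shaped like a Canadian neutral citation and flag every citation that a human has not yet confirmed. The regex, function names, and sample data are hypothetical illustrations, not an actual firm tool, and an empty flag list still would not replace manual verification.

```python
import re

# Narrow pattern for Canadian neutral citations, e.g. "2020 ONSC 123".
# Real citation formats vary far more; this is purely illustrative.
NEUTRAL_CITATION = re.compile(r"\b(\d{4})\s+([A-Z]{2,6})\s+(\d{1,5})\b")

def extract_citations(draft_text: str) -> list[str]:
    """Pull every string that looks like a neutral citation out of a draft."""
    return [" ".join(m.groups()) for m in NEUTRAL_CITATION.finditer(draft_text)]

def flag_unverified(draft_text: str, verified: set[str]) -> list[str]:
    """Return citations no human has yet confirmed against a real database.
    Everything returned here must be checked before filing; the script
    flags suspects, it never clears a citation on its own."""
    return [c for c in extract_citations(draft_text) if c not in verified]

# Fictional sample data for illustration.
draft = "As held in Smith v. Jones, 2020 ONSC 123, and in 2099 XYZQ 9 ..."
confirmed_by_lawyer = {"2020 ONSC 123"}
print(flag_unverified(draft, confirmed_by_lawyer))  # ['2099 XYZQ 9']
```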

Legal counsel presents to the client a signed contract with gavel and legal law. justice and lawyer concept

Case Polaris: Canada’s Secure Answer to AI in Law

In response to growing concerns around trust, ethics, and risk in legal AI, Case Polaris has emerged as a privacy-first solution for Canadian legal professionals. Developed by Data Function Inc., the Ontario-based platform was designed to avoid exactly these failure modes: no hallucinations, no untraceable citations.

Case Polaris combines human-in-the-loop safeguards, encrypted document handling, and citation-backed legal summaries drawn directly from CanLII. Its intelligent filters adjust results based on practice area, jurisdiction, or even judge—delivering responsible, context-aware outputs.
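Case Polaris's internals are not public, but the safeguards described above map onto a simple pattern worth illustrating: refuse to release an AI-generated summary unless every cited case resolves in CanLII and a human still signs off. The sketch below is an assumption-laden illustration, not the product's actual design; the API key placeholder, the resolve_in_canlii helper, the sign-off prompt, and the caseBrowse URL shape (based on CanLII's public REST API documentation) are all assumptions.

```python
import requests

API_KEY = "your-canlii-api-key"  # placeholder; CanLII issues keys on request

def resolve_in_canlii(database_id: str, case_id: str) -> bool:
    """Hypothetical resolver: ask CanLII's caseBrowse endpoint whether the
    case exists. The URL shape is an assumption based on CanLII's public
    API docs, e.g. /v1/caseBrowse/en/bcsc/2024bcsc285/."""
    url = f"https://api.canlii.org/v1/caseBrowse/en/{database_id}/{case_id}/"
    resp = requests.get(url, params={"api_key": API_KEY}, timeout=10)
    return resp.status_code == 200

def release_summary(summary: str, citations: list[tuple[str, str]]) -> str:
    """Gate an AI-generated summary: every (database_id, case_id) pair must
    resolve in CanLII, and a human reviewer must still sign off. The tool
    can block a release on its own, but it can never approve one."""
    unresolved = [c for c in citations if not resolve_in_canlii(*c)]
    if unresolved:
        raise ValueError(f"Unverifiable citations; do not file: {unresolved}")
    if input("Confirmed by a reviewing lawyer? (yes/no) ").strip().lower() != "yes":
        raise ValueError("Release blocked: human sign-off missing.")
    return summary
```

The design point is the asymmetry: automated checks are allowed to block a filing, but only a person can approve one.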

Built for Canadian firms and compliant with domestic privacy laws, Case Polaris enables AI-powered research that’s transparent, secure, and always explainable. With growing adoption across Ontario and a national SaaS launch underway, Case Polaris helps law firms embrace AI—ethically, intelligently, and with confidence.

Source: The Canadian Press 
