Are AI Chats About Your Lawsuit Privileged in Canada?

Author(s): Robert M. Ben*

March 3, 2026


Sometimes legal advice can be confusing and not what a client necessarily wants to hear. Some clients may be tempted to turn to generative artificial intelligence (AI) tools like ChatGPT for a “quick second opinion.” They may also paste case information, such as medical or financial details, legal questions, and even snippets of their lawyer’s advice, into AI tools to see what comes back. The problem is that, in Canada, these AI interactions may not be covered by legal privilege and may be discoverable by the opposing party in the lawsuit.

What Is Discoverable in a Civil Lawsuit?

Ontario’s civil discovery framework requires each party to a lawsuit to disclose and produce to the opposing party all relevant and non-privileged documents in their possession, control, or power, so that cases are decided on their merits. The civil discovery rules already treat electronic records (emails, texts, social media posts, cloud files, etc.) as documents, and AI prompts and outputs would potentially fit into that category as well. That means AI chat logs are potentially producible in a lawsuit if they are relevant and not privileged.

Legal Privilege Explained

In Canada, communications between a person and their lawyer, or materials created for the dominant purpose of litigation, are privileged, meaning they are shielded from disclosure to the opposing party in a lawsuit. These privileges are called the lawyer-client privilege and the litigation (or lawyer work product) privilege, respectively. These privileges are important because they allow people to speak openly with their lawyer and prepare their cases without fear that those communications will later be used against them.

Why Lawyer-Client Privilege Does Not Protect AI Chats

Lawyer-client privilege protects confidential communications between a lawyer and client made for the purpose of seeking or giving legal advice. Even if AI generates what may sound like legal advice, AI is not a lawyer and cannot be treated as one. Moreover, many consumer AI tools operate under terms that permit the retention, internal use, and sharing of user information in some circumstances, all of which undermine the confidentiality necessary to establish privilege.

Waiving Lawyer-Client Privilege By Using AI

The single biggest danger is a client pasting actual legal advice or communications from their lawyer into an AI chat. This would be considered voluntary disclosure of privileged material to a third party and likely amounts to waiver of privilege. Privilege, after all, depends on an intention to keep the communications confidential. Intentionally sharing privileged information with AI tools may destroy that confidentiality.

Waiving Litigation Privilege By Using AI

Litigation privilege protects materials created for the dominant purpose of existing or reasonably anticipated litigation. This privilege protects the litigation process, not the lawyer-client relationship. It can apply to materials prepared by a client or their lawyer, including materials prepared before a lawyer is retained. Examples include making notes or preparing summaries or chronologies for the dominant purpose of reasonably anticipated litigation, or searching Google for statutes, legal commentary, or other information to educate yourself generally about your legal rights.

However, it is dangerous to assume that, just because you are planning to retain or are already represented by a lawyer, any documents you generate with the help of AI are privileged. If you use a consumer AI tool to analyze your case, test arguments, or draft a narrative, the resulting AI output may not be privileged.

Although Canadian courts have not yet ruled on this issue, a recent U.S. decision serves as a cautionary example. In United States v. Heppner, a man was charged with securities and wire fraud. On his own initiative, he used the consumer AI tool Claude to generate documents outlining possible defence strategies and anticipated legal arguments, which he later shared with his lawyer. The FBI seized the documents during a search. His lawyer asserted privilege. The New York District Court held that the AI-generated materials were not protected by either the lawyer-client privilege or the litigation (lawyer work product) privilege.

The New York District Court held that the lawyer-client privilege did not apply to the AI-generated materials. AI is not a lawyer, cannot form a lawyer‑client relationship, and cannot provide legal advice (in fact, many AI tools expressly disclaim that they provide legal advice). Communicating with AI is therefore legally analogous to discussing a case with a third party or friend, which is not privileged.

The court also rejected the claim of litigation privilege. The accused’s communications with AI were not confidential. The AI tool’s terms provided that a user’s inputs and AI’s outputs would be retained, analyzed, and used for AI training purposes, and subject to possible disclosure to a host of third parties, including governmental regulatory authorities.

The New York District Court also observed that the AI documents were prepared on the client’s own initiative, not at a lawyer’s request, direction, or supervision. The accused’s intent to eventually share the materials with his lawyer was insufficient to attract privilege.

The situation in Canada may be different. If litigation is reasonably anticipated and a person, on their own initiative, prepares documents for the dominant purpose of that litigation, litigation privilege may provide protection, provided those documents are shared only with their lawyer and not with a third party. Canadian courts would apply Canadian privilege doctrine, but the confidentiality and waiver analysis could be similar to the U.S. approach, depending on the AI platform’s terms and the user’s conduct. A client’s use of consumer-grade AI lacking robust contractual and technological protections (such as controlled access, segregation of data, and no AI training on inputs), where that use was not requested or directed by a lawyer, may lead a court to find waiver of litigation privilege.

What Lawyers Can Do

In many cases, clients feel the need to turn to AI for answers, not because they distrust their lawyer, but because they feel uncertain, anxious, or unable to get timely clarification when questions arise. Lawsuits are stressful, unfamiliar, and often move slowly. When clients are left waiting for answers, it is no surprise that some will look for immediate reassurance or explanation elsewhere, including AI. From a client’s perspective, asking an AI tool a “quick question” may feel harmless, but, as discussed above, can come at a very real legal cost.

Lawyers can reduce the risk of clients resorting to AI by being proactive about communication. Clear explanations, realistic expectations, and timely responses go a long way toward building trust and discouraging clients from seeking outside “second opinions” from tools that do not understand the nuances of their case and may inadvertently expose them to discovery risks. When clients understand that their questions are welcome, they are far more likely to come to their lawyer first. For lawyers, the message is simple: being available, communicative, and clear is not just good client service; it is also risk management.

This is not to say that lawyers cannot or should not use AI where appropriate. If a lawyer uses a secure, enterprise-grade AI tool (designed for organizational use with built-in controls for confidentiality, security, governance, and compliance, as opposed to consumer-grade or public AI tools intended for general use), the argument for privilege may be stronger. On the other hand, a client’s independent use of consumer AI tools without confidentiality controls will look much more like non-confidential personal research and notes rather than lawyer-directed litigation preparation, making claims for privilege more tenuous and uncertain.

What Clients Can Do

For clients, the takeaway is simple: if you have a question about your case, the safest and most effective source of answers is still your lawyer, not an AI chatbot. If you are unsure whether something is safe to share or ask, especially when it comes to AI, the right move is to ask your lawyer first. That conversation is privileged. An AI chat almost certainly is not. Until Canadian courts address the question directly, here are my recommendations for clients:

  • Above all else, listen to your lawyer’s advice and warnings about AI use.
  • Assume that any client-initiated AI chats on consumer AI tools are not private, confidential, or subject to any form of legal privilege and that they could be producible to an opposing party in a lawsuit if relevant.
  • Never paste your lawyer’s advice, strategy, drafts, or settlement positions into consumer AI tools, as doing so will likely be argued to waive privilege.
  • If you want help organizing thoughts, do it offline (in a Word document, for example) and send it directly to your lawyer and nobody else.

Robert M. Ben is a Personal Injury Lawyer and a Partner at Thomson Rogers LLP. Robert is listed in the peer-reviewed publications Lexpert® and The Best Lawyers™ in Canada and is ranked AV Preeminent in Martindale-Hubbell®. Robert can be reached at 416-868-3168 or by email.
