Last Updated:
Intellectual Property Cases
Infringement Cases
Toronto Star Newspapers Ltd et al v OpenAI, Inc et al, Ontario Superior Court of Justice (CV-24-00732231-00CL): A group of Canadian news media publishers including the Toronto Star, Postmedia, The Globe and Mail, the Canadian Press, and the CBC are suing OpenAI for using their content in various models within ChatGPT. The allegations include copyright infringement, circumvention of technological protection measures, breach of terms of use, and unjust enrichment. The publishers are seeking statutory and punitive damages and a permanent injunction. In late 2025, the Ontario Superior Court of Justice dismissed OpenAI's motion to strike for lack of jurisdiction. As of early 2026, the case remains pending and is in the discovery phase.
CanLII v Caseway AI, British Columbia Supreme Court (VLC-S-S-247574): On November 4, 2024, CanLII sued Caseway AI for scraping CanLII's database of cases, legislation, and secondary sources and using it within Caseway's AI product. CanLII alleged that Caseway bulk downloaded materials in violation of terms of use and infringed CanLII's copyright. CanLII alleged breach of contract, copyright infringement, conversion, and unjust enrichment, and claimed punitive damages and a permanent injunction. On March 20, 2026, the parties announced a confidential settlement. According to the press release, "CanLII continues its work providing broad public access to primary legal information. Caseway continues developing technology solutions for organizations that operate in complex, document-heavy environments. Each will move forward independently, and both consider the matter fully and finally resolved."
Authorship / Inventorship Cases
Re Thaler, 2025 CACP 8: In a test case relating to a patent application filed by the computer scientist Stephen Thaler, the Patent Appeal Board held that an AI system is not a valid "inventor" under Canadian patent law – rather, on a proper statutory interpretation of the Patent Act, only humans can be "inventors". As a result, Thaler's application, which named his AI system "DABUS" as the inventor, was rejected for failing to name a proper inventor and for lack of entitlement to file a patent, given the absence of any assignment from a valid inventor. The PAB decision is not binding law but will likely be followed by patent examiners in Canada. On December 5, 2025, Thaler filed a Notice of Appeal at the Federal Court, alleging that the PAB erred in its interpretation of the Patent Act.
CIPPIC v Sahni, Federal Court File No. T-1717-24: The Samuelson-Glushko Canadian Internet Policy and Public Interest Clinic filed an application to expunge a copyright registration for the painting Suryast, which was permitted to be registered with an AI co-author together with a human co-author. CIPPIC argues that the generative AI tool (RAGHAV AI) used to create Suryast followed only a mechanical, algorithmic process that did not entitle it to copyright protection. As of early 2026, the matter remains ongoing – the Court has asked the parties to file expert evidence to assist in understanding the technical underpinnings of generative AI. This case could address the issue of whether generative AI is capable of exercising the requisite “originality” for copyright protection.
Class Actions
MacKinnon v Meta Platforms / Nvidia Corp / Anthropic / Databricks, British Columbia Supreme Court (VLC-S-S-255562, VLC-S-S-255561, VLC-S-S-253893, and VLC-S-S-252936): In mid-2025, J.B. MacKinnon, the author of 100 Mile Diet and The Once and Future World, commenced four class action lawsuits as a representative plaintiff claiming that large tech companies (Meta, NVIDIA, Anthropic, and Databricks) used his and other authors' copyrighted works to train their artificial intelligence models without permission or compensation. The core allegations are similar to those in corresponding cases in the United States and the United Kingdom: it is alleged that the tech companies downloaded and used pirated copies of books or e-books (such as a dataset called The Pile that contained the allegedly unlicensed Books3 database) to train their models, and/or circumvented various technological protection measures on e-books. The class actions also allege that the companies attempted to cover up their use of such materials, and seek punitive damages in addition to compensation.
Doan v Clearview AI Inc, 2025 FCA 133, 2023 FC 1612: The representative plaintiff alleges that her copyright and moral rights in photographs she took were infringed by Clearview AI, which is alleged to have built a vast database of images by crawling the internet for photographs and running them through facial recognition software to provide a service (predominantly to law enforcement and national security agencies) for searching for people by their faces. Claims were also advanced relating to breach of privacy laws. In 2023, the Federal Court refused to certify the proposed class action because some of the members of the class could not self-identify to opt in or out. On July 16, 2025, the Federal Court of Appeal reversed, finding that the certification judge erred in concluding that the proposed alternative query methods to identify class members transformed the action into an impermissible opt-in scheme. The FCA directed the certification motion to be redetermined with further consideration of workable methods to identify class members before the opt-out period.
Robillard v Meta Platforms Inc, Superior Court of Quebec, 500-06-001369-251: Anne Robillard, a children’s book author, applied to commence a class action on March 21, 2025 claiming that Meta used pirated books to train its LLMs. Robillard claims statutory damages for copyright infringement in the amount of $20,000 per party per work. As of late 2025, the proceeding is ongoing. The Court granted leave to adduce evidence for the hearing of the certification application.
Robillard v OpenAI, Inc, Superior Court of Quebec, 500-06-001414-255: Anne Robillard, a children's book author, applied to commence a class action claim on September 4, 2025 claiming that OpenAI used pirated books, obtained illicitly on sites such as LibGenesis and via BitTorrent, to train its LLMs. Robillard claims statutory damages for copyright infringement in the amount of $20,000 per party per work. As of March 2026, the proceeding is ongoing and in its early stages.
Jackson v OpenAI GP, LLC, BC Supreme Court, S-246286: On September 11, 2024, Michael Dean Jackson, an artist, sued OpenAI for scraping and using copyrighted digital art to train its DALL-E text-to-image models. It is alleged that the defendants made copies of the artworks for internal model training, and that the models produce output that is substantially similar or identical to the original artworks, thereby infringing the Copyright Act. The plaintiff claims compensatory damages, profits, punitive damages in the amount of $200 million, and an injunction. As of March 2026, the proceeding is ongoing. OpenAI has brought an application to strike, dismiss, or stay the case for lack of jurisdiction in British Columbia, which has not yet been heard.
Product and Tort Liability Cases
Moffatt v Air Canada, 2024 BCCRT 149: A passenger sued Air Canada after he relied on incorrect advice from Air Canada's AI chatbot, which told him that he could retroactively apply for "bereavement fares". He later learned from a human agent that Air Canada did not permit retroactive applications, and was denied a refund by Air Canada. The passenger alleged negligent misrepresentation by Air Canada (and its chatbot) for providing wrong information, and sought to hold Air Canada to the chatbot's representation that a retroactive claim and refund were available. The BC Civil Resolution Tribunal allowed the passenger's claim. The Tribunal found that Air Canada owed a duty to the passenger and did not take reasonable care to ensure that its chatbot was accurate. The existence of other portions of Air Canada's website providing contradictory information did not trump the chatbot, since a passenger could not know which source was inherently more trustworthy. The Tribunal rejected Air Canada's submission that the chatbot was a separate legal entity responsible for its own actions.
Gebala v OpenAI Foundation, BC Supreme Court, S-261734: On March 9, 2026, victims of the Tumber Ridge Mass Shooting tragedy sued OpenAI based on the shooter’s use of ChatGPT to discuss and plan the shooting. It is alleged that OpenAI had flagged the shooter’s ChatGPT conversations for violation of user policies due to discussion of gun violence, which were then reviewed by OpenAI’s human moderators and other employees. The employees are alleged to have escalated the shooter’s conversations to OpenAI leadership, who took no action other than to ban an account.
Privacy Cases
Clearview AI Inc v BC (Information and Privacy Commissioner), 2026 BCCA 67 and 2024 BCSC 2311: The BC Information and Privacy Commissioner found that Clearview AI contravened BC privacy legislation through its mass scraping of digital photographs for its facial recognition tool. This was a joint finding between the BC, Alberta, Quebec, and Federal privacy commissioners. The report found that the exemption normally applicable to "publicly available" images did not apply and that individual consent was required for Clearview's use. Clearview argued in BC that the privacy legislation (PIPA) was unconstitutional. The BCSC and BCCA have now twice upheld the BC Privacy Commissioner's decision and affirmed that it had jurisdiction over Clearview (a US company).
Clearview AI Inc v Alberta (Information and Privacy Commissioner), 2025 ABKB 287: As in the BC case, the Alberta Privacy Commissioner found that Clearview had breached Alberta privacy laws under PIPA Alberta. Clearview challenged the legislation on constitutional and jurisdictional grounds. The Alberta Court of King's Bench held that PIPA applied to Clearview but also found that Clearview's automated scraping technologies may be protected by freedom of expression. Further, the court held that a mandatory consent regime under PIPA Alberta was unconstitutional and that the "publicly available" exemption to consent was too narrow as presently worded. Clearview AI has appealed certain findings of the ABKB; the appeal remains pending as of early 2026.
Deepfake Cases
R v MSK, 2026 NSPC 12: An accused in Nova Scotia was partially acquitted of charges under the Criminal Code, s. 162.1(1) relating to publishing intimate images of complainants without consent. The alleged images at issue were sexually explicit "deepfakes" created using generative AI tools based on public (non-intimate) images of the complainants posted on social media. In a judge-alone trial, the judge held that the legal definition of "intimate images" under the Criminal Code, which includes the words "visual recording" of a person, could not be interpreted to include deepfakes. The Court conducted a lengthy review of the statutory history, legal precedents, and commentary on the topic. While recognizing the harms caused by deepfakes and the fact that these images were non-consensual, the Court noted that various proposed amendments to the Criminal Code (most notably Bill C-16, currently pending) and other provisions (including previously failed or abandoned proposals) would have included protections against deepfakes, but the law as it stood at the time of trial did not. (As of March 2026, Bill C-16, titled the Protecting Victims Act, is in committee after passing Second Reading.)
Professional Practice Cases
Note: Readers should consult CourtReady.ca, which has compiled a much more comprehensive database of fictitious legal citations and has created a tool to spot fake citations.
Iyer v Nazir, 2026 ABCA 92: The applicants relied on several citations, apparently created by generative AI, that did not support the propositions advanced – real cases that had nothing to do with the areas of law for which they were cited. The Court cautioned the applicants to verify their work but did not impose costs consequences.
Reddy v Saroya, 2025 ABCA 322 and 2026 ABCA 20: The appellant’s counsel filed a factum containing non-existent cases. The lead counsel had hired a contractor, who denied using a large language model. Counsel claimed he did not have time to verify the cases. The Alberta Court of Appeal noted in strong terms that the Practice Notice in Alberta Courts requires a “human in the loop” at all times, and that time to verify cases must be built into case planning by every lawyer no matter how busy. The lawyer whose name appears on the filed document bears ultimate responsibility for the material. The Court found that it was not mere inadvertence, but “serious misconduct” warranting costs payable by counsel personally in the amount of $17,550 plus GST.
Kapahi Real Estate Inc v Elite Real Estate Club of Toronto Inc, 2026 ONSC 1438: Counsel delivered a factum that cited real cases with correct citations, but with fake hallucinated quotations. The Court suspected generative AI was used, but did not specifically make that finding. Counsel denied the use of AI but did not offer a suitable explanation for the Court. The Court referred the decision to the Law Society of Ontario to investigate whether there was unchecked use of AI or deliberate falsification of law.
National Indigenous Fisheries Institute v Canada (Fisheries and Oceans), 2026 FC 382: Counsel filed a factum with undisclosed AI use (contrary to the Federal Court's Practice Direction regarding the use of AI), which contained hallucinated cases and other irregularities. In an apology letter, counsel explained that it was an unintentional mistake. The Court noted the apology letter but indicated that "there must be consequences to such a serious breach […] of professional conduct". The Court ordered solicitor-client costs partly payable by the lawyer in his personal capacity.
Arora v Canadian National Railway, 2026 FC 82: A self-represented litigant filed a factum that included erroneous legal propositions and cited hallucinated cases created through generative AI. The passages of law were described as “categorically false”. The litigant did not include the required declaration in his written submissions. The Court noted that the “failure to declare the use of AI and, of even greater importance, his manifest failure to verify the information in his written representations, is not a harmless endeavor. Presenting erroneous AI-generated content to the Court can mislead the Court, waste scarce judicial resources, put a litigant’s case at risk, cause reputational damage to a litigant and lead to sanctions.” The Court ordered the litigant to carefully review the Court’s direction on AI use and to verify any AI-generated information in his record, failing which he faced sanctions for non-compliance.
DJ v SN, 2025 ABCA 383: The appellant referred to three authorities that did not exist. She admitted that these were generated by AI. The Court noted that the Alberta Courts’ Notice to the Public and Legal Profession on use of LLMs applied to both self-represented litigants and to lawyers. The Court awarded $500 in costs based on the specific circumstances of the case but warned that “[s]elf represented litigants can expect more substantial penalties to be imposed in future cases should they fail to comply with the Notice.”
Lloyd's Register Canada Ltd v Munchang Choi, 2025 FC 1233: The self-represented respondent filed a factum that included a hallucinated case citation without a declaration of AI use. The respondent claimed that it was a mistake and sought latitude based on his self-represented status and his mental health. The Court found the explanation "unsatisfactory" and one that "cannot be accepted". The Court ordered removal of the offending motion record and ordered $500 in costs payable forthwith in any event of the cause.
Ko v Li, 2025 ONSC 2766 and 2025 ONSC 2965: Counsel filed a factum that included non-existent or fake precedent court cases. The Court could not find reference to the cited cases, and counsel was "unable to advise whether her factum had been prepared using generative artificial intelligence or whether the cases she listed in her factum and relied upon orally were 'hallucinations' fabricated by an AI platform". The Court ordered counsel to show cause why she should not be held in contempt of court, noting that it "must quickly and firmly make clear that, regardless of technology, lawyers cannot rely on non-existent authorities or cases that say the opposite of what is submitted". The Court observed that the error was not in delegating the factum or using generative AI, but arose when counsel signed, delivered, and used the factum without ensuring that the cases were authentic and supported the legal arguments. Counsel accepted responsibility, explaining that the factum was prepared by staff using ChatGPT. She undertook to complete CPD and legal training, and reviewed her factum to remove all fake citations. On the basis of the publicity generated by the case and the commitments made by counsel after the Court's show cause order, the Court declined to make a finding of contempt. Counsel was prohibited from billing the client for the erroneous factum.
Specter Aviation Limited c Laprade, 2025 QCCS 3521: The defendant, a self-represented litigant, made written submissions that included eight alleged instances of non-existent citations, irrelevant references, or unsupported conclusions. When confronted with these issues, the litigant did not retract his arguments. The Court noted many previous public statements by judges cautioning that reliance on AI without checking the work is a serious breach of conduct, regardless of whether it is intentional or a result of negligence. While the Court recognized the litigant's self-represented status, this did not excuse the seriousness of filing such submissions with the Court. Citing the need for deterrence, the Court ordered the litigant to pay $5,000 in costs.
Zhang v Chen, 2024 BCSC 285: A lawyer filed an application record containing fictitious cases generated by ChatGPT, which necessitated opposing counsel spending time and effort to hire legal researchers to look for the cases. The Court noted that “Citing fake cases in court filings and other materials handed up to the court is an abuse of process and is tantamount to making a false statement to the court.” However, based on an affidavit filed by counsel, the Court accepted that the lawyer was “naive to the risks” of using AI tools, in this case, ChatGPT. The Court noted the reports to date of errors and hallucinations made by large language models. The Court declined to order special costs, but awarded costs against the lawyer in her personal capacity. Additionally, counsel was required to review all her files before the Court.
Industria de Diseño Textil, SA v Sara Ghassai, 2024 TMOB 150 and Monster Energy Company v Pacific Smoke International Inc., 2024 TMOB 211: In two separate, seemingly unrelated cases before the Trademarks Opposition Board, the applicants (or their agents) filed written submissions relying on hallucinated cases, including one citation in common to a TMOB decision that did not exist. (It is possible that the two applicants, represented by different agents, were using the same LLM, resulting in the same error.) The TMOB reminded the applicants that, whether accidental or deliberate, relying on false citations is a serious matter. The citations were disregarded by the Board.
Judicial and Administrative Use of AI
Entreprises Bertrand Roberge ltée c Giroux, 2025 QCCS 4157: While this case was not about AI per se, the defendants allege on appeal of a $120 million judgment that the judge made extensive use of AI, including citations to earlier cases that do not exist and "verbatim" quotations from testimony that was never given. The alleged hallucinated citations included cases dealing with entirely different areas of law than the propositions for which they were cited. As of March 2026, the case is pending before the Quebec Court of Appeal.
Re Kémy Adé: In an immigration matter, an applicant was refused permanent residency. The Immigration Department's refusal letter included a disclaimer disclosing the use of generative AI to support the processing of refusals, though it also noted that the content was verified by an officer and that AI was not used to make or recommend a decision. The applicant alleges that the refusal letter contained hallucinations, including the job description and other facts relied on by the Immigration Department. The applicant has sought reconsideration.
Official Sources on the Use of Generative AI
Federal Court: Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence
Federal Court: Notice to the Parties and the Profession on The Use of Artificial Intelligence in Court Proceedings
Ontario Superior Court of Justice: Practice Directions on the Responsible Use of Artificial Intelligence in Court Proceedings
Alberta Courts: Notice to the Public and Legal Profession: Ensuring the Integrity of Court Submissions When Using Large Language Models
Trademarks Opposition Board: Use of AI in proceedings before the Trademarks Opposition Board
Law Society of Ontario: Licensee’s use of generative artificial intelligence
College of Patent and Trademark Agents: Generative Artificial Intelligence (GenAI) in Patent and Trademark Agent Practices – Ethical and Practical Considerations
Government of Canada, The Artificial Intelligence and Data Act (AIDA) – Companion document, January 2025
Government of Canada, Voluntary Code of Conduct on the Responsible Development and Management of Advanced Generative AI Systems, September 2023