When the Algorithm Gets Your Name Wrong: AI Defamation, Fake Doctors, and the Emerging Duty of Care
By Steven C. Fraser, Esq. | FL Bar No. 625825 | DC Bar No. 460026
Two lawsuits filed in early 2026 are doing something courts have been slow to accomplish: forcing a direct confrontation between AI-generated content and legal accountability for the harm it causes.
The cases are different in character. One involves a celebrated musician whose reputation was destroyed by a confused algorithm. The other involves a chatbot that told users it was a licensed psychiatrist. But read together, they sketch the contours of a liability doctrine that is forming right now — and they deserve the attention of anyone who deploys AI, relies on it, or finds themselves subject to what it says.
The MacIsaac Problem: When AI Confuses Two People With the Same Name
JUNO Award-winning Cape Breton fiddler Ashley MacIsaac is not a convicted sex offender. He is not listed on any sex offender registry. He has not been convicted of any crime related to children or sexual assault.
Google's AI Overview said otherwise.
According to a statement of claim filed in February with the Ontario Superior Court of Justice, Google's AI Overview feature — the AI-generated summary that appears prominently at the top of search results — falsely identified MacIsaac as a convicted sex offender, attributing to him convictions and registry listings that belong to an entirely different individual who shares the MacIsaac surname in Atlantic Canada.
The real-world consequence was immediate. A concert in Shubenacadie, Nova Scotia, was cancelled. MacIsaac is seeking $1.5 million in damages.
The legal theory is worth examining closely. The filing alleges not just that Google published false information, but that the AI Overview constitutes a defectively designed product — that Google, as its creator and operator, "knew, or ought to have known, that the AI Overview was imperfect, and could return information that was untrue."
That framing matters. It moves the analysis from publication liability — which has significant Section 230 complications in the U.S. — toward product liability, where the question is whether the design of the system itself was unreasonably dangerous. Whether Canadian courts accept that theory will be closely watched. The Ontario court is not bound by U.S. precedent, and its analysis of AI "defective design" may influence how American courts approach similar claims going forward.
The Character AI Case: Practicing Medicine Without a License
A few months later, on May 1, the Pennsylvania Department of State filed suit against Character Technologies in Commonwealth Court, alleging that one of its AI chatbots was engaged in the unauthorized practice of medicine.
A state investigator encountered a chatbot named "Emilie" that described itself as a doctor of psychiatry, claimed to be licensed in Pennsylvania, and provided a license number — which turned out to be invalid. The Commonwealth's position: regardless of disclaimers buried in the platform, when an AI holds itself out as a licensed medical professional and provides psychiatric guidance, it is practicing medicine without a license, and the company deploying it is liable.
Character AI's response was predictable: these are fictional characters intended for entertainment and roleplaying, and the disclaimers make that clear.
Governor Shapiro framed the state's position succinctly: people should not be misled into believing they're receiving advice from a licensed medical professional. That's a consumer protection argument as much as a licensing one, and it sidesteps the "it's just entertainment" defense by focusing on the reasonable perception of the user who doesn't read the fine print.
What These Cases Have in Common — And Where They Diverge
Both cases reject the notion that AI outputs are consequence-free. Both seek to hold the company that builds and deploys the system accountable. But they're testing different legal theories:
MacIsaac v. Google is fundamentally a reputation case — the harm is what the AI said about a specific person. The novel argument is defective design: not that Google is a publisher who posted false content, but that the system itself was engineered in a way that made this kind of error foreseeable and insufficiently guarded against.
Pennsylvania v. Character Technologies is a regulatory case — the harm is what the AI claimed to be. The state isn't primarily suing for damages to an individual; it's enforcing a licensing regime and asserting that an AI cannot do what a licensed professional does, regardless of how the company characterizes the interaction.
The "Reasonable User" Problem
American courts are still developing a working standard for AI liability, and the tension over what the "reasonable user" should expect is becoming visible.
In May 2025, a Georgia court granted summary judgment to OpenAI in a defamation case, reasoning that a reasonable user would understand that ChatGPT might generate false information — that this is, in effect, a known limitation of the medium.
The MacIsaac case challenges that framework directly. Google's AI Overview doesn't appear in a context where users expect experimental outputs. It appears at the top of a search results page, one of the most authoritative and trusted interfaces that most people use every day. When Google's interface presents a summary as the answer to a search query, it is not presenting itself as a probabilistic language model that might be wrong. It is presenting itself as Google.
The reasonable user expectation for ChatGPT and the reasonable user expectation for a Google search result are not the same thing. A court that conflates them is importing a standard from one context into a factually distinct one.
If the MacIsaac case, or subsequent American litigation on similar facts, produces a ruling that AI Overviews in search results must meet a higher standard of factual accuracy than conversational AI tools, the liability implications for any company whose AI surfaces in a high-trust context would be substantial.
What This Means for Practitioners and the Businesses They Advise
The doctrine is forming. Here is what seems increasingly clear:
Deployment context matters. An AI embedded in a search result carries different expectations than a disclosed roleplay chatbot. Courts and regulators are beginning to recognize this distinction.
"The AI might be wrong" disclaimers have limits. They may not protect against product liability claims, and they are unlikely to satisfy regulators when an AI is actively misrepresenting its credentials to users.
Defective design is a viable theory. If a system is designed in a way that makes foreseeable errors likely — confusing people who share surnames, generating plausible-sounding but false credentials — the company building it may face liability that Section 230 and "it's just AI" defenses cannot fully deflect.
Real-world harm is the trigger. Courts move when there are concrete damages — a cancelled concert, a patient who received psychiatric "advice" from a program with no license. Abstract concerns about AI accuracy don't generate lawsuits. Identifiable, documented harm does.
These cases are early. The doctrine they produce will take years to mature. But the direction of travel is becoming visible: AI outputs that harm real people, in contexts where those people had reasonable grounds to trust the output, are not going to remain outside the reach of tort law.
Steven C. Fraser has practiced law in Florida and the District of Columbia since 1998, with a practice spanning bankruptcy, consumer protection, executive employment, criminal defense, estate planning, and technology-adjacent litigation. He is a Florida Supreme Court Certified Mediator licensed across all 20 Florida circuits and DC Superior Court.
📞 877-862-7188 · 904-600-0838 · 202-417-8128 📅 Schedule a Consultation
FL Bar No. 625825 | DC Bar No. 460026