AI Lawsuits Registry

As AI's capabilities have exploded, so have the legal challenges against its creators and users. Everyone from tech giants to everyday users is navigating a potential legal minefield: a wild west of intellectual property debates and privacy concerns, with lawsuits that could redefine what it means to create and own content in the digital age. The cases we've seen, each a snapshot of this tension, are just the start. As we navigate this terrain, the outcomes will signal not just the future of AI, but also the values we carry forward into this brave new tech-laden world.

Read below to learn about some of the most significant lawsuits filed around generative AI thus far: the concerns, the legal claims, and, in some cases, the counter-arguments. As these cases are resolved, they will set precedent for the lawsuits to come.

The count of AI-related lawsuits has already hit triple digits. Rather than summarize them all here, we recommend a useful resource compiled by George Washington University: the AI Litigation Database. Some of the most commonly cited issues in these lawsuits are transparency, copyright infringement, and accountability.

Note: If you are looking for information on AI Registry capabilities, please see our Solutions page for more information.

Concord Music Group, Inc. v. Anthropic PBC

Issue(s): Copyright infringement
Date: Oct 2023

A consortium of music publishers, including industry heavyweights like Universal Music and Concord Music Group, has taken Anthropic PBC, an AI startup, to court. The crux of the lawsuit? Allegations of rampant, systematic misuse of copyrighted song lyrics by Anthropic in its AI model development. Filed in the Middle District of Tennessee, the suit accuses Anthropic of illegally copying and distributing a vast array of copyrighted lyrics, among other works, without permission. The publishers, embracing AI's potential but insisting on ethical usage, argue that Anthropic's practices are a clear violation of copyright principles. Their grievance highlights the unauthorized use of lyrics from a range of artists, from the Rolling Stones to Katy Perry, in Anthropic's AI systems. The plaintiffs are demanding compliance with established copyright laws, akin to the standards followed by other tech entities, while pressing claims of copyright infringement, including under the DMCA. This lawsuit underscores the growing tension between innovative tech developments and the rigorous enforcement of intellectual property rights in the digital age.

Authors Guild, et al. v. OpenAI, Inc.

Issue(s): Copyright infringement
Date: Sep 2023

The Authors Guild and celebrated writers like John Grisham and George R.R. Martin are taking OpenAI to court. Their accusation? OpenAI's large-scale, unauthorized use of their works in its language models, an action they claim undermines the rights and livelihoods of fiction writers everywhere. The lawsuit, filed in the Southern District of New York, alleges that OpenAI's technology, which generates human-like text responses, heavily relies on datasets that include the texts of these authors' books without their consent. This lawsuit pivots on the argument that OpenAI's models not only replicate but can also potentially replace original creative content, posing a direct threat to fiction writers' financial well-being. The models are capable of producing derivative works – outputs that mimic, summarize, or paraphrase the original works, thereby diluting their market value. The plaintiffs are pursuing claims of direct, vicarious, and contributory copyright infringement, highlighting a critical debate at the intersection of AI technology and intellectual property rights in the creative industry.

Chabon v. OpenAI, Inc.

Issue(s): Copyright infringement
Date: Sep 2023

A group of renowned authors, including Michael Chabon and Ayelet Waldman, have launched a legal battle against OpenAI, representing themselves and other authors whose copyrights they claim have been infringed upon. The lawsuit, filed in Northern California's federal court, centers on the allegation that OpenAI has used their copyrighted works to train its GPT models, which power the ChatGPT product. The crux of the authors' complaint is that ChatGPT, when prompted, can produce not just summaries but also intricate analyses of themes from their works – a capability they argue could only stem from the use of their copyrighted material in training the AI models. They contend that they never gave consent for their works to be used in this manner. This lawsuit brings to the forefront issues about AI and copyright law, especially highlighting the tension between technological advancement and the protection of intellectual property, as the plaintiffs underscore the commercial gains OpenAI allegedly enjoys from this purportedly unauthorized use of their creative content.

J.L., C.B., K.S., et al., v. Google LLC

Issue(s): Unfair competition, negligence, privacy invasion, copyright infringement, and other causes of action
Date: Jul 2023

A group of individuals, choosing to remain anonymous (instead using the initials J.L., C.B., K.S., P.M., N.G., R.F., J.D., and G.R.), have filed a case against Google in a California federal court. The core of their allegation? Google's purported practice of appropriating web-scraped data and private user information from its own products to develop AI technologies, including its Bard chatbot. The plaintiffs claim that Google has been secretly harvesting a wide range of their data – personal and professional details, creative works, photographs, and even emails – without their knowledge or consent. This action, they argue, constitutes a series of legal violations, including unfair competition, negligence, privacy invasion, and copyright infringement. The case puts a spotlight on the increasingly contentious issue of data privacy and ownership in the era of advanced AI, challenging the ethical boundaries of tech giants in their pursuit of innovation.

Silverman, et al. v. OpenAI, Inc.

Issue(s): Copyright infringement, breaches of the Digital Millennium Copyright Act, unjust enrichment, unfair competition laws, and negligence
Date: Jul 2023

In a new legal challenge mirroring an earlier one by authors Paul Tremblay and Mona Awad, celebrities including comedian Sarah Silverman, alongside writers Christopher Golden and Richard Kadrey, are suing OpenAI, the developer of ChatGPT. Their lawsuit alleges a range of violations, including direct and vicarious copyright infringement, breaches of the Digital Millennium Copyright Act, unjust enrichment, as well as violations of both California and common law unfair competition laws, and negligence. The basis of their lawsuit is straightforward yet significant: the plaintiffs, all authors, have not consented to the use of their copyrighted books for training OpenAI's ChatGPT. Despite this, they claim their creative works were used to develop the AI technology. This case adds to the growing legal scrutiny around AI development practices, particularly how tech companies use copyrighted material in their AI models, underscoring the tension between innovation in AI and the protection of intellectual property rights.

Kadrey, et al. v. Meta Platforms, Inc.

Issue(s): Copyright infringement
Date: Jul 2023

Similar to but separate from Silverman et al v. OpenAI, Inc., the same group of plaintiffs, including Sarah Silverman, Christopher Golden, and Richard Kadrey, have filed a lawsuit against Meta Platforms in a Northern California federal court. The suit, filed on July 7, accuses the tech giant, known for Facebook and Instagram, of violating copyright laws through its development and use of LLaMA, a set of large language models. The plaintiffs' central allegation is that Meta Platforms incorporated their copyrighted books into LLaMA's training datasets. These books, according to the suit, were part of a compilation by EleutherAI, a research organization, which Meta allegedly copied and used for LLaMA's development. This lawsuit highlights the ongoing legal debates surrounding the use of copyrighted material in AI model training, placing a spotlight on how major tech companies navigate intellectual property laws in the rapidly evolving field of artificial intelligence.

Tremblay v. OpenAI, Inc.

Issue(s): Copyright infringement, violations of the Digital Millennium Copyright Act, and unfair competition
Date: Jun 2023

Authors Paul Tremblay and Mona Awad have filed a lawsuit in a Northern California federal court against OpenAI. Dated June 28, their complaint accuses OpenAI of using their copyrighted texts without permission to train the large language model behind ChatGPT, its generative AI chatbot. This action, they claim, constitutes direct copyright infringement, breaches of the Digital Millennium Copyright Act, and unfair competition. The authors contend that OpenAI intentionally programmed ChatGPT to reproduce parts or summaries of their copyrighted works without giving due credit. They argue that the company is profiting unjustly by developing a commercial product built on these unattributed reproductions. This lawsuit adds another layer to the growing discourse on the ethics and legality of AI development, especially regarding the use of copyrighted content in training AI systems.

Plaintiffs P.M., K.S., et al. v. OpenAI LP, et al. 

Issue(s): Violations of federal law: the Electronic Communications Privacy Act and the Computer Fraud and Abuse Act; violations of state law: California's Invasion of Privacy Act and Unfair Competition Law, Illinois's Biometric Information Privacy Act and Consumer Fraud and Deceptive Business Practices Act, and New York General Business Law § 349; plus negligence, invasion of privacy, intrusion upon seclusion, larceny/receipt of stolen property, conversion, unjust enrichment, and failure-to-warn causes of action
Date: Jun 2023

UPDATED (Sept. 15, 2023): The plaintiffs moved to voluntarily dismiss their case against OpenAI and Microsoft without prejudice, which may indicate that the parties reached an agreement out of court. In a significant legal challenge, over a dozen young individuals have brought a lawsuit against OpenAI and its backer Microsoft, alleging the improper use of personal data in developing AI products like ChatGPT, Dall-E, and Vall-E. Filed on June 28, the complaint accuses the companies of extracting vast quantities of personal data, including sensitive information from minors, without informed consent or knowledge, to enhance their AI programs. The lawsuit charges OpenAI with multiple legal violations, including breaches of The Electronic Communications Privacy Act, The Computer Fraud and Abuse Act, along with state-specific laws like California's Invasion of Privacy Act and Unfair Competition law, Illinois's Biometric Information Privacy Act, and New York General Business Law s. 349 (prohibiting deceptive acts and practices). Additionally, the plaintiffs claim negligence, invasion of privacy, intrusion upon seclusion, larceny/receipt of stolen property, conversion, unjust enrichment, and failure to warn. This case underscores growing concerns about data privacy and the ethical implications of AI development, particularly involving vulnerable groups like minors.

Walters v. OpenAI LLC

Issue(s): Libel
Date: Jun 2023

Mark Walters has taken legal action against OpenAI, claiming that ChatGPT caused him reputational harm by providing a journalist with incorrect information. According to Walters, the misinformation led to a news article by journalist Fred Riehl incorrectly stating that Walters was implicated in a federal civil rights lawsuit for fraud and embezzlement. Walters contends that he was neither a plaintiff nor a defendant in the case, and alleges that every fact about him presented in the ChatGPT-generated summary was false, constituting libel. This case adds a new dimension to the debate over AI and misinformation, highlighting the potential legal consequences when AI-generated content is erroneous.

Young v. NeoCortext, Inc.

Issue(s): California’s right of publicity law
Date: Apr 2023

Reality TV personality Kyland Young has launched a proposed class action lawsuit against the creators of the "deep fake" app Reface, NeoCortext, Inc. Young alleges that the app infringes on California's right of publicity law by allowing users to digitally graft their faces onto those of celebrities without obtaining consent from the depicted individuals. Filed in April in a California federal court, Young's complaint accuses NeoCortext of profiting from the unauthorized use of his and others' likenesses to promote their paid service. NeoCortext, on the other hand, is seeking dismissal of the case, arguing that Young's claim of a right of publicity infringement doesn't hold up. Furthermore, they contend that any such claim would be overridden by the protections of the Copyright Act and the First Amendment. This case puts a spotlight on the legal tensions surrounding deep fake technology and the balance between innovation, privacy, and the rights of public figures.

Flora, et al., v. Prisma Labs, Inc.

Issue(s): Illinois data privacy law violation
Date: Feb 2023

UPDATED (Aug. 8, 2023): A Northern District of California judge ruled in favor of Prisma Labs, pushing the dispute towards arbitration. Prisma Labs, the creator of the Lensa AI image app, is facing a class action suit over its handling of user biometric data (namely facial geometry) used for avatar creation. The plaintiffs allege that Prisma didn't adequately inform users about the storage and destruction of the biometric data the app captures, a violation of Illinois data privacy law.

Getty Images (US), Inc. v. Stability AI, Inc.

Issue(s): Copyright and trademark infringement and dilution, unfair competition, deceptive trade practices
Date: Feb 2023

In parallel to their London lawsuit (see below), Getty Images has filed a suit in the United States, specifically in the District Court for the District of Delaware, encompassing a series of allegations centered around copyright and trademark law violations. These U.S. allegations accuse Stability AI of direct copyright infringement, particularly pointing to the alleged use of over 12 million copyrighted images from Getty's collection to train Stability AI's models. Getty also accuses Stability AI of removing or altering copyright management information, which is a violation under 17 U.S.C. § 1202. Getty claims that Stability AI's outputs sometimes bear a modified Getty watermark, potentially causing brand confusion and suggesting a false association with Getty. The complaint also mentions issues of trademark infringement and dilution, pointing to the varied quality of images generated by Stability AI's model, some of which are described as bizarre or grotesque, possibly tarnishing Getty's brand reputation. Beyond the copyright and trademark claims, the suit also includes claims of unfair competition and deceptive trade practices. Stability AI, in response, has filed a motion to dismiss, challenging the jurisdiction of the Delaware court. They argue that Getty's complaint does not establish that any alleged infringement occurred within Delaware, and they suggest that the training of the Stable Diffusion model happened in Europe, primarily England and Germany, as indicated by references to LAION, a German entity mentioned in Getty's complaint. Stability AI also contends that Getty has not demonstrated that Stability AI Ltd. has engaged in any business transactions in Delaware that would subject them to the state's jurisdiction. This case highlights the complex legal challenges arising in the era of AI-generated content and the cross-border nature of digital technology and intellectual property law.

Getty Images (London), Inc. v. Stability AI, Inc.

Issue(s): Copyright and Trademark infringement
Date: Jan 2023

Shortly after the Andersen class-action lawsuit was initiated (see below), Getty Images launched a legal battle against Stability AI in London's High Court of Justice. While the specific details of the complaint have not been disclosed publicly as of this writing, the essence of Getty's claim is that Stability AI unlawfully replicated millions of Getty's copyrighted images and metadata. A notable point of contention is the claim that images generated by users of Stability AI's Stable Diffusion carry a watermark similar to that of Getty Images. Getty argues that in some cases, this watermark was altered and used on synthetic images that could potentially harm the company's reputation, especially if those images are of a bizarre or distasteful nature. This set of claims underscores the legal challenges posed by AI-generated content and the protection of intellectual property in the digital age.

Andersen, et al. v. Stability AI LTD., et al.

Issue(s): Copyright infringement, unfair competition, and right-of-publicity
Date: Jan 2023

A group of artists filed a class-action lawsuit against Stability AI, DeviantArt and Midjourney over copyright infringement, unfair competition, and the right of publicity. They are accusing these companies of using their artwork without permission to train artificial intelligence image generators like Stable Diffusion, resulting in the creation of what the plaintiffs deem "infringing derivative works." The dispute hinges on whether AI-generated images, crafted after training models with existing artwork, constitute new creations or are unauthorized derivations of the original works. The artists argue that their copyrighted art was appropriated illegally to develop AI systems capable of producing similar images, which they say violates their rights and undermines their ability to profit from their own creations. The complaint also involves an allegation of violating 17 U.S.C. § 1202, which relates to the alleged removal or manipulation of copyright management information, possibly obscuring the original source of the artwork used to train the AI. Plaintiffs claim unfair competition as well, alongside violations of their statutory and common law right of publicity, especially concerning the AI's capability to generate works "in the style of" specific artists, which they say uses their personal brand without authorization. A breach of contract claim is specifically aimed at DeviantArt, accusing it of not adhering to its own terms of service and privacy statement, presumably in the way it has handled user content and privacy. Stability AI contested the claims, asserting that training their Stable Diffusion AI model with billions of publicly accessible internet images doesn't equate to copying or retaining images for redistribution.
Stability AI maintains that the model does not store any of the images it was trained on, suggesting that the process of training an AI model is distinct from producing and distributing copies of the works. DeviantArt, recognized for its online art community and AI-generated art tool DreamUp, sought to have the charges dismissed, particularly targeting the right-of-publicity claims. They argue that the potential for DreamUp to create art should be protected as free speech, thus falling under the protective umbrella of the California anti-SLAPP statute, which is designed to prevent lawsuits that may be intended to chill lawful expression. Interesting note: this lawsuit was filed by the same legal team that filed the Copilot class-action lawsuit noted below. Unlike the Copilot case, this lawsuit specifically alleges copyright infringement. The legal action further includes accusations of vicarious copyright infringement, which hold the defendants responsible for the copyright-infringing activities of their tools' users. This case stands as another significant challenge to the legal boundaries and ethical considerations regarding the use of existing creative works in the burgeoning field of AI-driven content creation. It touches upon various crucial aspects, such as the integrity of copyright management, the rights of artists over the use of their style and brand, and the contractual obligations of platforms that facilitate the creation of AI-generated art.

J. Doe 1, et al v. GitHub Inc., et al

Issue(s): Breach of contract, removal of copyright management information, privacy-related claims, wrongful interference with the plaintiffs' business interests and expectations, and claims of fraud, false designation of origin, unjust enrichment, and unfair competition
Date: Nov 2022

The pioneering lawsuit in the realm of generative AI involves GitHub's Copilot tool, which is co-developed by GitHub, Microsoft, and OpenAI. Unlike many subsequent cases that are heavily rooted in copyright infringement claims, this class-action lawsuit emphasizes breach of contract and privacy issues. The plaintiffs contend that Copilot, by offering code suggestions drawn from public software repositories on GitHub, fails to adhere to the stipulations of the open-source licenses for that code. These licenses often require specific actions like proper attribution, inclusion of copyright notices, the reproduction of license terms, and in some cases, the expectation that derivative works will also be open source—all of which the plaintiffs claim are not being met by Copilot. Additionally, the lawsuit alleges violations of copyright law under 17 U.S.C. § 1202, focusing on the supposed removal of copyright management information, which typically serves to protect and inform about the ownership and terms of use of copyrighted content. The complaint doesn't stop there; it also accuses GitHub of improperly handling "personal data" and "personal information," terms that are increasingly becoming significant in the era of data privacy and security. There's a claim of wrongful interference with plaintiffs' business interests and expectations, likely relating to the way Copilot could be impacting their commercial activities or standing in the marketplace. Lastly, the lawsuit includes accusations of fraud, false designation of origin—which can undermine the brand or origin of the original creators—unjust enrichment, suggesting that the defendants are unfairly benefiting from the plaintiffs' work, and unfair competition, which may encompass a range of unjust business practices. 
This case sets a precedent by probing into the intricacies of how AI tools interact with user-generated content and the legal expectations around open-source contributions, highlighting the tension between innovation in AI tools and traditional notions of intellectual property rights and data privacy.

Thomson Reuters Enterprise Centre GmbH et al v. ROSS Intelligence Inc.

Issue(s): Copyright infringement
Date: May 2020

In an early generative AI-centric case, Thomson Reuters alleges that ROSS copied the entirety of its Westlaw database (after having been denied a license) to use as training data for its competing generative AI-powered legal research platform. Reuters’ complaint survived a motion to dismiss in 2021. Fast forward to the summary judgment phase, and ROSS has argued, in part, that its unauthorized copying/use of the Westlaw database amounts to fair use. Specifically, ROSS claims that it took only “unprotected ideas and facts about the text” in order to train its model; that its “purpose” in doing so was to “write entirely original and new code” for its generative AI-powered search tool; and that there is no market for the allegedly infringed Westlaw content consisting of headnotes and key numbers.
