Introduction
The art world has always been affected by the development of new technologies. For example, photography was originally not considered to be art and was thought of as a simple mechanical process. However, once it became accepted, the art world adapted and even expanded as art could then move in new, more abstract directions. With the introduction of artificial intelligence image generation, the art world is again faced with legal, moral, and artistic questions.
With the proliferation of AI art, there are questions regarding the output’s artistic merit. Some argue that the work is genuine art because it provokes emotion, which is the basis of all art. Others argue that because AI work is simply a reflection of prior work it cannot be art, and that it lacks a “soul” that comes from memory, experience, and emotion. Even with this variety of opinions, AI art generators have continued to grow in number and popularity.
AI art generators have grown rapidly in capability and can now produce complex images in any artistic style, even the style of a particular artist if prompted; the artist Greg Rutkowski’s name, for example, has been used to generate over 90,000 images. AI art generators can outpace traditional artists in sheer volume, and the larger fear is that companies will use generators to create work resembling an artist’s for a fraction of the cost. AI-generated images have already been sold at art auctions, including Portrait of Edmond de Belamy, from La Famille de Belamy, which sold for over $400,000.
Portrait of Edmond de Belamy, 2018, created by a GAN (generative adversarial network). Sold for $432,500 on 25 October 2018 at Christie’s. Image © Obvious
AI generators’ abilities have prompted concern over a wide range of issues, particularly political speech and deepfake pornography. Fake images, videos, or audio of political figures could threaten the democratic process and election stability as the line between what is “real” and what is not blurs. Deepfake pornography violates individuals’ consent and privacy, and could create a market for simulated child pornography that incorporates photos of actual children (even though purely simulated child pornography is legal under Ashcroft v. Free Speech Coal., 535 U.S. 234, 122 S. Ct. 1389, 152 L. Ed. 2d 403 (2002)).
A major legal issue is copyright, both in the training of these models and in their output. Many artists are concerned that their livelihoods will be threatened by AI art generators. Current litigation should illuminate where AI art generators stand with respect to copyright, but there are no definitive answers yet.
How it works
Current AI generators operate on a latent diffusion model. First, the model is trained on images to “learn” what objects are. Information is stored in a latent space, a highly compressed representation of image and text information that allows the model to operate without storing massive amounts of data. Next, each training image is diffused with noise (random pixel information) until the image is entirely noise. The process is then reversed, and the generator gradually recreates an image from complete noise. This process is repeated and refined until the generator can produce increasingly specific images. Finally, a user supplies a text prompt to direct the AI to generate an image based on this prior training, as sketched in the code example after the diagram below.
Diagram from the class action complaint (Compl. ¶ 72), illustrating the diffusion model: the top (blue) row shows an input image being diffused from left to right; the bottom (red) row shows the generator reversing the process from right to left.
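As a rough illustration of the two halves of that diagram, the sketch below adds noise to an image step by step (the forward pass) and then walks back from pure noise with a stand-in denoiser (the reverse pass). It is a minimal conceptual sketch, not Stable Diffusion’s actual implementation: the noise schedule, image size, and the toy_denoiser function are assumptions made purely for illustration.

```python
# Minimal, illustrative sketch of a diffusion process; not any company's real code.
import numpy as np

rng = np.random.default_rng(0)

def forward_diffusion(image, steps=1000, beta=0.02):
    """Gradually mix an image with Gaussian noise until only noise remains."""
    noisy = image.copy()
    for _ in range(steps):
        noise = rng.normal(0.0, 1.0, size=image.shape)
        # Each step keeps most of the current signal and blends in a little noise.
        noisy = np.sqrt(1.0 - beta) * noisy + np.sqrt(beta) * noise
    return noisy  # after enough steps this is statistically just noise

def reverse_diffusion(shape, denoiser, steps=1000):
    """Start from pure noise and repeatedly apply a (learned) denoiser.
    In a real system the denoiser is a neural network conditioned on the
    text prompt; here it is only a placeholder."""
    x = rng.normal(0.0, 1.0, size=shape)
    for step in reversed(range(steps)):
        x = denoiser(x, step)
    return x

def toy_denoiser(x, step):
    # Placeholder: a real model predicts and removes the noise added at each
    # step; this toy version only nudges values toward the valid pixel range.
    return np.clip(x * 0.999, -1.0, 1.0)

if __name__ == "__main__":
    image = rng.uniform(-1.0, 1.0, size=(64, 64, 3))          # stand-in "training image"
    pure_noise = forward_diffusion(image)                      # forward: image -> noise
    generated = reverse_diffusion(image.shape, toy_denoiser)   # reverse: noise -> image
    print(round(float(pure_noise.std()), 3), generated.shape)
```

In a real latent diffusion system, the denoiser is a trained neural network that operates on the compressed latent space and is conditioned on the text prompt, which is what allows a prompt to steer the reverse process toward a particular image.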
Training data can come from various sources, but the most significant batches come from LAION, a German nonprofit organization that compiles open-access datasets of image and text pairings from the internet for training purposes. The largest dataset, LAION-5B, was funded by Stability Inc. (Stability) and contains over five billion image and text pairings. Training data can also come from other large sets of image and text pairings, obtained through web crawling, licensing agreements, or, as alleged in current litigation, large-scale unauthorized use of websites such as Getty Images. As the size of training sets grows, so will the capabilities of AI generators.
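To give a concrete sense of what an “image and text pairing” is, below is a minimal, hypothetical sketch of the kind of record such datasets collect. The field names and example values are assumptions made for illustration and are not LAION’s actual schema.

```python
# Hypothetical sketch of a single image-text training pair (illustrative only).
from dataclasses import dataclass

@dataclass
class ImageTextPair:
    image_url: str  # where the crawler found the image on the public web
    caption: str    # the alt text or caption paired with the image
    width: int      # pixel dimensions recorded as metadata
    height: int

example = ImageTextPair(
    image_url="https://example.com/artwork.jpg",         # placeholder URL
    caption="an oil painting of a lighthouse at dusk",    # placeholder caption
    width=1024,
    height=768,
)

# A training pipeline downloads the image at image_url and pairs it with the
# caption; a dataset on the scale of LAION-5B is, in essence, billions of
# records of this shape.
print(example)
```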
Critics of AI art generators describe the process as “a 21st century collage tool” that does not independently create images, since the output is entirely dependent on input images that belong to copyright holders, specifically artists. Proponents, however, including a spokesperson for Stability, claim that this is a misunderstanding of how the models operate: no images are stored, so a collage is impossible.
Current Litigation
Major lawsuits have been filed against the largest AI art generator companies. A class action lawsuit was filed in 2023 in the Northern District of California. Plaintiffs Sarah Andersen, Kelly McKernan, and Karla Ortiz are suing Stability AI Ltd. and Stability AI, Inc., Midjourney, Inc., and DeviantArt, Inc. for direct and vicarious copyright infringement, violation of the Digital Millennium Copyright Act, violation of rights of publicity, breach of contract, and violation of unfair competition laws.
Getty Images has also initiated a lawsuit against Stability, claiming unfair competition and alleging that Stability’s “infringement of Getty Images’ content on a massive scale has been instrumental to its success to date.” The complaint alleges that over 12 million Getty images were used to train the Stable Diffusion model. (Getty Compl. ¶ 1). Getty also claims that a version of its watermark appears on generated images, and that “incorporation of Getty Images’ marks into low quality, unappealing, or offensive images dilutes those marks in further violation of federal and state trademark laws.” (Getty Compl. ¶ 8).
Image from the Getty complaint (Getty Compl. ¶ 52): left, an actual Getty image; right, a generated image bearing a version of the Getty watermark.
Both lawsuits claim the AI generator companies have violated the plaintiffs’ copyrights. In the class action, the plaintiffs claim a violation of their reproduction, derivative, distribution, performance, and display rights. In the Getty case, Getty claims its derivative rights have been violated.
Fair use or infringement?
Does the training of AI art generators constitute fair use? If so, the defendants will have an affirmative defense against the copyright infringement claims. Fair use determinations are highly fact-specific, so the facts of each case will be central to the outcome of the litigation.
According to Section 107 of the Copyright Act, fair use is not an infringement when a work is used “for purposes such as criticism, comment, news reporting, teaching (including multiple copies for classroom use), scholarship, or research” (emphasis added). The four factors to consider are the purpose and character of the use (including whether it is commercial in nature); the nature of the copyrighted work; the amount and substantiality of the portion used in relation to the copyrighted work as a whole; and the effect of the use upon the potential market for or value of the copyrighted work. Id.
Whether the outputs are considered “transformative” will be a key step in determining whether the generators qualify for fair use under the first factor. According to the Copyright Office, “transformative uses are those that add something new, with a further purpose or different character, and do not substitute for the original use of the work.” Even when based on an artist’s style (which is not copyrightable as a concept or idea), many outputs could be considered transformative because of the new elements introduced in the generated images. Another argument is that the training method itself, drawing on hundreds if not thousands of images, makes every output a unique work informed by the entirety of its training, and thus transformative because there will always be “something new” added even when the result resembles an input image.
A factor in the plaintiffs’ favor is the market effect of the AI generators. If the generators are considered fair use, the market for an artist’s work could be flooded with free works in their style, effectively eliminating that artist’s market. Defendants may respond that the artist’s original work will retain its prestige and market while generator users make work merely stylized like the artist’s, creating two separate markets. In the Getty case, the market-effect argument may be stronger because of the compounding trademark issues and the direct competition with Getty’s own paid service.
Several prior decisions illustrate fair use arguments in cases involving large amounts of copyrighted content. In Authors Guild v. Google, Inc., 804 F.3d 202 (2d Cir. 2015), the fair use defense applied even though entire copyrighted works were copied, because the copying served a transformative purpose and did not make complete versions of the works available. In the 2021 Supreme Court case Google LLC v. Oracle Am., Inc., 209 L. Ed. 2d 311, 141 S. Ct. 1183 (2021), Google’s use of preexisting computer code was held to be fair use because the code was ultimately used in a new context. The outcome of the AI cases will depend on the courts’ fair use determinations, and on whether the cases are found analogous to this prior case law or whether AI has created a new situation for courts to consider.
Can generated images be copyrighted?
The current answer seems to be no. The Copyright Office’s treatment of Zarya of the Dawn is illustrative. Zarya of the Dawn is a comic book written by Kris Kashtanova with AI-generated images. Initially, a copyright registration was granted for the whole work. Upon review, the Copyright Office rescinded the registration and reissued it only for the parts of the comic that were “the product of human authorship.”
The Copyright Office’s letter also explained that “the Office will not register works produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.” Id. This will likely be the standard going forward: the only copyrightable aspects of a work that includes AI generation are those created by a human hand. If the Copyright Office maintains this standard, AI-generated images will likely be considered products of a mechanical process and therefore not copyrightable.
The Future
The impacts of the AI litigation will be far-reaching. If the plaintiffs win, there are concerns of a chilling effect “that would just slam the doors on the research,” which could hinder further AI development by limiting the training data that can be used. On the other side, there is a push to pause AI development until its effects are better understood, such as the Future of Life Institute’s open letter, which asks for a six-month pause on AI development in order to establish safety protocols and implement government regulation.
The companies that own AI generators are moving forward in different ways. Some AI art generators are being trained specifically on licensed content, such as Adobe Firefly, which was trained on Adobe-owned images, licensed images, and public domain content. Stability AI has also said it will create an “opt-out” option for artists who do not want their content to be used; critics respond that this puts the onus on artists rather than the companies to prevent infringement. OpenAI has attempted to address certain issues by prohibiting specific outputs, such as those depicting politicians and celebrities, and by limiting nudity and gore. Legislative action is necessary to fully answer some of the questions presented, since current copyright law was not created with AI in mind.
The discussion has also divided the art world, with some fearing that artists will become obsolete and others hoping that AI can usher in a new artistic era. As with photography, will the acceptance of a new technology push art to new and unexpected places? While the copyright litigation is ongoing, the art world, like the rest of society, will have to grapple with the new reality that AI presents.