
Explicit AI-Generated Images of Taylor Swift Circulate; Can She Sue for Defamation?

Credit: Ronald S Woan | Wikimedia Commons


As proof of why we can’t have nice things, dozens of sexually explicit, AI-generated deepfake images of Taylor Swift flooded X (f.k.a. Twitter) this past week. The images first appeared on the AI celebrity porn website Celeb Jihad on January 15th but quickly made their way to X. Though her fanbase fervently mobilized to report the images and launch the counteroffensive hashtag #ProtectTaylorSwift, the photos had already spread to tens of millions of users by the time they were taken down.


In response to the abusive, sexually explicit images, X’s official @Safety account posted on Saturday:

“Posting Non-Consensual Nudity (NCN) images is strictly prohibited on X and we have a zero-tolerance policy towards such content. Our teams are actively removing all identified images and taking appropriate actions against the accounts responsible for posting them. We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed. We're committed to maintaining a safe and respectful environment for all users.”

X has also made Taylor Swift’s name unsearchable on the platform; searches for her name return a message reading “Something went wrong. Try reloading.”


However, this comes just over a year after reports surfaced that Elon Musk had drastically reduced X’s content moderation team, cutting at least 3,000 contract workers tasked with tracking hate speech and battling misinformation. Meanwhile, Musk’s recent promise to hire 100 full-time content moderators ahead of the Senate Judiciary Committee hearing on child safety online has proven to be too little, too late.


Jeffrey R. Dudas, Ph.D., professor of political science at the University of Connecticut, stated that "this sort of sexualization is unfortunately common as a means of humiliating or otherwise attempting to discipline high-profile women." Equality Now Digital Law & Rights Advisor Amanda Manyame echoed this sentiment, stating, “Deepfake image-based sexual abuse mirrors the historic patterns of sexual exploitation and abuse experienced by women. Sexual exploitation and abuse in the physical and digital realms operate together in a continuum. They are a part of the same structure of violence rooted in gender-based inequality and systemic misogyny that perpetuates women's subordination in society.”


Rise in Explicit AI-Generated Images.

And unfortunately, Taylor Swift is not the first victim of AI-generated explicit images, and she will not be the last. In 2018, Scarlett Johansson became one of the first prominent victims of this abuse, lamenting that “the fact is that trying to protect yourself from the internet and its depravity is basically a lost cause.” Since then, AI deepfakes created for pornographic purposes have increased drastically, targeting even ordinary individuals. Today, pornography accounts for an estimated 96% of deepfakes online.


The White House responded to this incident, stating that the images are “alarming” and that the president is committed to reducing “the risk of fake AI images.” These images have also brought attention to the “Preventing Deepfakes of Intimate Images Act,” authored by Representative Joseph Morelle (D-NY) in May 2023, and reintroduced this week. 


Meanwhile, the public is calling on Swift’s team to pursue the issue in court. Brittany Spanos, a senior writer at Rolling Stone who teaches a course on Swift at New York University, stated, “This could be a huge deal if she really does pursue it to court.” Similarly, Sarah Klein, an attorney at the California-based firm Manly, Stewart & Finaldi, is among those backing a lawsuit, stating that what happened to Swift "should never happen to any woman. It is abuse, plain and simple. Taylor should definitely take legal action." Swift is no stranger to the courtroom: amidst the #MeToo movement, the singer countersued a former radio DJ for battery and sexual assault, seeking a symbolic $1 in damages. Her attorney called the award “a single symbolic dollar, the value of which is immeasurable to all women in this situation.”


Though Taylor Swift has been an advocate against sexual exploitation, her team has yet to comment on whether it will pursue legal action over the deepfakes. But to many fans, Swift’s words in her song “The Man” resonate this week more than ever: “I'm so sick of them coming at me again, 'cause if I was a man, then I'd be the man.”


Potential Defamation Suit.


If Swift and her team decide to pursue a defamation suit over the photos, they may be facing an uphill battle. Though defamation is governed by state law rather than federal law, a prima facie case generally requires showing that (1) the defendant made a false statement of fact that was of and concerning the plaintiff, (2) the defendant published the statement to a third party, and (3) the statement injured the plaintiff’s reputation. The plaintiff must also prove that the defendant acted with the requisite degree of fault, a higher bar when the plaintiff is a public figure.


With regard to potentially defamatory statements online, whether a statement is fact or opinion can be assessed using the factors in Bauer v. Brinkman: the precision and strength of the statement’s meaning, whether it is objectively capable of being proven true or false, and the social context surrounding the statement, including where it was posted. Misleading or doctored photos can be considered statements of fact, and because these photos are AI-generated, it is objectively provable that they are false.


Credit: Eva Rinaldi | Wikimedia Commons


With Swift as the central focus of the images, the requirement that the statement be “of and concerning the plaintiff” would not be at issue. Similarly, posting the photos online satisfies the requirement that the statement be published to a third party. The injury to Swift’s big reputation would likely be a central focus of any defamation suit she files. Her fanbase encompasses a massive portion of the US population, and many of her fans are young, including minors. With so many underage fans, sexually explicit images of Swift circulating the internet can certainly damage her reputation, and an argument from Swift’s team framing the case as an effort to protect young fans from sexually explicit content would likely resonate with a judge.


Because Swift is undoubtedly a public figure, her legal team would have to prove that the pictures were created and posted with actual malice. In New York Times v. Sullivan, the Supreme Court defined actual malice as acting “with knowledge that it was false or with reckless disregard of whether it was false or not.” Because the images were AI-generated, there is a strong argument that the creator(s) knew they were false when they generated and posted them. So although Swift’s status as a public figure raises the bar, actual malice would likely not be among the major difficulties her team faces in court.


To make a claim for defamation over these generated images, Swift would first have to identify who created and posted them. Section 230 of the Communications Decency Act shields social media platforms such as X from liability for user posts because the platforms do not generate the content of those posts. To have any hope of actual recourse, Swift’s team would have to identify the original poster and name them as a defendant. Without that, Swift may have a difficult time seeking recovery in the courts for defamation; working directly with the platforms to remove the content may be a more practical approach.


AI Adds Complexities and Mirrors Old Issues with Defamation Claims.


The biggest hurdle for Swift’s team in a potential defamation case is likely to stem from the novelty of the issue. While generative AI has existed for decades, its recent surge in popularity has made it a far more powerful and accessible tool than ever before. As the technology advances, AI-generated images become harder to distinguish from real ones, making it easier to fool the public with generated photos. Convincing fakes are easier than ever to produce, and courts must discern what statements those images make for purposes of defamation claims. Still, doctored images have been used to attack and deceive others for far longer than AI has existed, repeatedly coming back like a ’90s trend. More accessible tools may make these abuses more common, but they do not change the fundamental problem of spreading fake images or require significant modification of the legal systems already in place.


While Swift may face hurdles in bringing a defamation suit, it is only one avenue of recourse for this bad blood. X’s swift response to the spread of the images shows that the platform is willing to moderate the content and act to keep it from spreading, giving Swift some initial protection already.


*The views expressed in this article do not represent the views of Santa Clara University.

