
Jacquemus and the Rise of Virtual Luxury: AI-Generated Marketing


Credit: Jacquemus Le Bambimou


Jacquemus

The French luxury label Jacquemus is known for imaginative marketing strategies that give the brand an aesthetic distinct from its competitors. The brand’s founder, Simon Porte Jacquemus, built his vision around social media as the label’s main marketing outlet. The label’s goal, to create a sense of community and strengthen human connection, is obvious to any visitor to the brand’s Instagram, which doubles as a business account and the founder’s personal account. Jacquemus’ marketing team consistently creates and plans extraordinary events and campaigns, “all of them aesthetically dominating in a way perfectly designed to be posted and go viral on social media.” Recently, the brand launched a campaign using AI-generated deepfake content to trick the viewer into thinking giant “Bambino” bags were rolling down the streets of Paris. The ad was posted on Instagram and reposted on TikTok, where it accumulated over 2 million views. “The unexpected sight immediately drew responses of wonderment and joy, followed by questions of how the brand managed to create car-sized bags and whether this was actually happening in real life.”


Credit: @Jacquemus | Instagram (video content)


Exciting and Efficient Marketing Strategy

Deepfakes are synthetic media that have been digitally manipulated to create an artificial image or video, and they are often built on recycled, copyrighted content. Such creations may constitute copyright infringement when they reuse content without the owner’s permission. Jacquemus, however, applied deepfake technology to its own content, creating a video of an event that never happened. The use of AI-generated 3D renderings to portray larger-than-life designer bags driving down a Paris street was not only an exciting idea but an efficient marketing strategy. A real-life production of such an ad, if possible at all, would have cost Jacquemus extensive time, money, and resources for an eight-second video. The viral Instagram and TikTok clip left many in awe of Jacquemus’ creation and sparked debates over whether the bags really existed. This strategy of using AI deepfakes to fabricate realistic events is quickly becoming a game-changer for marketing teams.


Fake Content and the Law

Although Jacquemus received a lot of praise for its Paris campaign, it also drew criticism from those who believed the brand wrongfully deceived its audience. Can the government prohibit deepfakes, or force people and businesses to disclose when AI-generated content is used in advertising? There is no simple answer. Federal and state governments alike are struggling to regulate content that is not real, such as deepfakes. Statutes and case law sometimes contradict one another on how fake content should be controlled. A general rule can nevertheless be synthesized: the fake content must be harmful in order to be regulated.


The issue arose in United States v. Alvarez, where a divided Supreme Court held that the First Amendment prohibits the government from regulating speech solely because it is a lie. The ruling reflects the delicate balance between protecting freedom of expression and mitigating the potential harm caused by false information. For AI-generated content, the challenge lies in discerning when the deception within deepfakes rises to harmful intent or injurious consequences that warrant legal intervention. Beyond federal case law, the Federal Trade Commission Act of 1914 prohibits unfair and deceptive acts in commerce, a prohibition that remains applicable today to AI-driven marketing.


Do deepfakes cause sufficient harm under existing law to justify regulation? If they are deemed libelous or maliciously deceptive, they could be subject to regulation under current legal standards. Black’s Law Dictionary defines libel as “a defamatory statement published through any manner or media. If intended to simply bring contempt, disrespect, hatred, or ridicule to a person or entity it is likely a civil breach of law.” Legal recourse is available to plaintiffs when a speaker knowingly spreads false information to damage someone else’s reputation.


Successful libel claims offer a potential remedy for defamatory harm done to individuals, public figures, or entities. Deepfakes, for example, have earned a reputation as a medium for libelous conduct in politics: they were recently used to create a fake video of Vladimir Putin delivering a scripted speech to America. Any message can be inserted into a realistic-looking video, which is likely to harm media consumers who cannot tell whether footage has been artificially manipulated. Because deepfakes are almost impossible to recognize as false representations, AI-generated content must be regulated or labeled as fake.


Some states have begun taking steps to address the issue by enacting legislation that restricts the distribution of AI-generated content. In California, AB 730 was passed to prohibit “materially deceptive audio or visual media showing a candidate for office…with the intent to injure the candidate’s reputation or deceive voter(s).” While this law presently focuses on political campaigns, it sets a precedent for defining materially deceptive visual media and establishing parameters for regulation, requiring a demonstration of "actual malice" for enforcement. This framework may serve as a model for future legislation governing a broader spectrum of AI-generated content, ensuring accountability and transparency in its use.


Cal. Elec. Code § 20010(a) further clarifies that "materially deceptive audio or visual media" must convincingly appear authentic to a reasonable person and significantly alter a person's understanding or impression of the expressive content when compared to unaltered content. These definitions provide a solid foundation for evaluating the deceptive potential of AI-generated media and shaping legislation to navigate the intricacies of this evolving landscape.


What's Next?

Whether to tell viewers that an advertisement was generated by AI using synthetic or deepfake technology is an ethical question businesses should consider to avoid backlash. Much like the familiar “not actual size” disclaimer, which lets consumers understand that a product has been enlarged for marketing purposes, disclaimers about content origination would provide more clarity between media creators and consumers. One possible way to ensure transparency for AI-generated content is “watermarking.” Watermarks for visual AI content may take the form of transparent or invisible identifiers embedded directly in the content to indicate that it is computer generated. While AI is a useful tool that will transform our society, we must find ways to ensure that people and businesses alike use it ethically. Google calls watermarking “a promising technical approach for empowering people and organizations to work with AI-generated content responsibly.” Although watermarking is not a perfect solution, it is a step in the right direction. We might also look to how large corporations are using AI ethically and what disclosure practices they propose.
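To make the idea of an "invisible identifier" concrete, here is a minimal sketch of one classic technique: least-significant-bit (LSB) embedding, where a short label is hidden in the lowest bit of each pixel byte, changing each byte by at most one and so remaining imperceptible. This is an illustrative toy, not the scheme Google or Jacquemus uses; the function names and the stand-in "image" are hypothetical, and production watermarks use far more robust, tamper-resistant methods.

```python
def embed_watermark(pixels: bytes, mark: str) -> bytes:
    """Hide each bit of `mark` in the lowest bit of successive pixel bytes."""
    bits = [(byte >> shift) & 1
            for byte in mark.encode("utf-8")
            for shift in range(7, -1, -1)]
    if len(bits) > len(pixels):
        raise ValueError("image too small for this watermark")
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return bytes(out)

def extract_watermark(pixels: bytes, length: int) -> str:
    """Read `length` characters back out of the low bits."""
    raw = bytearray()
    for i in range(length):
        byte = 0
        for j in range(8):
            byte = (byte << 1) | (pixels[i * 8 + j] & 1)
        raw.append(byte)
    return raw.decode("utf-8")

image = bytes(range(256)) * 4          # stand-in for real pixel data
tagged = embed_watermark(image, "AI")
print(extract_watermark(tagged, 2))    # -> AI
```

Because only the lowest bit of each byte changes, the marked image is visually identical to the original, yet any party who knows the scheme can recover the "computer generated" label.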


Credit: @Jacquemus | Instagram


Jacquemus Is Safe From Regulation, For Now

In the realm of marketing, it's important to consider the intent and impact of the content. In Jacquemus' case, the AI-generated ad aimed to generate excitement and capture attention, aligning with the brand's innovative and creative ethos. The use of deepfakes served this purpose efficiently, enabling the creation of a visually captivating advertisement without misleading or harming individuals or public figures. The concept of 'actual malice,' often a key element in libel cases, isn't applicable here, as the intention was not to deceive or harm but to entertain and showcase creativity within the fashion industry.


At the same time, the broader legal landscape around AI-generated content remains in flux, with legislatures and courts grappling with the complexities of regulating rapidly advancing technologies. As generative AI continues to evolve, it will become essential to establish clear ethical guidelines that balance the potential benefits of creative expression and innovation with concerns about misinformation and misuse. Striking this balance will be crucial in determining how AI-generated content is treated in various contexts, including marketing campaigns, while upholding principles of transparency, authenticity, and societal well-being.



*The views expressed in this article do not represent the views of Santa Clara University.
