
AI’s Impact on Political Elections and What California is Doing to Mitigate Misinformation




2024 is the first presidential election year in which Artificial Intelligence (“AI”) has had widespread influence on the creation and distribution of information about candidates and electoral processes. With AI-generated political content circulating on social media platforms such as TikTok, Instagram, and X (formerly known as Twitter), it is unsurprising that some states, like California, have enacted new regulations on AI to mitigate its potential to mislead voters. Specifically, Governor Newsom signed three new bills into law this year. 


Advocacy groups, such as the Campaign Legal Center, have highlighted the danger of AI-generated political ads because of their ability to create deceptively realistic false content. For example, “Deepfakes [are] manipulated media that depict people doing or saying things they didn’t say or do, or events that didn’t occur—to mislead the public about what candidates are asserting, their positions on issues, and even whether certain events actually happened.” States like California now recognize that if AI-generated political deepfakes are left unchecked, these materials could distort a voter’s ability to make informed decisions. The new California laws aim to censor deepfakes or alert users to their presence on social media. 


AI affects voters on both sides of the political spectrum. For example, a deepfake video published on TikTok depicted Massachusetts Senator Elizabeth Warren (D) saying that Republicans should not be allowed to vote. Additionally, the presidential campaign of Florida Governor Ron DeSantis (R) shared a video containing AI-generated images showing former President Donald Trump hugging Dr. Anthony Fauci, chief medical advisor to President Joe Biden from 2021 to 2022. Neither video depicted real events; both were the product of domestic pranksters or foreign actors creating AI-generated content. Because AI’s application in the political realm is a new phenomenon and has been largely unregulated in the United States up to this point, reasonable voters may be unable to differentiate truthful information from AI-generated content when forming political opinions. 


Forming political opinions based on AI-generated content infringes on a voter’s fundamental right to access truthful information. Political disinformation is nothing new to elections in the United States, but with AI a largely unexplored territory and the upcoming presidential election among the most contentious in the nation’s history, “AI-based disinformation could add fuel to the fire if we do not act quickly to safeguard our democracy.” 


California has been at the forefront of AI regulation. In 2019, Governor Gavin Newsom signed Assembly Bill 730 into law, among the first bills of its kind to curb deceptive audio and video media that altered candidates’ words or appearance. Assembly Bill 730 laid the groundwork for three new California bills signed by the Governor this year. Assembly Bill 2655, the “Defending Democracy from Deepfake Deception Act of 2024,” will require social media platforms to remove altered political content within 120 days of an election. Assembly Bill 2839 “broadly prohibits the distribution of election communications containing certain materially deceptive content.” Assembly Bill 2355, an amendment to the Political Reform Act of 1974, requires campaigns to include disclaimers on AI-generated advertisements and will be in effect for the 2024 presidential election. It is worth noting that although these bills are California legislation, they apply to federal elections, meaning a post may carry a disclaimer when viewed in California but not in another state. Assembly Bill 2355 is also the least potent of the trio: it only requires disclaimers, and most social media platforms already have some method of self-policing misinformation, such as community notes on X. The bill merely extends that requirement to AI content made specifically for an election by an affiliate of a campaign. 


The Defending Democracy from Deepfake Deception Act is the most sweeping of the three. It requires social media platforms to provide users with a convenient method for reporting “materially deceptive content” and to remove such content within 120 days of an election. A post is materially deceptive for election purposes if it is “reasonably likely to harm [a candidate’s] reputation or electoral prospects.” The rule also applies to election officials, and a post may be subject to removal if it “is reasonably likely to falsely undermine the confidence in election outcomes.” 


Governor Newsom is confident that this legislation will provide the safeguards necessary to ensure election integrity going forward: “It’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation—especially in today’s fraught political climate.” These new laws are part of a broader plan for California to pioneer AI regulation. As with any piece of tech regulation, the effects remain to be seen, but it is clear that new laws aimed at mitigating the harms of misinformation will be paramount to guarding our democracy and its electoral processes. 


Moving Forward: The Implications and Concerns of Regulating AI in Elections

AI regulation as it pertains to elections is well-meaning: it serves to promote an informed electorate, safeguard electoral processes, protect candidates, and curb deceptive political discourse. If not implemented carefully, however, courts could mishandle these laws, leading to adverse effects on free speech.


While AI election regulation has been implemented in other states, its effectiveness remains uncertain. Such regulation has been enacted in ideologically diverse states, including Minnesota, Texas, and Washington, and advocacy groups have petitioned the Federal Election Commission (FEC) to extend it federally. Despite this widespread support and adoption, it is not yet clear how effective these laws will be in preventing election deepfakes. More specifically, it is uncertain how efficiently courts will be able to manage the disputed cases that may arise. For example, lawsuits have already emerged in Sacramento contesting two of the three laws. Moreover, it may take a court several days to order “injunctive relief to stop the distribution of the content. [By] then, damages to a candidate or to an election could have already been done,” undermining the laws’ impact. 


In attempting to mitigate the above concerns, AI regulation may be imposed too broadly and, if not handled properly, encroach on people’s freedom of speech. While deceptive content is not entitled to extensive constitutional protections, content “cannot be prohibited simply for its own sake.” Any restriction requires some independent justification tied to a specified objective, so it is up to policymakers to articulate their objectives clearly to prevent undue restriction of expression. In the absence of extensive court precedent, and given how recently these laws took effect, it is too early to say whether the three California laws signed by Governor Newsom adequately limit erroneous content or whether they allow people to report and indiscriminately censor any content they dislike, which would severely limit free speech and hinder the spread of even non-deceptive information. Time and the 2024 election will shed more light on this concern.


*The views expressed in this article do not represent the views of Santa Clara University.


