CLC’s Trevor Potter Testifies at Senate Hearing on AI in Elections 

Trevor Potter testifying before the U.S. Senate Committee on Rules and Administration on September 27, 2023 in Washington, D.C. Photo by Martin H. Simon

Federal campaign finance laws uphold the fundamental right of voters to meaningfully engage in the democratic process, including by ensuring that voters know who is paying to influence our elections and our government when they decide how to vote. 

But our elections are also at risk from bad actors who seek to manipulate and deceive voters. Artificial intelligence (AI), a rapidly developing technology with game-changing potential applications, heightens these risks.  

On September 27, the U.S. Senate Committee on Rules and Administration held a hearing on “AI and the Future of our Elections.” Campaign Legal Center’s (CLC) President and Founder Trevor Potter, a Republican former Commissioner and Chairman of the Federal Election Commission (FEC), was one of five witnesses who testified at the hearing.  

Potter’s testimony focused on how AI tools could be used to easily generate and spread political communications that are deceptive or fraudulent, and the urgent need for policymakers to address the potential impacts of this technology on our democracy.  

Candidates, PACs, and outside groups spend billions of dollars each election cycle on political communications, which voters must parse before casting their ballots. Artificial intelligence has the potential to complicate this task — if not make it nearly impossible — because the technology has an unprecedented ability to easily create realistic but false content: ads that look and sound genuine but are actually distorted or even entirely fabricated.  

Already, reports indicate that AI has been used to mimic the sound of a candidate’s voice, to fabricate an image of a candidate hugging a polarizing figure, and even to support a foreign government’s disinformation efforts.  

If left unchecked, the use of AI to deceive voters could make it more difficult, or even impossible, for voters to evaluate the election ads that they see and hear — highlighting why federal policymakers need to act now.   

Candidates and campaigns could also face an uphill battle in sharing their desired messages with voters. If voters can no longer tell whether what they’re seeing is real or not, a candidate could be confronted with AI-generated false ads at crucial moments in their election campaign — and this problem could affect candidates of any political or ideological persuasion. 

AI also heightens the risk posed by hostile actors, both foreign and domestic, who seek to use disinformation to manipulate our elections — at the expense of our nation’s democratic process.  

In his testimony, Potter outlined CLC’s three proposals for how Congress should address the risks presented by AI’s use in our elections:  

1. Congress should augment the FEC’s authority to protect our elections against fraud. Candidates are already prohibited from fraudulently speaking for other candidates or parties on a matter that is “damaging” to those entities. This provision should be expanded to preclude all fraudulent misrepresentation — regardless of who is speaking and whether the matter is damaging — including through the use of AI.   

2. Congress should also pass a new law prohibiting the deceptive or fraudulent use of AI in elections, which would rest on a firm constitutional footing.  

3. Congress should expand existing campaign finance disclosure requirements to include disclaimers on the face of any ad using AI, which would inform viewers when electoral content has been materially created, altered, or disseminated with the help of AI. This would promote transparency and provide voters with the necessary information to evaluate political communications made with AI that are aimed at shaping their voting decisions.  

Because artificial intelligence is a rapidly evolving technology, these proposals are not exhaustive. They are meant to be a foundation from which policymakers can begin to respond to the electoral risks associated with AI.  

Moreover, Congress must establish basic rules for the use of AI in elections without regard for partisanship or political gain. If left unregulated, this technology could increase the risk of misinformation and distrust on both sides of the aisle. With appropriate guardrails, Congress can move us closer to a stronger democracy that works for all. 

Janel is a Media Associate for Campaign Finance and Ethics at CLC.