Democracy Decoded: Season 4, Episode 5 Transcript
Simone Leeper: Imagine you're a registered voter in New Hampshire preparing to cast your ballot in the presidential primary. Your phone rings and on the line is a familiar voice.
Robo Biden: What a bunch of malarkey. You know the value of voting Democratic when our votes count. It's important that you save your vote for the November election.
Leeper: It sounds like it's a recording from President Joe Biden right before the primary telling you not to vote until November.
Robo Biden: Your vote makes a difference in November, not this Tuesday. If you would like to be removed from future calls, please press two now.
Leeper: You may remember hearing about these calls, which were supposedly from President Biden, especially because it wasn't actually him on the phone. A political consultant hired a marketer to create the bogus voice messages using new artificial intelligence tools in order to discourage voters from participating in the primary election. The trick, while ultimately ineffective, made national news.
John M. Formella: Law enforcement across the country is unified on a bipartisan basis and ready to work together to combat any attempt to undermine our elections.
Leeper: Soon after, the Attorney General of New Hampshire, John M. Formella, announced that authorities considered the calls a potential violation of laws that protect the right to vote.
Formella: We are committed to keeping our elections free, fair and secure.
Leeper: The person alleged to have orchestrated the AI robocalls was later indicted in New Hampshire for voter suppression and impersonation of a candidate. He also faces a proposed fine of $6 million from the Federal Communications Commission. For people who track misinformation and disinformation, the fake Biden robocalls seemed like they were coming from a familiar playbook that used new tricks.
Adav Noti: People have been trying to prevent other people from voting for as long as voting has been around, unfortunately, and the tools that have been used to do that have shifted over time.
Leeper: Adav Noti is the Executive Director at Campaign Legal Center.
Noti: Doctored images have been around for a long time. Doctored video has been around for a long time. What's different now? What AI changes is that it is now significantly easier, cheaper, and faster to create very real-looking fakes, and that's basically the main threat of AI right now.
Leeper: The AI Biden robocalls are remembered as the event that introduced AI generated fakes into the 2024 election. Like so many early efforts in a new venture, the scheme fizzled, but it got the attention of Adav and other experts as they looked ahead to election day.
Noti: It was a little bit clumsy, but that was just one person with a budget of almost nothing, and he was able to reach thousands and thousands of primary voters with an untrue and potentially very damaging message. So you can imagine how a sophisticated actor with a real budget could use those same tools to generate much more effective mis- and disinformation.
Leeper: I'm Simone Leeper and this is Democracy Decoded, a podcast where we examine our government and discuss innovative ideas that could lead to a stronger, more transparent, accountable, and inclusive democracy. Like Adav, I work for the nonpartisan Campaign Legal Center. Our organization advocates for every voter in America to be able to meaningfully participate in and affect the democratic process. I'm an attorney on CLC's Redistricting team representing voters in court across the country who want our democracy to be better and more representative.
On this season of the podcast, we're examining American elections. We're taking a deep dive into the tried and tested systems and some of the newly updated laws that ensure our elections are safe, secure, and accurate no matter what challenges they face.
In this episode, we're looking at misinformation, disinformation, and the fast rise of AI to boost these false narratives. Misinformation and disinformation are similar in that both are factually inaccurate. The difference is that disinformation is deliberately meant to mislead people. The fight for democracy is in part a fight to describe reality to one another, and in many areas of our politics right now, bad actors have successfully convinced people of a supposed reality that simply isn't true.
Stephen Richer: My biggest concern regarding misinformation, I suppose, is that people won't accept the results of a fair and accurate and lawful election. Specifically, it started with the 2020 election, in believing in the fairness and accuracy and lawfulness of that election.
Leeper: This is Stephen Richer, the recorder of Maricopa County, Arizona. He's a Republican holding one of the top elected positions in a county that's home to four and a half million people.
Richer: I was elected in 2020 in the fateful November 3, 2020 election.
Leeper: The office of the recorder has three main responsibilities. The first is recording public documents like real estate transactions. Second is registering voters.
Richer: And then the third is election administration, and specifically I'm responsible for early voting. Maricopa County has factored significantly into the national conversation because it's highly competitive inside a highly competitive state. Maricopa County makes up about 62% of the voting population of Arizona, and Arizona has been designated by almost every single political pundit as one of five or six states to watch in the 2024 election, both for the presidential contest but also for control of the United States Senate and control of the United States House. And so a lot of eyes on Maricopa County.
Leeper: We're talking with Stephen because since his 2020 election in a hotly contested county in a hotly contested state, he has been on the front lines in the war against misinformation and disinformation. Often his battleground is on X, the platform formerly known as Twitter. He gives clear replies to people in Arizona who may be asking legitimate election questions and firm explanations to those who may be trying to distort facts.
Richer: Before I took office, I was already receiving a lot of inquiries, a lot of concerns, a lot of theories as to how the 2020 election was either inaccurate or unlawful or was wholesale fraudulent. And so I actually spent much of November 2020 and December 2020 watching lots of Rumble videos or YouTube videos about different theories from people as to the 2020 election and trying to be as knowledgeable about those and respond to as many of those as possible.
Leeper: Since taking office, he has used social media to debunk potential misinformation before it has a chance to take hold. For instance, in February, a voter posted a picture of two mail-in ballots she received to vote in the primary. "Maricopa County at its finest," she wrote with clear sarcasm. Her post was a perfect opportunity for people to get mad or confused about how the primary election was being administered. So Stephen replied with a thorough explanation.
Richer: The reason I chose to respond was because it was a common question and it's tantalizing in that an average consumer could look to something like that and say, "That seems wrong and what's going on here?" And so in this instance it was that the voter had moved shortly before the voter registration deadline, and so two ballot packets were triggered, yes, but the first ballot packet was deactivated and the voter probably could have reasonably guessed that it was because she had changed her voter registration address right before the deadline.
Leeper: And in a rarity for posts on X, more people saw Stephen's reply than saw the original post.
Richer: I like to add context for anyone who is interested in digging deeper, because I still think that despite all the efforts over the past few years from election administrators and associated nonprofits and 501(c)(3)s, we are being outgunned significantly. There is still way more inaccurate information about election administration than there is accurate information. I'm trying to up the firepower on the accurate information side.
Leeper: Stephen has a large following on social media and he has a prominent elected position in the fourth-largest county in America. His platform and his energy are a major force for accurate information. Yet he's still just one voice pushing against a looming problem for democracy. Attempts at misinforming or disinforming the public are rampant in the digital age, and they're a big threat to our system of government.
Mia Hoffman: In the context of elections, mis- and disinformation is generally used to achieve three broad goals.
Leeper: This is Mia Hoffman. She's a research fellow working on AI governance at the Center for Security and Emerging Technology at Georgetown University.
Hoffman: The first is to sway voter opinions, to change their opinions about different candidates. The second is to suppress voting, to discourage voters from going to the polls in the first place. And the third is to seed mistrust in electoral processes, and in this concept of democratic governance as a whole.
Leeper: To sway people using false information, to discourage people from voting, and to seed mistrust in elections at large. Three goals, all of which hurt democracy. In Mia's view, much of the mis- and disinformation that Stephen is combating works toward the third category, seeding mistrust in elections.
Hoffman: A lot of the narratives from the previous election, with the manipulated voting machines or fraudulent mail-in voting, really helped to seed mistrust in electoral processes. That was really the target over the past four years. And the internet and social media in particular have really become the main environments in which mis- and disinformation are being shared and consumed. This includes platforms like Facebook and TikTok, of course, but also more niche social media that we don't immediately think about, like WhatsApp and Telegram and WeChat, for instance.
Richer: There are many aspects to harm and it's hard to measure some of them.
Leeper: This is Stephen Richer, again.
Richer: I suppose simply not believing in the system is a harm in and of itself because our form of government is predicated on the belief that the governors really have the consent of the governed. And of course the vehicle for doing that is elections, and that has manifested in some actionable, very real hurts to individuals, and whether it's voters or whether it's people working elections who have received anger, hate, threats.
Leeper: Another major harm that misinformation and disinformation cause: tying up elected officials and keeping them from dealing with other pressing matters.
Richer: It's been the number one issue here in Arizona for the last three and a half years: election denialism. And every minute that we spend on that is a minute that we're not spending on homelessness, the border, inflation, education. Those would benefit from the serious thought and serious attention that has unfortunately been allocated to something that is largely rooted in a fiction.
Leeper: Misinformation and disinformation aren't exactly new threats to American democracy. For most of our history, however, for a piece of false information to spread widely, it had to fool people in the traditional news media into printing or broadcasting it. Today, bad actors don't need to trick professional journalists. They just need falsehoods to spread through the digital equivalent of word of mouth.
The newest tool bad actors can use is generative AI. It allows creators to make false audio like the Biden robocall. They can make convincing images showing things that never happened, or even entire fake news articles in a number of different languages. AI changes the stakes for mis- and disinformation: the content it can create isn't simply false, it's false evidence for a falsehood.
Hoffman: I think the most commonly known are deep fakes, so kind of fake images or videos, even audio recordings of political candidates or even of election officials.
Leeper: This is Mia Hoffman, again, the AI researcher at Georgetown.
Hoffman: More recently, what we've seen is an increase of deep fakes of polling sites that show, for instance, really long lines, or a closed polling site that's supposed to be open, or even some kind of security or safety threat that is trying to discourage voters from actually going there. Lesser known, but also very common tactics are the creation of bot armies, so fake accounts on social media that post a lot of provocative and controversial content that aims to polarize the discussions on social media. And the third broad type are these phony media sites or news sites that claim to have authoritative information about elections but are actually just AI-generated nonsense.
Leeper: None of this is to say that AI itself is scary per se. It's more easily understood as an evolution of machines and software: tasks that once required a person may now use AI.
Hoffman: This includes processing text or producing text or processing images, telling what can be seen in an image. But also making predictions or making recommendations about what movie you'd like to see next. AI is embedded in Netflix, but also in all kinds of self-driving cars, for instance, in your autopilot, et cetera.
Leeper: When we talk about misinformation and disinformation, the types of AI we're concerned with are called generative AI. If you've ever used AI to create images or interacted with a chatbot, you've used generative AI.
Hoffman: AI is really a tool, and this tool isn't inherently good or bad or dangerous. The effect of AI on elections really depends on how it is used. And both generative and non-generative AI have useful purposes in election contexts that are non-nefarious. Just think of managing voter rolls or community outreach, which can also be done in more languages thanks to AI. But of course, generative AI that produces text can be very useful for propagandists and misinformation peddlers, because it really enables operators to produce material at scale at much lower cost than they used to.
Leeper: The falling cost of AI poses a potential threat for elections. Cheaper tools mean more bad actors can try more tactics targeting smaller groups. They can use more languages, produce more misleading content, and iterate faster to find what works.
Hoffman: AI has given the disinformation game a productivity boost. Overall, with the general availability of generative AI tools, a lot of influence campaigns can now follow tactics that used to be cost-prohibitive because they would require a lot of manual labor. This includes personalizing messages and targeting smaller groups and smaller communities that have specific vulnerabilities that can be exploited with specific, tailored texts, for instance. It also means that they can vary their content more often and produce texts in more languages, which used to require somebody native in that language to help translate. And all of these things make disinformation campaigns a lot harder to detect too, because we can no longer search for the copy-paste text that is posted hundreds of thousands of times on the internet; we have to search for smaller campaigns that are just as impactful.
Leeper: This ability to produce materials in multiple languages can pose a particular threat for groups on the margins. It puts non-English speakers at new risk of being targets for AI-generated mis- and disinformation.
Hoffman: Immigrant communities in particular are really at a higher risk of harm, because these communities' languages and perspectives aren't necessarily represented in mainstream media. They often retreat to more niche platforms for their news consumption.
Leeper: And once people are exposed to mis- and disinformation on these platforms, it can take root in a way that's hard to counter.
Hoffman: There is a bit of an information vacuum for members of these communities in their native language for how election processes work, or what a change in voting policy means, or just illustrations of the most common disinformation narratives. There's also a lack of campaign outreach in their native languages. So we have this information vacuum, which creates a big opening for malicious actors to inject disinformation through phony news sources or media pages that target these communities in their native languages.
Leeper: The rapid advances in AI have made the information environment particularly chaotic in 2024. Adav Noti, CLC's Executive Director, has seen mis- and disinformation campaigns aimed at swaying the vote.
Noti: In the pre-election window, we've already seen some instances of the use of AI tools to try to influence who votes and maybe how they vote, but more likely who votes, basically to try to dissuade segments of the voting populace from turning out. This is a tactic that has been around for several election cycles, using prior tools rather than AI. It was used very heavily in the 2016 presidential election, for example. Those messages are disseminated in a very targeted way to people who are susceptible to them and whom the creators of the message don't want to vote.
Leeper: Adav is looking beyond election day as well.
Noti: In the post-election period, I think it will be different. If we see problematic uses of AI after election day, meaning when the votes are being counted, I think that will be more geared towards trying to create doubt or confusion about the results, about the process of generating the results, about how vote counting and election certification work. And the sorts of folks who disseminate those messages are generally trying to destabilize the system, either for political reasons or, if we're talking about hostile foreign nations, just to create dissension and confusion among the United States as a whole.
Leeper: By threatening elections and creating mass confusion, mis- and disinformation become matters of national security.
Noti: In an ideal world, Congress would be taking steps here to set the boundaries for the use of AI tools when it comes to trying to influence elections, and state legislatures and federal regulators like the Federal Election Commission and the Federal Communications Commission would be doing the same. For the most part, though, that has not happened yet.
Leeper: Unfortunately, the speed with which AI is evolving has left Americans to handle a new threat using older laws and regulations.
Noti: There are some significant challenges to addressing AI-generated mis- and disinformation through the law. One is that the technology is fairly new and most lawmakers and regulators don't understand it particularly well yet. So there's sort of a literacy challenge of getting folks up to speed on what the technology is, how it works, how it's different from prior sources of mis- and disinformation, and what makes it particularly in need of regulation.
And there hasn't really been sufficient time for lawmakers to get educated on those issues yet. And then even once they are educated, there needs to be analysis and discussion about what the solution should be: what should the laws say? There are some great ideas going around about that, but there is not a consensus yet on what the laws should look like. None of that is a permanent barrier to passing laws. It is a barrier to passing laws in time for the 2024 election.
Leeper: So for this election cycle, given federal inaction, some states have stepped up.
Noti: We're seeing states around the country act through new laws and new rules, everything from requiring labeling of election-related AI material, so that if you're exposed to a communication that falls into that category, you at least know that what you're seeing is AI-generated, to more aggressive approaches, including restrictions on the sorts of communications that can be disseminated using AI tools.
Leeper: Since January, more than a dozen states have passed laws that address deep fakes and other deceptive media in elections. Another five states passed similar laws before this year.
Noti: The jury is still out on which of those measures will be most successful. These tools are very new. The rules around them are even newer, and we'll have to do an assessment through the election cycle and after of which measures were successful, which ones were less successful, and use that to inform the rules going forward.
Leeper: Many state lawmakers and regulators are doing their best to address AI-generated mis- and disinformation, but they can't move as fast as new AI tools. So during this election cycle, voters are counting on professional news media to identify and call out bad actors.
Noti: Journalists are really the front line of the defense when it comes to stopping mis- and disinformation. Individual social media users have a very important role to play in not passing along those false messages. But journalists have the key role of being able to inform the public about what's true and what's not. When something starts to spread, journalists can say, "This is false. This is unambiguously false." They can demonstrate that it's AI-generated. That can have a huge impact on how the discourse unfolds.
Leeper: The other institutions with the influence to help curb bad actors are the social media platforms themselves. Since 2016, platforms have tried, with varying degrees of dedication, to protect users from mis- and disinformation. But today, users are largely on their own.
Noti: The role of social media companies in the spread of mis- and disinformation unfortunately is not a happy story, and it's not one that's improving. In some prior election cycles, some of the major social media platforms took steps to prevent the spread of false information about elections. Those measures were not particularly effective, but there were at least some efforts. In the 2024 election cycle, most of the major social media platforms have essentially washed their hands of policing that sort of content and are taking a very different approach, which is that they'll let pretty much anything go, with only narrow categories deemed out of bounds, and they will rely on their users to flag what is false. That approach does not work. That's a big part of the reason that we need laws and rules here.
Leeper: There's one other key player in the spread of mis- and disinformation that has the potential to do better than social media companies and which could perhaps have even more influence than lawmakers or journalists. That's the public. You and me.
Hoffman: Just because a narrative or a conspiracy theory is out there, that's not the same as succeeding with that narrative.
Leeper: This is Mia Hoffman again.
Hoffman: And the overall impact of a disinformation campaign depends a lot on how many people, real people, are paying attention to it. Remember that we have power over how we consume information, power over choosing what we believe, and power over what we share with our families and friends.
Leeper: One risk of even discussing mis- and disinformation can be that it takes over the conversation. AI especially has an aura of mystery to it, perhaps even of inevitability, but it's important to remember we have many effective tools at the ready to counteract it.
Hoffman: People painted a bit of a doomsday scenario, in my opinion, people saying, "We're in a post-truth world," for instance. And while there have been efforts to try and create that, it hasn't really been super successful, or not as successful as probably anticipated. This is an unintended consequence of raising awareness around disinformation and its risks and harms. And I'm not trying to downplay these risks. I just want to make sure that people understand not everything is fake. We still have truth, facts, and information, and we have the means to verify information. So don't panic, essentially.
Leeper: So what can you do to be part of the solution to mis- and disinformation? For starters, you should consider the source. Where did the information come from in the first place and is it credible? If it's an article from an unknown source, investigate the author. Are they a real person? Consult trusted media and look into whether there are other supporting sources for a story to determine if it's accurate. Taking a moment to double-check information before you share it can go a long way toward combating the problem, as can encouraging your friends to do the same.
Richer: I think that they can encourage people in their social circles to dig deeper if they see something on the internet that's outlandish.
Leeper: If there's one thing Stephen Richer knows a lot about, it's debunking bad information. His strategy, if you have time: take a person seriously when they tell you something that sounds fishy, but then ask them to verify it.
Richer: You don't have to be the friend who's like, "That's really stinking dumb," because I understand that puts you in an awkward social position, but just say, "Are you sure about that? I know that sounds sort of funky, but have you contacted your local election administrator? Have you contacted people who might know more about this? And have you opened up to the possibility that there might be a perfectly logical explanation for something that you saw on the internet or that what you saw on the internet might be completely made up?" And I think that offers at least a different perspective, maybe some new information, and hopefully that can be compelling because there are answers to everything that's posted on the internet regarding election administration.
Leeper: Stephen is uncommonly dedicated to stamping out mis- and disinformation, even if he has to go tweet by tweet. For a broader approach, he nudges people to take part in this democratic process they're so skeptical of.
Richer: One thing that we always say is encourage people to get involved in elections. Elections aren't just the same group of people operating behind the curtain. Elections are administered by thousands and thousands and thousands of bipartisan teams, many of whom are temporary workers. And so if you don't trust the process, then get involved in the process and be part of the bipartisan teamwork that goes into it.
Leeper: The uncomfortable fact is we will unfortunately always be battling mis- and disinformation in our elections. AI has only made it cheaper and easier than ever for bad actors to inject falsehoods into our electoral process. We all have to be on guard, and many of the people being targeted right now have the least help protecting themselves against it. But even as lawmakers play catch up to these new tech tools, we have the power to work against mis- and disinformation. We can do our part to stop their spread across social media, and we can help friends and family take the extra step to get better information too.
I want to thank Stephen Richer, Mia Hoffman and Adav Noti for appearing in this episode. You can find more background information on the topics discussed in our show notes along with a full transcript of the episode. This season of Democracy Decoded is produced by JAR Audio for Campaign Legal Center. CLC is a nonpartisan nonprofit organization that advances democracy through law at the federal, state, and local levels, fighting for every American's right to responsive government, and a fair opportunity to participate in and affect the democratic process. You can learn more about us and support our work at campaignlegal.org.
I'm your host, Simone Leeper. Thanks so much for listening. If you learned something today, we'd really appreciate it if you could leave us a review on your podcast platform of choice and hit subscribe to get updates as we release new episodes. Leading the production for CLC are Casey Atkins, Multimedia Manager, and Matty Tate-Smith, Senior Communications Manager for Elections. This podcast was produced by Sam Eifling and Reaon Ford, edited and mixed by Luke Batiot. Democracy Decoded is a member of The Democracy Group, a network of podcasts dedicated to engaging in civil discourse, inspiring civic engagement, and exploring the future of our democracy. You can learn more at democracygroup.org.