
Concerns about how AI will affect the 2024 election are growing

As the 2024 primaries approach, the rapid advancement of artificial intelligence (AI) is fueling worries about how the technology might affect the results of the upcoming election.

Artificial intelligence (AI), a cutting-edge technology that can produce text, images, audio, and even deepfake videos, has the potential to spread misinformation in the already divisive political landscape and further erode public trust in the nation’s electoral system.

“2024 will be an AI election, much the way that 2016 or 2020 was a social media election,” said Ethan Bueno de Mesquita, interim dean at the University of Chicago Harris School of Public Policy. “We will all be learning as a society about the ways in which this is changing our politics.”

Experts have raised concerns that AI chatbots could feed voters false information when they use them to look up ballot details, election calendars, or polling locations. More sinister still, AI could be used to fabricate and spread disinformation targeting specific politicians or causes.

“I think it could get pretty dark,” said Lisa Bryant, chair of the Department of Political Science at California State University, Fresno, and an expert with MIT’s Election Lab.

Polls suggest concern about AI extends well beyond academia: Americans increasingly worry that the technology will add complication and confusion to the already divisive 2024 cycle.

Bipartisan majorities of American adults are concerned that artificial intelligence (AI) will “increase the spread of false information” in the 2024 election, according to a UChicago Harris/AP-NORC poll published in November.

According to a Morning Consult-Axios survey, the percentage of American adults who believe AI will have a negative effect on voters’ trust in candidate commercials and in election results in general has increased recently.

Almost 60% of respondents stated they believed AI-spread misinformation would influence the winner of the 2024 presidential contest.

“They are a very powerful tool for doing things like making fake videos, fake pictures, et cetera, that look extremely convincing and are extremely difficult to distinguish from reality — and that is going to be likely to be a tool in political campaigns, and already has been,” said Bueno de Mesquita, who worked on the UChicago poll.

“It’s very likely that that’s going to increase in the ‘24 election — that we’ll have fake content created by AI that’s at least by political campaigns, or at least by political action committees or other actors — that that will affect the voters’ information environment and make it hard to know what’s true and false,” he said.

An AI-generated rendition of former President Trump’s voice was allegedly used in a television advertisement over the summer by the DeSantis-aligned super PAC Never Back Down.

Just before the third Republican presidential debate, the former president’s campaign released a video clip in which the candidates introduce themselves using Trump’s favorite nicknames for them, seemingly mimicking the voices of his fellow Republicans.

Additionally, the Trump campaign published a modified version of a report that Garrett Haake of NBC News provided prior to the third GOP debate earlier this month. Haake’s report opens the clip unaltered, but then a voiceover criticizes the former president’s Republican opponents.

“The danger is there, and I think it’s almost unimaginable that we won’t have deepfake videos or whatever as part of our politics going forward,” Bueno de Mesquita said.

Politicians’ use of AI in particular has pushed tech companies and policymakers to think about regulating the technology.

Google announced earlier this year that verified election advertisers must “prominently disclose” when their advertisements have been digitally altered or generated.

Meta also plans to mandate disclosure when a political advertisement uses a “photorealistic image or video, or realistic-sounding audio” that was created or modified to, among other things, portray a real person saying or doing something they did not do.

In October, President Biden signed an executive order on artificial intelligence that included plans for the Commerce Department to develop guidelines for content authentication and watermarking, as well as new safety standards.

“President Biden believes that we have an obligation to harness the power of AI for good, while protecting people from its potentially profound risks,” a senior administration official said at the time.

Lawmakers, however, have largely been left scrambling to regulate the sector as it races ahead with new innovations.

As part of her campaign, Shamaine Daniels, a Democratic candidate for Congress from Pennsylvania, is using an AI-powered voice tool developed by startup Civox for phone banking.

“I share everyone’s grave concerns about the possible nefarious uses of AI in politics and elsewhere. But we need to also understand and embrace the opportunities this technology represents,” Daniels said when she announced her campaign would roll out the tech.

According to experts, AI also has beneficial applications in election cycles, such as helping election officials clean voter rolls of duplicate registrations and helping voters learn which candidates align with their positions on particular issues.

However, they also caution that the technology could worsen problems that surfaced in the 2016 and 2020 cycles.

According to Bryant, AI could enable misinformation to “micro-target” people even more precisely than social media does today. No one is immune, she said, pointing to how advertisements on platforms like Instagram already shape behavior.

“It really has helped to take this misinformation and really pinpoint what kinds of messages, based on past online behavior, really resonate and work with individuals,” she said.

Bueno de Mesquita said he is less worried about micro-targeted voter manipulation campaigns, since the evidence that social media targeting has swayed elections is weak. Resources, he argued, are better spent educating the public about the “information environment” and steering people toward reliable sources of information.

Nicole Schneidman, a technology policy advocate at the nonprofit watchdog group Protect Democracy, said that rather than expecting AI to bring “novel threats” to the 2024 election, the group anticipates the technology will accelerate trends that are already compromising democracy and election integrity.

She cautioned against overstating the likelihood that AI will power a sweeping misinformation campaign capable of swinging the outcome of the election.

“Certainly, the technology could be used in creative and novel ways, but what underlies those applications are all threats like disinformation campaigns or cyberattacks that we’ve seen before,” Schneidman said. “We should be focusing on mitigation strategies that we know that are responsive to those threats that are amplified, as opposed to spending too much time trying to anticipate every use case of the technology.”

Getting people familiar with the rapidly evolving technology may be a crucial first step toward coping with it.

“The best way to become AI literate myself is to spend half an hour, an hour playing with the chatbot,” said Bueno de Mesquita.

In the UChicago Harris/AP-NORC survey, people who said they were more familiar with AI tools were also more likely to say the technology could increase the spread of false information — suggesting that familiarity with the technology also raises awareness of its risks.

“I think the good news is that we have strategies both old and new to really bring to the fore here,” Schneidman said.

Despite investments in detection tools, she said, the technology may struggle to keep pace with AI’s increasing sophistication. Instead, she said “pre-bunking” by election officials can help educate the public before they encounter AI-generated content.

Schneidman said she also hopes election officials will make greater use of digital signatures to signal to the public and the press which information comes from a reliable source and which is phony. Candidates could likewise attach these signatures to the images and videos they post, she said, to guard against deepfakes.

“Digital signatures are the proactive version of getting ahead of some of the challenges that synthetic content could pose to the caliber of the election information ecosystem,” she said.

She said election officials, political leaders, and journalists can get accurate information about when and how to vote in front of the public, heading off confusion and voter suppression. There is also precedent for narratives about election meddling, she added, which benefits those combating AI-generated misinformation.

“The benefits of pre-bunking include the ability to create powerful counter-messaging that foresees recurrent misinformation narratives and, ideally, get that in front of voters’ eyes well in advance of the election, ensuring that message is consistently landing with voters so that they are getting the authoritative information that they need,” Schneidman said.
