
Technology

Concerns about how AI will affect the 2024 election are growing


As the 2024 primaries approach, the rapid advancement of artificial intelligence (AI) is fueling worries about how the technology might affect the election's results.

Artificial intelligence (AI), a cutting-edge technology that can produce text, images, audio, and even deepfake videos, has the potential to spread misinformation in the already divisive political landscape and further erode public trust in the nation’s electoral system.

“2024 will be an AI election, much the way that 2016 or 2020 was a social media election,” said Ethan Bueno de Mesquita, interim dean at the University of Chicago Harris School of Public Policy. “We will all be learning as a society about the ways in which this is changing our politics.”

Experts have raised concerns that AI chatbots could feed voters false information when they use them to look up ballot details, election calendars, or polling locations. More sinister still, AI could be used to fabricate and spread disinformation targeting specific politicians or causes.

“I think it could get pretty dark,” said Lisa Bryant, chair of the Department of Political Science at California State University, Fresno, and an expert with MIT’s Election Lab.

Polls suggest the concern extends well beyond academics: Americans broadly appear worried about how the technology could complicate or confuse an already divisive 2024 cycle.

Bipartisan majorities of American adults are concerned that artificial intelligence (AI) will “increase the spread of false information” in the 2024 election, according to a UChicago Harris/AP-NORC poll published in November.

According to a Morning Consult-Axios survey, the percentage of American adults who believe AI will have a negative effect on voters’ trust in candidate commercials and in election results in general has increased recently.

Almost 60% of respondents stated they believed AI-spread misinformation would influence the winner of the 2024 presidential contest.

“They are a very powerful tool for doing things like making fake videos, fake pictures, et cetera, that look extremely convincing and are extremely difficult to distinguish from reality — and that is going to be likely to be a tool in political campaigns, and already has been,” said Bueno de Mesquita, who worked on the UChicago poll.

“It’s very likely that that’s going to increase in the ‘24 election — that we’ll have fake content created by AI, if not by political campaigns then by political action committees or other actors — and that will affect voters’ information environment and make it hard to know what’s true and false,” he said.

An AI-generated rendition of former President Trump’s voice was allegedly used in a television advertisement over the summer by the DeSantis-aligned super PAC Never Back Down.

Just before the third Republican presidential debate, the former president's campaign released a video clip in which his rivals appeared to introduce themselves using Trump's favorite nicknames for them, seemingly mimicking the candidates' voices.

Additionally, the Trump campaign published a modified version of a report that Garrett Haake of NBC News provided prior to the third GOP debate earlier this month. Haake’s report opens the clip unaltered, but then a voiceover criticizes the former president’s Republican opponents.

“The danger is there, and I think it’s almost unimaginable that we won’t have deepfake videos or whatever as part of our politics going forward,” Bueno de Mesquita said.

Politicians’ use of AI in particular has pushed tech companies and policymakers to think about regulating the technology.

Google announced earlier this year that verified election advertisers would have to “prominently disclose” when their advertisements were digitally altered or created.

Meta likewise intends to mandate disclosure when a political advertisement uses a “photorealistic image or video, or realistic-sounding audio” that was created or altered to, among other things, portray a real person saying or doing something they did not do.

In October, President Biden signed an executive order on artificial intelligence that included plans for the Commerce Department to develop guidelines for content authentication and watermarking, as well as new safety standards.

“President Biden believes that we have an obligation to harness the power of AI for good, while protecting people from its potentially profound risks,” a senior administration official said at the time.

Lawmakers, however, have largely been left scrambling to regulate the sector as it races ahead with new innovations.

As part of her campaign, Shamaine Daniels, a Democratic candidate for Congress from Pennsylvania, is using an AI-powered voice tool developed by startup Civox for phone banking.

“I share everyone’s grave concerns about the possible nefarious uses of AI in politics and elsewhere. But we need to also understand and embrace the opportunities this technology represents,” Daniels said when she announced her campaign would roll out the tech.

According to experts, AI has beneficial applications in election cycles, such as helping election officials clean voter rolls of duplicate registrations and helping voters learn which candidates align with their positions on particular issues.

However, they also caution that the technology may make issues that were discovered in the cycles of 2016 and 2020 worse.

According to Bryant, AI could enable misinformation to “micro-target” people even more precisely than social media already does. No one is immune, she said, citing the way advertisements on platforms like Instagram already shape behavior.

“It really has helped to take this misinformation and really pinpoint what kinds of messages, based on past online behavior, really resonate and work with individuals,” she said.

Bueno de Mesquita said he is less worried about micro-targeted voter manipulation campaigns, since evidence suggests social media targeting has not been effective at swaying elections. Resources, he said, would be better spent educating the public about the “information environment” and directing people to reliable sources of information.

Nicole Schneidman, a technology policy advocate at the nonprofit watchdog group Protect Democracy, said that rather than expecting AI to produce “novel threats” in the 2024 election, the group anticipates the technology accelerating trends that are already eroding democracy and election integrity.

She cautioned against overstating what AI might add to the larger misinformation threat that could influence the outcome of the election.

“Certainly, the technology could be used in creative and novel ways, but what underlies those applications are all threats like disinformation campaigns or cyberattacks that we’ve seen before,” Schneidman said. “We should be focusing on mitigation strategies that we know are responsive to those threats that are amplified, as opposed to spending too much time trying to anticipate every use case of the technology.”

Getting the public up to speed on the rapidly evolving technology may be a crucial first step toward countering its misuse.

“The best way to become AI literate is to spend half an hour or an hour playing with the chatbots,” said Bueno de Mesquita.

People who said they were more familiar with AI tools in the UChicago Harris/AP-NORC survey were also more likely to say that using the technology could increase the spread of false information, indicating that knowledge of the technology’s potential benefits can also increase awareness of its drawbacks.

“I think the good news is that we have strategies both old and new to really bring to the fore here,” Schneidman said.

Even with investment in detection tools, she said, detection technology may struggle to keep pace with AI's increasing sophistication. As an alternative, she said “pre-bunking” by election officials can help educate the public before they encounter AI-generated content.

Schneidman said she also hopes election officials will make greater use of digital signatures, so the public and the media can tell which information comes from a verified source and which may be phony. Candidates could likewise attach these signatures to the images and videos they publish, she said, as a safeguard against deepfakes.

“Digital signatures are the proactive version of getting ahead of some of the challenges that synthetic content could pose to the caliber of the election information ecosystem,” she said.
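The sign-then-verify flow behind that idea can be sketched in a few lines. The function names below are illustrative, and the sketch substitutes an HMAC for a true digital signature, since Python's standard library has no public-key signing; real media-provenance standards such as C2PA attach public-key signatures backed by certificates, so that anyone can verify without holding a secret key. The flow, however, is the same: the publisher computes a tag over the file's bytes, and any later alteration of those bytes invalidates the tag.

```python
import hashlib
import hmac
import os

def sign_media(media_bytes: bytes, key: bytes) -> str:
    """Produce a tag binding the content to the signer's key."""
    return hmac.new(key, media_bytes, hashlib.sha256).hexdigest()

def verify_media(media_bytes: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag; any change to the bytes makes it fail."""
    expected = hmac.new(key, media_bytes, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

# A hypothetical election office signs a photo before publishing it...
office_key = os.urandom(32)
original = b"official sample ballot image bytes"
tag = sign_media(original, office_key)

# ...and a verifier can later confirm the bytes are unaltered.
print(verify_media(original, office_key, tag))            # authentic
print(verify_media(b"altered deepfake bytes", office_key, tag))  # tampered
```

The asymmetric version simply swaps the shared key for a private signing key and a public verification key, which is what lets news organizations and voters check content they did not create.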

She said election officials, political leaders, and journalists can put authoritative information in front of the public to prevent confusion about when and how to vote and to guard against voter suppression. There is also precedent for anticipating election-meddling narratives, she added, which benefits those combating AI-generated misinformation.

“The benefits of pre-bunking include the ability to create powerful counter-messaging that foresees recurrent misinformation narratives and, ideally, get that in front of voters’ eyes well in advance of the election, ensuring that message is consistently landing with voters so that they are getting the authoritative information that they need,” stated Schneidman.



Clearcover Collaborates with Ada to Launch Customer-Facing AI Solution


Clearcover, a next-generation auto insurance provider, has announced the debut of a customer-facing generative AI solution built with Ada, an AI-native customer service automation startup.

Ada's “AI Agent”* for customer service automation enhances and streamlines Clearcover's customer advocate workflow.

The new solution is accessible to customers around the clock through a conversational interface on Clearcover's website and mobile app. It drastically cuts wait times and delivers prompt, accurate, and considerate answers to even the most complicated questions.

More than 35% of Clearcover customer chat queries were answered automatically in the first month after the program's debut for policyholders.

“Ada helps to make our clients’ expectations of the greatest digital customer experiences in the insurance industry a reality. Ada’s technology combined with our API-first custom policy administration system powers next-level customer experience, lowers operating costs, and boosts overall efficiency,” stated Adam Fischer, Chief Product and Innovation Officer at Clearcover.

In a 2023 HubSpot analysis, 78% of customer care representatives said AI lets them focus more of their time on the most important aspects of their jobs.

Unlike chatbots, “AI Agents”* for customer support are designed to reason through issues, learn from interactions, and make judgments; they are active instruments that solicit feedback. “These intelligent agents are proactive partners who can comprehend our needs and assist us in making the best decisions,” stated Ada Chief Product and Technical Officer Mike Gozzo.

The solution offers several action-oriented features by integrating directly with Clearcover's internal systems, knowledge bases, policies, and standards. These include gathering pertinent details from the customer to make escalating a query to the right Clearcover employee more efficient, and pulling information from Clearcover's proprietary Policy Administration System to answer questions about policies and coverage.

Through performance reviews, human direction, and feedback, Ada’s “AI Agent”* for customer support is intended to develop and mature alongside Clearcover.

Clearcover's insurance agent partners can also use the feature through its Agent Portal, letting them quickly and intelligently answer frequently asked questions by pulling relevant information from the company's knowledge base.

Clearcover unveiled two more proprietary generative AI solutions last month: a tool that helps adjusters analyze files and draft correspondence, and another that serves customers and their representatives by fully digitizing statement collection at first notice of loss (FNOL).



LG Unveils the LG QNED AI and Next-Generation OLED evo AI TVs: Specs, Cost, and More


LG Electronics has introduced the next generation of artificial intelligence (AI) televisions in the Indian market, with sizes ranging from 43 inches to 97 inches. The LG OLED 97G4 and LG QNED AI TVs are part of the new 2024 portfolio. The starting price of the new lineup for the Indian market is Rs 62,990.

Hong Ju Jeon, MD of LG Electronics India, made the following official statement: “With an advanced processor that enables outstanding audio-visual experiences across various screen sizes, the LG OLED evo AI and LG QNED AI TVs lineup takes the viewing experience to a new level.”

He continued, “We aim to further enhance our market leadership in Flat Panel TV in India with this new line-up.”

Furthermore, according to the manufacturer, the newest OLED AI TVs perform precise pixel-level image analysis, sharpening objects against their backdrops with enhanced AI upscaling capabilities.

LG OLED AI TVs utilize artificial intelligence to produce a more vivid and crisp visual experience. They also provide real-time upscaling for sub-4K OTT video.

Furthermore, the 2024 QNED AI TVs from LG represent the next advancement in LCD technology, offering vivid and bright colors on the screen.

According to the firm, the LG QNED AI TV's AI feature enhances picture quality and delivers richer, fuller audio, with virtual 9.1.2 surround sound that envelops viewers in a dome of sound for remarkable immersion.

In an apparent strategic move to maintain its dominant position in the global flat-screen market, Samsung Display said in March that it had begun construction of its new 8.6-generation IT organic light-emitting diode (OLED) production line.

According to the Yonhap news agency, the tech company plans to upgrade its current L8 line to the new A6 line for 8.6-generation OLED panels, which will target IT gadgets rather than smartphones, at its facilities in the central city of Asan.



OpenAI Integrates Google Drive with ChatGPT for Enhanced Functionality


OpenAI's AI-powered chatbot, ChatGPT, now reportedly supports Google Drive integration. Several ChatGPT Enterprise users have reported that the chatbot can link to Google Drive, according to a 9to5Google post.

ChatGPT is reportedly informing users in conversation that linking apps to the platform is now possible. Users can connect OneDrive and Google Drive accounts by selecting the “Connect Apps” option from the file attachment menu, which takes them to a second page. For now, only a select group of Enterprise users with premium subscriptions can use the feature.

Once a user has linked their Google Drive account to ChatGPT, an “Add from Google Drive” option appears in the chat box's file attachment menu. Much as uploading a file to ChatGPT opens a local folder, the new option opens a file picker for the linked Google Drive.

Although OpenAI hasn’t made an official announcement about the function, it appears that it has begun to roll it out to a small number of users, raising the possibility that it could soon be made available to everyone.

Google announced enhanced Gemini AI support for Workspace apps, such as Google Drive, during its annual I/O developers conference. OpenAI's integration of Google Drive and Microsoft's OneDrive gives ChatGPT users additional options; however, Google's Gemini integration extends much deeper into its Workspace apps. With Gemini's new side panel, Google is making the assistant more accessible and enabling smooth navigation across the Workspace ecosystem, which also includes Gmail, Docs, Sheets, Slides, and more.

