
Technology

Concerns about how AI will affect the 2024 election are growing

As the 2024 primaries approach, the rapid advancement of artificial intelligence (AI) is fueling concerns about how the technology might affect the outcome of the election.

A cutting-edge technology that can produce text, images, audio, and even deepfake videos, AI has the potential to spread misinformation in an already divisive political landscape and further erode public trust in the nation’s electoral system.

“2024 will be an AI election, much the way that 2016 or 2020 was a social media election,” said Ethan Bueno de Mesquita, interim dean at the University of Chicago Harris School of Public Policy. “We will all be learning as a society about the ways in which this is changing our politics.”

Experts have raised concerns that AI chatbots could feed voters false information when they use them to look up ballot details, election calendars, or polling locations. More sinister still, AI could be used to fabricate and spread disinformation about specific politicians or causes.

“I think it could get pretty dark,” said Lisa Bryant, chair of the Department of Political Science at California State University, Fresno, and an expert with MIT’s Election Lab.

Polls suggest the concern extends well beyond academics: many Americans worry the technology will add complication and confusion to an already divisive 2024 cycle.

Bipartisan majorities of American adults are concerned that artificial intelligence (AI) will “increase the spread of false information” in the 2024 election, according to a UChicago Harris/AP-NORC poll published in November.

According to a Morning Consult-Axios survey, the percentage of American adults who believe AI will have a negative effect on voters’ trust in candidate commercials and in election results in general has increased recently.

Almost 60% of respondents stated they believed AI-spread misinformation would influence the winner of the 2024 presidential contest.

“They are a very powerful tool for doing things like making fake videos, fake pictures, et cetera, that look extremely convincing and are extremely difficult to distinguish from reality — and that is going to be likely to be a tool in political campaigns, and already has been,” said Bueno de Mesquita, who worked on the UChicago poll.

“It’s very likely that that’s going to increase in the ‘24 election — that we’ll have fake content created by AI that’s at least by political campaigns, or at least by political action committees or other actors — that that will affect the voters’ information environment make it hard to know what’s true and false,” he said.

Over the summer, the DeSantis-aligned super PAC Never Back Down allegedly used an AI-generated rendition of former President Trump’s voice in a television advertisement.

Just before the third Republican presidential debate, the former president’s campaign released a video clip in which his rivals appear to introduce themselves using Trump’s favorite nicknames for them, seemingly mimicking the voices of those fellow Republicans.

Additionally, the Trump campaign published a modified version of a report that Garrett Haake of NBC News provided prior to the third GOP debate earlier this month. Haake’s report opens the clip unaltered, but then a voiceover criticizes the former president’s Republican opponents.

“The danger is there, and I think it’s almost unimaginable that we won’t have deepfake videos or whatever as part of our politics going forward,” Bueno de Mesquita said.

Politicians’ use of AI in particular has pushed tech companies and policymakers to think about regulating the technology.

Google announced earlier this year that verified election advertisers would have to “prominently disclose” when their advertisements have been digitally created or altered.

Meta likewise plans to require disclosure when a political advertisement uses a “photorealistic image or video, or realistic-sounding audio” that was created or altered to, among other things, depict a real person saying or doing something they did not do.

In October, President Biden signed an executive order on artificial intelligence that included plans for the Commerce Department to develop guidelines for content authentication and watermarking, as well as new safety standards.

“President Biden believes that we have an obligation to harness the power of AI for good, while protecting people from its potentially profound risks,” a senior administration official said at the time.

Lawmakers, however, have largely been left scrambling to regulate the sector as it continues to push out new innovations.

As part of her campaign, Shamaine Daniels, a Democratic candidate for Congress from Pennsylvania, is using an AI-powered voice tool developed by startup Civox for phone banking.

“I share everyone’s grave concerns about the possible nefarious uses of AI in politics and elsewhere. But we need to also understand and embrace the opportunities this technology represents,” Daniels said when she announced her campaign would roll out the tech.

Experts note that AI also has constructive applications in election cycles, such as assisting election officials in cleaning voter rolls of duplicate registrations and helping voters identify which candidates they might support on particular issues.

However, they also caution that the technology could worsen problems that surfaced in the 2016 and 2020 cycles.

According to Bryant, AI could enable misinformation to “micro-target” people even more precisely than social media already does. No one is immune from this, she said, pointing to how advertisements on platforms like Instagram already shape behavior.

“It really has helped to take this misinformation and really pinpoint what kinds of messages, based on past online behavior, really resonate and work with individuals,” she said.

Bueno de Mesquita said he is less worried about micro-targeted voter manipulation campaigns, since evidence suggests social media targeting has not been effective at swaying elections. Resources, he said, would be better spent educating the public about the “information environment” and pointing them toward reliable sources of information.

Nicole Schneidman, a technology policy advocate at the nonprofit watchdog group Protect Democracy, said that rather than expecting AI to produce “novel threats” in the 2024 election, the group anticipates the technology will accelerate trends that are already undermining democracy and election integrity.

She cautioned against overestimating how much AI itself, within the larger misinformation landscape, could sway the outcome of the election.

“Certainly, the technology could be used in creative and novel ways, but what underlies those applications are all threats like disinformation campaigns or cyberattacks that we’ve seen before,” Schneidman said. “We should be focusing on mitigation strategies that we know that are responsive to those threats that are amplified, as opposed to spending too much time trying to anticipate every use case of the technology.”

Getting people acquainted with the rapidly evolving technology may be a crucial first step toward staying ahead of it.

“The best way to become AI literate myself is to spend half an hour or an hour playing with the chatbots,” said Bueno de Mesquita.

In the UChicago Harris/AP-NORC survey, people who said they were more familiar with AI tools were also more likely to say the technology could increase the spread of false information, suggesting that familiarity with what the technology can do also heightens awareness of its risks.

“I think the good news is that we have strategies both old and new to really bring to the fore here,” Schneidman said.

Even with investment in detection tools, she said, the technology may struggle to keep up with increasingly sophisticated AI. As an alternative, she said “pre-bunking” by election officials can help educate the public before they ever encounter AI-generated content.

Schneidman said she also hopes election officials will make greater use of digital signatures, so the public and the press can tell which information comes from an authoritative source and which is phony. To prepare for deepfakes, she said, candidates could likewise attach these signatures to the images and videos they release.

“Digital signatures are the proactive version of getting ahead of some of the challenges that synthetic content could pose to the caliber of the election information ecosystem,” she said.
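
To make the idea concrete, the sketch below shows the basic mechanics behind such signatures using the Python cryptography library: a publisher signs a hash of a media file with a private key, and anyone holding the matching public key can check that the file is unaltered and really came from that publisher. The file names and key handling are illustrative assumptions, not a description of any specific election-office or campaign tooling.

```python
# Minimal sketch of content signing: a publisher signs the hash of a media file
# with a private key; a verifier checks the signature with the public key.
# Illustrative only; real provenance schemes layer certificates and metadata
# on top of this basic sign-and-verify step.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature
import hashlib

# Publisher side: generate a key pair and sign the file's digest.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

video_bytes = b"...raw bytes of the campaign video..."  # placeholder content
digest = hashlib.sha256(video_bytes).digest()
signature = private_key.sign(digest)

# Verifier side (newsroom, platform, voter tool): recompute the digest and
# check the signature against the publisher's known public key.
def is_authentic(data: bytes, sig: bytes) -> bool:
    try:
        public_key.verify(sig, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

print(is_authentic(video_bytes, signature))            # True: untampered
print(is_authentic(video_bytes + b"edit", signature))  # False: altered content
```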

Election officials, political leaders, and journalists, she said, can get authoritative information in front of voters so that people are not confused about when and how to vote, guarding against voter suppression. And because many election-meddling narratives have appeared before, she added, those combating AI-generated disinformation have precedent to draw on.

“The benefits of pre-bunking include the ability to create powerful counter-messaging that foresees recurrent misinformation narratives and, ideally, get that in front of voters’ eyes well in advance of the election, ensuring that message is consistently landing with voters so that they are getting the authoritative information that they need,” stated Schneidman.

Technology

CleverTap Announces the Debut of Clever.AI

CleverTap, one of the leading all-in-one customer engagement and retention platforms, today launched Clever.AI, its AI engine. Through Clever.AI, CleverTap aims to give brands the next generation of AI capabilities needed to develop a human-like understanding of their customers and deliver personalized experiences that increase customer lifetime value.

Clever.AI rests on three main pillars: predictive, generative, and prescriptive AI. Together, these pillars work to transform consumer engagement strategies and create more intelligent and effective customer interactions.

Clever.AI Gives Brands the Ability to Become:

Perceptive: Equipped with predictive AI capabilities, Clever.AI forecasts precise business outcomes, helping brands anticipate consumer demand. Its insights are powered by TesseractDB™, a proprietary CleverTap technology that preserves data granularity over an extended lookback period, improving prediction accuracy and enabling brands to make well-informed decisions that boost marketing ROI.

Empathetic: Advancing generative AI, Clever.AI blends creativity and emotional intelligence to create content that speaks to people on a human level. With that empathy, brands can lift conversion rates and deliver hyper-personalized customer experiences.

Actionable: By utilizing Prescriptive AI capabilities, it helps brands instantly determine the best engagement strategies to maximize conversions throughout the customer journey.

Peter Takacs, Digital Product Manager at Burger King, gave it a 10 for usability and its wide range of potential applications. “Our marketing campaigns were improved by our ability to quickly and easily experiment with different options before settling on the best one,” he said, adding that it ushers in a new age of ongoing experimentation.

Anand Jain, co-founder and Chief Product Officer of CleverTap, said: “We’re excited to introduce Clever.AI. It is proof of our commitment over the past few years to setting the standard for early adoption of cutting-edge technology to revolutionize customer engagement. Clever.AI will continue to drive innovation in CleverTap’s all-in-one engagement platform, sharpening its predictive precision and strengthening its capacity to recommend intelligent customer experiences through deeper persona profiling and advanced product analytics. This enables brands to create outcome-driven campaigns that are highly personalized for every customer interaction.”

Brands using Clever.AI have already seen higher conversions and noticeably greater operational efficiency: a 3x improvement in click-through rates (CTRs), a 36% increase in conversion rates, and a 35% gain in operational efficiency, along with lifts in other metrics such as purchases and average order values (AOVs). Clever.AI also streamlined content creation, experimentation at scale, and campaign roll-outs. Prominent companies such as TouchnGo, Swiggy, and Burger King have benefited from these efficiency gains in their campaigns.

At its Spring Release ’24 event, which takes place from May 6–9, CleverTap will present its new AI capabilities through a series of stimulating sessions on how AI can improve the intelligence, effectiveness, and engagement of campaigns for brands.

Technology

Oracle Introduces Database 23ai, Adding Artificial Intelligence to Enterprise Data

Oracle has released Oracle Database 23ai, a new version of its database technology that incorporates artificial intelligence. The release, now available as a suite of cloud services, focuses on streamlining application development, supporting mission-critical workloads, and simplifying the use of AI.

One of its primary features, Oracle AI Vector Search, simplifies data search by letting users look up documents, photos, and relational data using conceptual content rather than precise keywords or data values.

By enabling natural language queries on confidential business information stored in Oracle databases, AI Vector Search removes the need to move or duplicate data for AI processing. Integrating AI with the database in real time improves operational effectiveness, security, and efficiency.
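
As a rough illustration of the retrieval idea behind vector search, the sketch below embeds documents and a query as vectors and returns the closest matches by cosine similarity. It is a conceptual stand-in only: the toy embed() function, document list, and scoring are assumptions for the example, not Oracle’s AI Vector Search API.

```python
# Conceptual sketch of vector (semantic) search: documents and a query are
# mapped to embedding vectors, and the closest vectors are returned. The
# embed() function is a toy stand-in for a real embedding model; Oracle
# Database 23ai exposes this style of lookup through AI Vector Search rather
# than hand-rolled NumPy code.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash words into a small fixed-size vector."""
    vec = np.zeros(64)
    for word in text.lower().split():
        vec[hash(word) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

documents = [
    "Invoice for office chairs and desks",
    "Quarterly revenue report for the sales team",
    "Photo captions from the company retreat",
]
doc_vectors = np.stack([embed(d) for d in documents])

def search(query: str, k: int = 2):
    q = embed(query)
    scores = doc_vectors @ q            # cosine similarity (vectors are normalized)
    top = np.argsort(scores)[::-1][:k]  # indices of the k most similar documents
    return [(documents[i], float(scores[i])) for i in top]

print(search("furniture purchase receipt"))
```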

Oracle Database 23ai is accessible via Oracle Cloud Infrastructure (OCI) on Oracle Database@Azure, Oracle Exadata Database Service, Oracle Exadata Cloud@Customer, and Oracle Base Database Service.

Oracle’s Executive Vice President of Mission-Critical Database Technologies, Juan Loaiza, emphasized the importance of Oracle Database 23ai and called it a revolutionary tool for multinational corporations.

“Building intelligent apps, increasing developer productivity, and managing mission-critical workloads is made simple for developers and data professionals by AI Vector Search in conjunction with new unified development paradigms and mission-critical capabilities,” he said.

Oracle Database 23ai introduces three major improvements: AI Vector Search for semantic search, OCI GoldenGate 23ai for real-time data replication across heterogeneous stores, and Oracle Exadata System Software 24ai for accelerated AI processing. Support for JSON and graph data models, along with mission-critical security and availability, helps developers build intelligent applications.

With Oracle’s continued development of AI-integrated databases, customers can expect stronger data security, faster enterprise application innovation, and greater operational efficiency. Oracle Database 23ai marks a substantial advance in AI-driven database systems and promises a solid foundation for companies embracing AI technologies.

Technology

Google Introduces Gemini AI on Android Devices for Singapore Users

Singapore is among the main beneficiaries of Google’s Gemini mobile app, which brings enhanced AI capabilities to Android smartphones. With Gemini now supporting more languages and regions, the rollout is part of Google’s broader strategy to make its advanced AI available to a global audience.

Android users in Singapore can now download the Gemini app directly or access it through Google Assistant. The app works on phones running Android 12 or later with at least 4 GB of RAM. On iOS devices running iOS 16 or later, users can interact with Gemini through a dedicated tab in the Google app.

Gemini’s flexible, intuitive design lets users get help by speaking, typing, or uploading an image. A user could, for example, take a picture of a flat tire and receive detailed instructions on how to fix it, or ask for help writing a thank-you note, illustrating Google’s goal of building a truly conversational, multimodal AI assistant.
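
For developers, the same kind of multimodal (image plus text) prompt can be issued through Google’s Gemini API. The sketch below uses the google-generativeai Python SDK to show the pattern; the model name, file name, and API-key handling are assumptions for illustration, and this is the developer API rather than the consumer app described above.

```python
# Illustrative sketch of a multimodal prompt (photo + question) sent through
# Google's developer SDK. Model name and key handling are assumptions.
import os
import google.generativeai as genai
from PIL import Image

genai.configure(api_key=os.environ["GOOGLE_API_KEY"])  # assumes a key is set

model = genai.GenerativeModel("gemini-1.5-flash")       # assumed model name
flat_tire_photo = Image.open("flat_tire.jpg")           # photo taken by the user

response = model.generate_content(
    [flat_tire_photo, "What happened here, and how do I fix it step by step?"]
)
print(response.text)
```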

Google is incorporating Gemini more thoroughly into its ecosystem in addition to the stand-alone app. With the help of new extensions, the AI can now effortlessly search through a wide range of Google services, including YouTube, Gmail, Docs, Drive, Maps, and even Google Flights and Hotels, to offer thorough support. Gemini’s ability to combine travel dates, lodging, and activities into a single itinerary based on user emails and preferences makes it an especially helpful tool for complicated tasks like organizing travel plans.

Google is also making Gemini easier to use on the desktop. Users can start a query straight from the Chrome browser’s address bar by typing “@gemini” with their question; gemini.google.com launches quickly and shows the answer right away, further extending Gemini’s AI capabilities across platforms.

Google’s latest developments improve the daily digital experience for users in Singapore, and potentially worldwide, while pushing for broader access to AI tools.
