An AI startup uses voice authentication to fight deepfake fraud

With the advent of widely available generative AI, scammers are increasingly using deepfakes, voice imitations created with the intention of misleading, to trick victims into disclosing personal information.

Pindrop is a voice authentication startup that says it has the technology and know-how to authenticate users and to distinguish genuine human speech from AI-generated audio. Pindrop CEO Vijay Balasubramaniyan testified before Congress on Wednesday about the dangers of deepfakes and the measures lawmakers should take against them. While many of the panelists concentrated on “superintelligence” or political disinformation, Balasubramaniyan underlined the immediate harms deepfakes are already causing.

“Fundamentally, the point was that deepfakes break commerce because businesses can’t trust who’s on the other end. Is it a machine or a person?” Balasubramaniyan told the Washington Examiner. “Deepfakes distort information because it’s impossible to determine if something was said by Sen. [Chuck] Schumer or by Tom Hanks promoting dental plans. And then deepfakes destroy all contact because, as a grandmother, I have no idea who to trust if I don’t know if it’s my grandkids.”

Balasubramaniyan founded Pindrop in 2011 with Mustaque Ahamad and Paul Judge to turn his Ph.D. thesis into a workable product. Balasubramaniyan completed his graduate studies at the Georgia Institute of Technology, focusing on characteristics of voice calls that could be used to confirm a user’s identity. He identified a number of auditory traits that can be combined into a voice “fingerprint” used to judge a call’s authenticity. The system can detect, for instance, whether a call is coming from a specific user’s phone, or pick out aspects of the sound that are unique to the “shape of your entire vocal tract,” according to the CEO.
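To make the voice-fingerprint idea concrete, here is a minimal sketch of how a system might reduce a recording to a fixed-length feature vector and compare it against an enrolled profile. This is purely illustrative: Pindrop’s actual features and models are proprietary, and the MFCC features, file names, and similarity threshold below are assumptions, not the company’s method.

```python
# Illustrative sketch only: Pindrop's real pipeline is proprietary.
# Assumptions: librosa is installed, the audio files are local WAVs, and
# averaged MFCCs stand in for a production-grade speaker embedding.
import numpy as np
import librosa

def voice_fingerprint(wav_path: str) -> np.ndarray:
    """Reduce a recording to a fixed-length feature vector."""
    audio, sr = librosa.load(wav_path, sr=16000)
    # MFCCs coarsely capture vocal-tract characteristics; real systems
    # use far richer features and learned models.
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

def similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two fingerprints (1.0 = identical)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical file names and threshold, for illustration only.
enrolled = voice_fingerprint("enrolled_user.wav")
incoming = voice_fingerprint("incoming_call.wav")
if similarity(enrolled, incoming) < 0.9:
    print("Caller does not match the enrolled voice fingerprint.")
```

A production system would also fold in signals beyond the voice itself, such as artifacts of the calling device and network, which is what allows a check like whether a call originates from a specific user’s phone.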

Although the startup is still relatively young, the company says it has partnerships with a number of large tech companies, including Google Cloud and Amazon Web Services. Pindrop would not disclose its user count, but it says it works with eight of the top ten banks and credit unions, fourteen of the top twenty insurers, and three of the top five brokerages in the US. A spokesperson told the Washington Examiner that Pindrop’s software has analyzed over 5.3 billion calls and stopped $2.3 billion in fraud losses.

Although Pindrop’s services are currently used only by for-profit businesses, Balasubramaniyan said he has discussed potential government use with multiple members of Congress.

Because AI is a rapidly evolving field, it can be challenging for AI “fact-checkers” to stay current. Pindrop says its technology has held up well so far: when Microsoft released its VALL-E speech synthesis model, Pindrop claimed its software could identify audio generated by the model 99% of the time.

Fraud based on deepfakes has been an issue for some time. The Senate Judiciary Committee held a hearing on AI and human rights over the summer, during which the technology was discussed before Congress. Jennifer DeStefano recounted to the committee how con artists nearly conned her out of $50,000 by using an artificial intelligence (AI) voice clone of her daughter.

“It wasn’t just her voice, it was her cries, it was her sobs,” DeStefano told Congress. She said she nearly gave in to the scammers’ demands before her husband assured her that their daughter was safe and with him.

Deepfakes have been directed specifically at seniors. Sen. Mike Braun (R-IN) alerted the Federal Trade Commission in May about the use of chatbots and voice clones by con artists to deceive senior citizens into believing they are speaking with a close friend or relative. “In one case, a scammer used this approach to convince an older couple that the scammer was their grandson in desperate need of money to make bail, and the couple almost lost $9,400 before a bank official alerted them to the potential fraud,” the letter stated.

Businesses have also reported being targeted by fake voices. According to a survey conducted by the ID verification service Regula, 37% of businesses worldwide said they have experienced attempts to access their systems using phony voices.
