Senator Asks Big Banks How They’re Going to Stop AI Cloned Voices From Breaking Into Accounts

The chairman of the Senate committee that provides oversight of the banking sector has sent letters to the CEOs of the country’s biggest banks asking what they plan to do about the looming threat of fake voices created with artificial intelligence being used to break into customers’ accounts.

The move comes after Motherboard used an AI-powered system to clone a reporter’s voice, and then used that clone to fool a bank’s voice authentication security system. That investigation showed that just a few minutes of a target’s voice audio was enough to generate a clone convincing enough to break into a bank account, potentially putting the public, and especially those with a public presence such as politicians, journalists, podcast hosts, and streamers, at risk of such attacks.

“In recent years, financial institutions have promoted voice authentication as a secure tool that makes customer authentication faster and safer. Customers have used voice authentication tools to gain access to their accounts. According to news reports, however, voice authentication may not be foolproof, and it highlights several concerns,” Senator Sherrod Brown, chairman of the U.S. Senate Committee on Banking, Housing, and Urban Affairs, wrote in the letters.

Brown sent the letters to the CEOs of JP Morgan Chase & Co., Bank of America, Wells Fargo, Morgan Stanley, Charles Schwab, and TD Bank.

“We seek to better understand what measures financial institutions are taking to ensure the security of the voice authentication tools and the steps they are taking to ensure strong data privacy for voice data. Like a fingerprint, face id, or retinal scan, voice data is among the most intimate types of data that can be collected about a person. Consumers deserve to understand how their voice data is being collected, stored, used, and retained,” Brown continues.

The letter points specifically to Motherboard’s earlier investigation. For that February article, Motherboard used a voice cloning service from an AI startup called ElevenLabs. At the time of the test, Motherboard was able to generate the voice for free. Motherboard uploaded about five minutes of audio to the service, which then provided the ready-to-use synthetic voice a short while later.

Do you know anything else about bank voice ID, or how AI voices are being abused? We’d love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, or email joseph.cox@vice.com.

ElevenLabs has already been tied to multiple cases of real-world abuse. Members of 4chan used the service to make synthetic versions of celebrities’ voices, including one that sounded like Emma Watson, which the users made read Mein Kampf. A group of trolls then doxed specific voice actors and used synthetic voices as part of the harassment campaign (the attackers claimed ElevenLabs’ tool was used, but ElevenLabs told Motherboard at the time that only one clip, which did not include the targets’ addresses, was made with its software).

Motherboard tested the cloned voice on the authentication system of Lloyds Bank in the UK. Many banks in the U.S. use similar systems, such as TD Bank’s “VoicePrint” and Chase’s “Voice ID.” At the time, TD Bank, Chase, and Wells Fargo did not respond to a request for comment. In September, lawyers filed suit against a group of U.S. financial institutions because they believe the biometric voice prints used to identify callers violate California law.

In his letter to the banks, Brown asks each to describe their use of voice authentication services, including whether they are using third-party provided tools; how frequently customers use voice authentication; how the banks respond to breaches due to flaws in voice authentication; and where customer voice data is stored. Brown gave the banks until May 18 to respond.

As for the broader threat AI voice cloning poses to the public, Brown adds, “Worryingly, the prevalence of video clips publicly available on Instagram, TikTok, and YouTube have made it easier than ever for bad actors to replicate the voices of other people.”