When the voice on the other end of the phone isn't real: FCC bans robocalls made by AI
Betty Lin-Fisher, USA TODAY
Phone calls made using artificial intelligence-generated voices are illegal after a unanimous vote Thursday by the Federal Communications Commission.
The ruling prohibits a growing number of calls, including one in January that used an artificially generated imitation of President Joe Biden's voice to encourage New Hampshire voters to skip the primary. That robocall is being probed by the New Hampshire Attorney General's Office as an attempt at voter suppression.
The unanimous decision Thursday recognizes the calls made with AI-generated voices as "artificial" under the Telephone Consumer Protection Act, the agency said.
“Bad actors are using AI-generated voices in unsolicited robocalls to extort vulnerable family members, imitate celebrities, and misinform voters. We’re putting the fraudsters behind these robocalls on notice,” said FCC Chairwoman Jessica Rosenworcel in a press release. “State Attorneys General will now have new tools to crack down on these scams and ensure the public is protected from fraud and misinformation.”
What is happening with AI-generated calls?
The FCC said these types of calls have escalated during the last few years, as the technology can confuse consumers with misinformation by imitating the voices of celebrities, political candidates, and close family members. The action by the FCC makes the act itself of using AI to generate the voice in robocalls illegal, "expanding the legal avenues through which state law enforcement agencies can hold these perpetrators accountable under the law," the agency said.
The initiators of such calls have also been accused of other crimes. New Hampshire Secretary of State David Scanlan said the fake Biden robocall was a form of voter suppression that cannot be tolerated, according to the Associated Press. New Hampshire Attorney General John Formella said Tuesday that investigators had identified Texas-based Life Corp. and its owner, Walter Monk, as the source of the calls, which were made to thousands of New Hampshire residents. The state issued a cease-and-desist order and subpoena to Life Corp. and Texas-based Lingo Telecom, which Formella said transmitted the calls.
Lingo Telecom told the Associated Press in a statement that it had no involvement in the production of the call content. A man who answered the business line for Life Corp. declined to comment to the AP on Thursday.
Reaction to the decision
Consumer advocacy group Public Citizen praised the decision but said it didn't go far enough.
“Thank you, FCC, for today’s desperately needed rule outlawing AI voice-generated robocalls," said Robert Weissman, president of Public Citizen, in a statement. "This rule will meaningfully protect consumers from rapidly spreading AI scams and deception. Every agency should follow suit and apply the tools and laws at their disposal to regulate AI.”
But in a follow-up email, Weissman said the terms of the underlying statute have limitations: election-related callers and nonprofits, for example, can still make AI calls, but only to landlines.
Jonathan S. Uriarte, director of strategic communications for Rosenworcel's office, said AI-generated calls to landlines would still be allowed if they meet certain criteria. But the new Commission rules apply to any nonemergency call made to a wireless phone using an auto-dialer or a prerecorded or artificial voice, whether commercial or not. Calls made to emergency lines or hospital and health care facility phones are also banned, he said.
The Commission's rules also provide guardrails for noncommercial calls or calls from nonprofits using artificial or prerecorded voice to residential lines by limiting them to no more than three calls within a consecutive 30-day period, Uriarte said. Callers must also honor opt-out requests for future calls.
The FCC ban is the first step, but Congress needs to step up to combat AI-generated fakes, said Rep. Yvette Clarke, D-N.Y.
“We all know how destructive robocalls can be, and this decision, as amazing as it is, won’t stop bad actors from trying to scam everyday Americans or eliminate their attempts to undermine our elections," Clarke said in a statement. "So, the next step is for Congress to act – and fast. I believe Democrats and Republicans can agree that AI-generated content used to deceive people is a bad thing, and we need to work together to help folks have the tools necessary to help discern what’s real and what isn’t.”
Two bills that would regulate the use of AI-generated content in political campaigns were introduced in Congress in 2023 but have languished. One, introduced by Clarke in May, would expand disclosure requirements for campaign ads to include whether AI was used to generate videos or images. The other, introduced in September and led by Sen. Amy Klobuchar, D-Minn., would forbid the use of AI in political advertising.
AI fakes aren't limited to phone calls. Earlier this week, an independent body that reviews Meta’s content moderation decisions urged the tech giant to overhaul its policy on manipulated videos to encompass fake or distorted clips that can mislead voters and tamper with elections.
The Oversight Board upheld a Meta decision last May to keep a doctored Biden video online but asked Meta to crack down on all doctored content in the future.
Betty Lin-Fisher is a consumer reporter for USA TODAY. Reach her at blinfisher@USATODAY.com or follow her on X, Facebook, or Instagram @blinfisher. Sign up for our free The Daily Money newsletter, which will include consumer news on Fridays, here.