Spurred by the growing threat of deepfakes, the FTC is seeking to modify an existing rule that bans the impersonation of businesses or government agencies to cover all consumers.
The revised rule, depending on the final language and the public comments the FTC receives, could also make it illegal for a GenAI platform to provide goods or services that it knows or has reason to know are being used to harm consumers through impersonation.
“Scammers are using artificial intelligence tools to impersonate people with disturbing accuracy and on a much broader scale,” FTC Chair Lina Khan said in a news release. “With the rise of voice cloning and other AI-powered scams, protecting Americans from impersonator fraud is more critical than ever. Our proposed expansions to the final impersonation rule would do just that: strengthen the FTC’s toolkit to address AI-enabled scams impersonating individuals.”
It's not just people like Taylor Swift who have to worry about deepfakes. Online romance scams involving deepfakes are on the rise, and scammers are impersonating employees to extract cash from corporations.
In a recent YouGov survey, 85% of Americans said they were very or somewhat concerned about the spread of misleading video and audio deepfakes. A separate survey from The Associated Press-NORC Center for Public Affairs Research found that nearly 60% of adults believe artificial intelligence tools will increase the spread of false and misleading information during the 2024 US election cycle.
Last week, my colleague Devin Coldewey covered the FCC's move to outlaw AI voice robocalls by reinterpreting an existing rule that prohibits artificial and prerecorded message spam. Timely in light of a phone campaign that used a deepfaked President Biden to discourage New Hampshire residents from voting, the rule change (like today's action by the FTC) represents the current extent of the federal government's fight against deepfakes and deepfake technology.
No federal law squarely prohibits deepfakes. High-profile victims, such as celebrities, can theoretically turn to more traditional legal remedies to defend themselves, including copyright law, likeness rights, and torts (e.g., invasion of privacy, intentional infliction of emotional distress). But litigating under these patchwork laws can be time-consuming and laborious.
In the absence of congressional action, 10 states across the country have enacted statutes criminalizing deepfakes, albeit mostly nonconsensual pornography. We will no doubt see those laws amended to cover a broader range of deepfakes (and more laws passed at the state level) as deepfake-generating tools grow increasingly sophisticated. (Minnesota's law, for example, already targets deepfakes used in political campaigns.)