With the US presidential race well underway, and the recent controversy surrounding deepfake nude images of Taylor Swift circulating online, it comes as no surprise that Meta is getting serious about AI-generated content and is calling for an industry-wide effort to detect it.
According to the New York Times, Nick Clegg, Meta's president of global affairs, told the crowd at Davos last month that such efforts are “the most urgent task” facing tech, and promoted a set of detection standards developed by the Partnership on AI, a non-profit group funded by Meta and OpenAI, among others.
Meta's worries are not abstract. Earlier this week, the FCC issued a cease-and-desist order to Lingo Telecom, a Texas-based company that carried robocalls using an AI-generated Joe Biden voice targeting New Hampshire voters ahead of that state's primary. “The FCC’s partnership and fast action in this matter sends a clear message that law enforcement and regulatory agencies are staying vigilant and are working closely together to monitor and investigate any signs of AI being used maliciously to threaten our democratic process,” New Hampshire's attorney general said in a statement, reports Politico.
Meta is not alone in its push to detect and identify AI-generated content. In September, Alphabet introduced a policy requiring disclosure when AI-altered voices or images appear in political ads (a policy that Meta enforces across its platforms as well). These policies came after the Republican National Committee released an entirely AI-generated attack ad in April depicting an imagined future if Joe Biden is reelected president, notes PBS.
"Since at least July 2023, Russia-affiliated actors have utilized innovative methods to engage audiences in Russia and the west with inauthentic, but increasingly sophisticated, multimedia content," Microsoft stated in a November 2023 report. " As the election cycle progresses, we expect these actors’ tradecraft will improve while the underlying technology becomes more capable."
Deepfake Harassment
Of course, deepfakes are not just a political problem. As Taylor Swift's recent experience highlights, deepfakes are increasingly used for harassment—especially of women. "In cases where deepfakes are used for harassment or stalking, existing laws in this domain might provide legal recourse for victims," writes Reuters. Yet the same report cautions: "Criminal laws are also inadequate as they do not explicitly cover the creation or distribution of deepfakes, other than to the extent they would fall under the traditional categories of cybercrimes or sexual offenses." To address such gaps in the law, members of Congress introduced the bipartisan Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act, which would allow victims to sue the creators of such content.
THE VERDICT:
No doubt AI has some deeply troubling potential that needs to be monitored and regulated. That the tech industry (especially its giants) is waking up to this threat and coming together to tackle it is promising. That said, the pace has been painfully slow, and the effort comes amid government scrutiny of past enforcement lapses by platforms like Facebook, Instagram, and TikTok.