Lawmakers passing the buck on non-consensual sexually explicit deepfakes are running out of time
AOC wants Washington to take this personally
Lessons from a politician’s experience
Imagine casually scrolling social media and seeing a video of yourself in a location you’ve never been, saying something you never said, engaged in an act you never did. Now, imagine also being naked in that video, which everyone else saw too.
That’s precisely what happened to Alexandria Ocasio-Cortez, who shared the experience with Rolling Stone back in April. AOC discovered her likeness used in a deepfake: an AI-generated video that grafted her face onto a body performing sexually explicit acts. The U.S. representative found it extremely traumatizing:
“And once you’ve seen it, you’ve seen it. It parallels the same exact intention of physical rape and sexual assault, [which] is about power, domination and humiliation. Deepfakes are absolutely a way of digitising violent humiliation against other people.”
How big is the problem now?
Deepfakes have become increasingly popular thanks to user-friendly AI generators and the proliferation of websites that host them. The vast majority are sexually explicit—and feature non-consenting women. According to a 2019 study from Deeptrace Labs, 96% of all deepfakes are sexually explicit, and 99% of those videos target women.
The market is growing at a rapid clip. Currently, the deepfake market is worth $564 million and is estimated to swell to $5 billion by 2030 due to improved algorithms, easier software, and more convincing results. Many sexually explicit deepfakes are of celebrities with plenty of available source material. The phenomenon has spread to high schools, where teens have reported feeling extreme anxiety, depression and even suicidal ideation from having these videos of themselves in circulation.
What is being done about it? The DEFIANCE Act
To combat this growing threat, AOC is working to pass a bipartisan bill, the Disrupt Explicit Forged Images and Non-Consensual Edits Act of 2024 (DEFIANCE Act), which would hold those who produce, direct viewers to, or host this content legally accountable and make it possible for victims to file civil lawsuits against them.
When the DEFIANCE Act was presented to American lawmakers on June 12, Majority Whip Dick Durbin, the Democratic chair of the Senate Judiciary Committee, explained its purpose:
“Once this bill is signed into law, victims finally will have the ability to hold civilly liable those who produce, disclose, solicit, or possess sexually-explicit deepfakes, while knowingly or recklessly disregarding that the person depicted did not consent to the conduct.”
Surprisingly, the bill was blocked by Senator Cynthia Lummis, a Republican who argued that its broad language casts too wide a net of liability onto tech companies acting as third-party platforms that make the content accessible:
“This bill could lead to unintended consequences that stifle American technological innovation and development by extending liability to third party platforms that may unknowingly host this illicit content.
“I worry this bill places an untenable burden on online services to constantly police user-generated posts, even platforms.”
Many of the apps used to create deepfakes are sold by Apple and Google. Nearly half of all traffic comes from Google searches directing users to cloud websites hosted by Amazon and Microsoft, or to minimally moderated social media like X.
Although the DEFIANCE Act was expected to pass by unanimous consent, there is a competing interest in protecting big tech from being held liable, whether or not those companies knowingly contribute to the prevalence of sexually explicit deepfakes.
Here’s my GRAIN of thought
The obstacles to passing the DEFIANCE Act offer a window into the larger debate surrounding artificial intelligence: Who is liable for the content that is created online? While creator apps would be easy targets if they’re marketed as sexually explicit deepfake generators, it’s not so simple to prosecute the invisible infrastructure that enables them. As big tech companies get increasingly chummy with whichever political party ends up in power, and as their lobbies grow more influential, there will be a continued tug-of-war over who is responsible for regulating and what exactly that regulation ought to look like. These squabbles can tie up the process for years, while the proliferation of this technology races steadily forward, steamrolling over a growing number of victims.