The amount of AI-generated child sexual abuse material (CSAM) posted online is increasing, a report published Monday found.
The report, by the U.K.-based Internet Watch Foundation (IWF), highlights one of the darkest results of the proliferation of AI technology, which allows anyone with a computer and a little tech savvy to generate convincing deepfake videos. Deepfakes typically refer to misleading digital media created with artificial intelligence tools, such as AI models and applications that let users “face-swap” a target onto a person in a different video. Online, a subculture and marketplace has grown up around the creation of pornographic deepfakes.
In a 30-day review this spring of a dark web forum used to share CSAM, the IWF found 3,512 AI-generated CSAM images and videos, most of them realistic. The number of images was a 17% increase over the number found in a similar review conducted in fall 2023.
The review also found that a higher percentage of the material posted on the dark web now depicts more extreme or explicit sex acts than it did six months ago.
“Realism is improving. Severity is improving. It’s a trend that we wouldn’t want to see,” said Dan Sexton, the IWF’s chief technology officer.
Entirely synthetic videos still look unrealistic, Sexton said, and are not yet popular on abusers’ dark web forums, though that technology is still rapidly improving.
“We’ve yet to see realistic-looking, fully synthetic video of child sexual abuse,” Sexton said. “If the technology improves elsewhere, in the mainstream, and that flows through to illegal use, the danger is we’re going to see fully synthetic content.”
It’s currently much more common for predators to take existing CSAM depicting real people and use it to train low-rank adaptation models (LoRAs), lightweight fine-tuned AI models that can generate custom deepfakes from even a few still images or a short snippet of video.
The current reliance on old footage in creating new CSAM imagery can cause persistent harm to survivors, as it means footage of their abuse is repeatedly given fresh life.
“Some of these are victims that were abused decades ago. They’re grown-up survivors now,” Sexton said of the source material.
The rise in deepfaked abuse material highlights the struggle regulators, tech companies and law enforcement face in preventing its spread.
Last summer, seven of the largest AI companies in the U.S. signed a public pledge to abide by a handful of ethical and safety guidelines. But they have no control over the many smaller AI programs that have proliferated across the internet, often free to use.
“The content that we’ve seen has been produced, as far as we can see, with openly available, free and open-source software and openly available models,” Sexton said.
A rise in deepfaked CSAM may make it harder to track pedophiles who are trading it, said David Finkelhor, the director of the University of New Hampshire’s Crimes Against Children Research Center.
A major tactic social media platforms and law enforcement use to identify abuse imagery is automatically scanning new images against a database of known CSAM. But newly deepfaked material may elude those scans, Finkelhor said.
“Once these images have been altered, it becomes more difficult to block them,” he said. “It’s not entirely clear how courts are going to deal with this.”
The U.S. Justice Department has announced charges against at least one man accused of using artificial intelligence to create CSAM. But the technology may also make it difficult to bring the strictest charges against CSAM traffickers, said Paul Bleakley, an assistant professor of criminal justice at the University of New Haven.
U.S. law is clear that possessing CSAM imagery, regardless of whether it was created or modified with AI, is illegal, Bleakley said. But there are harsher penalties reserved for people who create CSAM, and that might be harder to prosecute if it’s done with AI, he said.
“It is still a very gray area whether or not the person who is inputting the prompt is actually creating the CSAM,” Bleakley said.
In an emailed statement, the FBI said it takes crimes against children seriously and investigates each allegation in coordination with other law enforcement agencies.
“Malicious actors use content manipulation technologies and services to exploit photos and videos — typically captured from an individual’s social media account, open internet, or requested from the victim — into sexually-themed images that appear true-to-life in likeness to a victim, then circulate them on social media, public forums, or pornographic websites,” the bureau wrote. “Many victims, which have included minors, are unaware their images were copied, manipulated, and circulated until it was brought to their attention by someone else. The photos are then sent directly to the victims by malicious actors for sextortion or harassment, or until it was self-discovered on the internet.”
In its statement, the bureau urged victims to call their FBI field office or 1-800-CALL-FBI (225-5324).
If you think you or someone you know is a victim of child exploitation, you can contact the CyberTipline at 1-800-843-5678.