
Meta has filed a motion to dismiss a lawsuit that claims the tech giant violated copyright laws by downloading thousands of adult films to use in AI training.
The lawsuit was filed in July by adult film companies Strike 3 Holdings and Counterlife Media, which say they discovered nearly 3,000 instances of Meta downloading their copyrighted videos through hidden IP addresses. The suit alleges the downloads were used to train Meta’s Movie Gen, Llama, and other video-based AI models. Strike 3 asked for $359 million in damages and a permanent ban on Meta’s use of its materials.
Meta has now responded, arguing that Strike 3 Holdings’ allegations are “nonsensical and unsupported” — an attempt to extort Meta through erroneous copyright claims — and that Strike 3 has no proof the videos were ever used to train AI.
Insisting it was unaware of any illegal downloads, Meta says the torrenting began in 2018, before it started researching multimodal models and generative video. The flagged videos, which were accessed only intermittently, must therefore have been downloaded for personal viewing, Meta asserts, not to train its systems.
In the motion, Meta argues that Strike 3 failed to provide any evidence for its claims that individuals used hidden IP addresses to download material, or that employees involved in Meta’s AI projects could be implicated. “The far more plausible inference to be drawn from such meager, uncoordinated activity is that disparate individuals downloaded adult videos for personal use,” wrote Meta. The company has asked the court to dismiss the lawsuit on those grounds.
Meta has weathered several AI-related copyright lawsuits in the past, including a joint suit by prominent authors who alleged their works were pirated to train the Llama model. It also recently overhauled its chatbot policies for teens, following an investigation that found chatbots had been allowed to engage in romantic and sensual conversations with young users and generate sexually suggestive images.
“We don’t want this type of content, and we take deliberate steps to avoid training on this kind of material,” a Meta spokesperson told Ars Technica.