Settlement shows AI companies can face consequences for pirated training data.
Credit: Bespalyi | iStock / Getty Images Plus
Authors revealed today that Anthropic agreed to pay $1.5 billion and destroy all copies of the books the AI company pirated to train its artificial intelligence models.
In a press release provided to Ars, the authors confirmed that the settlement is "believed to be the largest publicly reported recovery in the history of US copyright litigation." The settlement covers 500,000 works that Anthropic pirated for AI training; if a court approves it, each author will receive $3,000 per work that Anthropic stole. "Depending on the number of claims submitted, the final figure per work could be higher," the press release noted.
Anthropic has already agreed to the settlement terms, but a court must approve them before the settlement is finalized. Preliminary approval may be granted this week, while the ultimate decision may be delayed until 2026, the press release noted.
Justin Nelson, a lawyer representing the three authors who initially sued to spark the class action—Andrea Bartz, Kirk Wallace Johnson, and Charles Graeber—confirmed that if the "first of its kind" settlement "in the AI era" is approved, the payouts will "far" surpass "any other known copyright recovery."
"It will provide meaningful compensation for each class work and sets a precedent requiring AI companies to pay copyright owners," Nelson said. "This settlement sends a powerful message to AI companies and creators alike that taking copyrighted works from these pirate websites is wrong."
Groups representing authors celebrated the settlement on Friday. The CEO of the Authors Guild, Mary Rasenberger, said it was "an excellent result for authors, publishers, and rightsholders generally." Perhaps most critically, the settlement shows "there are serious consequences when" companies "pirate authors’ works to train their AI, robbing those least able to afford it," Rasenberger said.
Maria Pallante, president and CEO of the Association of American Publishers, agreed the settlement was "beneficial" to stakeholders "beyond the monetary terms."
"The proposed settlement provides enormous value in sending the message that Artificial Intelligence companies cannot unlawfully acquire content from shadow libraries or other pirate sources as the building blocks for their models," Pallante said.
Notably, the settlement allows authors to retain rights and legal claims for any works not covered by the lawsuit. It also does not release any past or future claims over Anthropic's potentially infringing outputs.
In the coming weeks, if the settlement is preliminarily approved, authors will be able to search this website to confirm if their works were part of the class action and are therefore eligible for a payout. Any author seeking compensation will then be able to provide contact information to receive notifications as the settlement is finalized. In the meantime, the Authors Guild provided a thorough breakdown of how the settlement will work, including information for authors who are wondering if their works are included in the class.
Today, Anthropic likely breathes a sigh of relief to avoid the costs of extended litigation and potentially paying more for pirating books. However, the rest of the AI industry is likely horrified by the settlement, which advocates had suggested could set an alarming precedent that could financially ruin emerging AI companies like Anthropic.
Ars could not immediately reach Anthropic for comment. But Aparna Sridhar, Anthropic’s deputy general counsel, provided a statement to Ars, emphasizing that the court found "Anthropic’s approach to training AI models constitutes fair use."
“Today’s settlement, if approved, will resolve the plaintiffs’ remaining legacy claims," Sridhar said. "We remain committed to developing safe AI systems that help people and organizations extend their capabilities, advance scientific discovery and solve complex problems."