Faced with a ban in the United Kingdom, X pushes flawed fix to CSAM problem.
Credit: Apu Gomes / Stringer | Getty Images News
Once again, people are taking Grok at its word, treating the chatbot as a company spokesperson without questioning what it says.
On Friday morning, many outlets reported that X had blocked universal access to Grok’s image-editing features after the chatbot began prompting some users to pay $8 to use them. The messages are seemingly in response to reporting that people are using Grok to generate thousands of non-consensual sexualized images of women and children each hour.
“Image generation and editing are currently limited to paying subscribers,” Grok tells users, dropping a link and urging, “you can subscribe to unlock these features.”
However, as The Verge pointed out and Ars verified, unsubscribed X users can still use Grok to edit images. X seems to have limited only users’ ability to request edits by replying to Grok in posts, while still allowing image edits through the desktop site. App users can access the same feature by long-pressing on any image.
Using image-editing features without publicly prompting Grok keeps outputs out of the public feed. That means the only issue X has rushed to solve is stopping Grok from directly posting harmful images on the platform.
X declined to comment on whether it’s working to close those loopholes, but it has a history of pushing janky updates since Elon Musk took over the platform formerly known as Twitter. Still, motivated X users can also continue using the standalone Grok app or website to make abusive content for free.
Like images X users can edit without publicly asking Grok, these images aren’t posted publicly to an official X account but are likely to be shared by bad actors—some of whom, according to the BBC, are already promoting allegedly Grok-generated child sexual abuse materials (CSAM) on the dark web. That’s especially concerning since Wired reported this week that users of the Grok app and website are generating far more graphic and disturbing images than what X users are creating.
X risks fines if UK rejects supposed fix
It’s unclear how charging for Grok image editing will block controversial outputs, as Grok’s problematic safety guidelines remain intact. The chatbot is still instructed to assume that users have “good intent” when requesting images of “teenage” girls, which xAI says “does not necessarily imply underage.”
That could lead to Grok continuing to post harmful images of minors. xAI’s other priorities include Grok directives to avoid moralizing users and to place “no restrictions” on “fictional adult sexual content with dark or violent themes.” An AI safety expert told Ars that Grok could be tweaked to be safer, describing the chatbot’s safety guidelines as the kind of policy a platform would design if it “wanted to look safe while still allowing a lot under the hood.”
Updates to Grok’s X responses came after the platform risked fines and legal action from regulators around the world, including a potential ban in the United Kingdom.
X seems to hope that forcing users to share identification and credit card information as paying subscribers will make them less likely to use Grok to generate illegal content. But advocates who combat image-based sexual abuse note that content like Grok’s “undressing” outputs can cause lasting psychological, financial, and reputational harm, even in jurisdictions where the content is not illegal.
That suggests that paying subscribers could continue using Grok to create harmful images that X may leave unchecked because they’re not technically illegal. In 2024, X agreed to voluntarily moderate all non-consensual intimate images, but Musk’s promotion of revealing bikini images of public and private figures suggests that’s no longer the case.
It seems likely that Grok will continue to be used to create non-consensual intimate images. So rather than solve the problem, X may at best succeed in limiting public exposure to Grok’s appalling outputs. The company may even profit from the feature, as Wired reported that Grok pushed “nudifying” or “undressing” apps into the mainstream.
So far, US regulators have been quiet about Grok’s outputs, with the Justice Department generally promising to take all forms of CSAM seriously. On Friday, Democratic senators started shifting those tides, demanding that Google and Apple remove X and Grok from app stores until it improves safeguards to block harmful outputs.
“There can be no mistake about X’s knowledge, and, at best, negligent response to these trends,” the senators wrote in a letter to Apple Chief Executive Officer Tim Cook and Google Chief Executive Officer Sundar Pichai. “Turning a blind eye to X’s egregious behavior would make a mockery of your moderation practices. Indeed, not taking action would undermine your claims in public and in court that your app stores offer a safer user experience than letting users download apps directly to their phones.”
A response to the letter is requested by January 23.
Whether the UK will accept X’s supposed solution is yet to be seen. If UK regulator Ofcom decides to move ahead with a probe into whether Musk’s chatbot violates the UK’s Online Safety Act, X could face a UK ban or fines of up to 10 percent of the company’s global turnover.
“It’s unlawful,” UK Prime Minister Keir Starmer said of Grok’s worst outputs. “We’re not going to tolerate it. I’ve asked for all options to be on the table. It’s disgusting. X need to get their act together and get this material down. We will take action on this because it’s simply not tolerable.”
At least one member of the UK Parliament, Jess Asato, told The Guardian that even if X had put up an actual paywall, that would not be enough to end the scrutiny.
“While it is a step forward to have removed the universal access to Grok’s disgusting nudifying features, this still means paying users can take images of women without their consent to sexualise and brutalise them,” Asato said. “Paying to put semen, bullet holes, or bikinis on women is still digital sexual assault, and xAI should disable the feature for good.”