
UK Slams X’s Paid Grok Image Tool as “Insulting” Amid Deepfake Abuse Concerns


The UK government has strongly criticized X (formerly Twitter) for making its Grok AI image generation tool available only to paying subscribers, calling the move "insulting" to victims of sexual violence and misogyny.

The condemnation comes amid growing concern about the misuse of generative AI tools to create sexualized deepfake images, which disproportionately target women and girls. UK officials argue that the paywall does nothing to address the underlying problem; it simply monetizes technology that has already been linked to abuse.

A government spokesperson said, "This change just turns an AI feature that lets people make illegal images into a premium service," reflecting concern that the platform is putting revenue ahead of safety.

The criticism follows reports that the UK was considering banning X outright if it failed to curb the creation and spread of sexualized deepfakes. Lawmakers have warned that platforms enabling such content may be in breach of the country's Online Safety Act, which requires tech companies to proactively prevent illegal and harmful content.

Campaigners and digital safety advocates echoed the government's position, arguing that a subscription model is no substitute for accountability. "Charging users doesn't stop abuse; it could make it more acceptable," said one campaigner, noting that AI-generated sexual content can cause lasting psychological harm and damage to victims' reputations.

Since launching Grok, an AI assistant designed to compete with ChatGPT and similar tools, X has faced mounting international criticism over its approach to content moderation. Critics say the safeguards around image generation have been inconsistent, allowing users to exploit the system before action is taken, often too late to prevent harm.

The UK government has made clear that it wants stronger guardrails, not financial barriers. Officials are now weighing further regulatory action if platforms fail to demonstrate that they are protecting users, particularly those most at risk of digital abuse.

As generative AI becomes more common on social media, the tension between innovation, monetization, and user safety is intensifying. The UK's message is clear: profiting from risky AI features without robust protections is not only irresponsible, it risks making things worse for victims who are already struggling to be heard.
