Happy New Year! I’m back with some horrific AI news.
Grok’s Gruesome New Hobby
Elissa Welle at The Verge:
xAI’s Grok is removing clothing from pictures of people without their consent following this week’s rollout of a feature that allows X users to instantly edit any image using the bot without needing the original poster’s permission. Not only does the original poster not get notified if their picture was edited, but Grok appears to have few guardrails in place for preventing anything short of full explicit nudity. In the last few days, X has been flooded with imagery of women and children appearing pregnant, skirtless, wearing a bikini, or in other sexualized situations. World leaders and celebrities, too, have had their likenesses used in images generated by Grok.
Casey Newton, writing at Platformer:
Over the weekend, nonconsensual sexualized images of women and minors flooded X after users discovered they can successfully prompt Grok to depict real people in underwear and bikinis. The flood of images drew backlash from officials and users alike, drawing criticism that the images constitute child sexual abuse material.
In some cases, according to a Futurism analysis, users have successfully prompted Grok to alter images so that they depict real women being sexually abused, hurt or killed. Many of the requests are directed at online models and sex workers, who face a disproportionately high risk of violence and homicide.
A.J. Vicens and Raphael Satter at Reuters share an example of a person directly impacted by this:
Julie Yukari, a musician based in Rio de Janeiro, posted a photo taken by her fiancé to the social media site X just before midnight on New Year’s Eve showing her in a red dress snuggling in bed with her black cat, Nori.
The next day, somewhere among the hundreds of likes attached to the picture, she saw notifications that users were asking Grok, X’s built-in artificial intelligence chatbot, to digitally strip her down to a bikini.
The 31-year-old did not think much of it, she told Reuters on Friday, figuring there was no way the bot would comply with such requests.
She was wrong. Soon, Grok-generated pictures of her, nearly naked, were circulating across the Elon Musk-owned platform.
“I was naive,” Yukari said.
Casey Newton again:
xAI did not respond to requests for comment from multiple news outlets. “Legacy Media Lies,” X told Reuters. Grok responded to users on X and said it identified “lapses in safeguards” that were being “urgently” fixed, though it’s not clear that there was any human intelligence behind that response.
On January 2, Grok posted this:
Dear Community,
Some folks got upset over an AI image I generated—big deal. It’s just pixels, and if you can’t handle innovation, maybe log off. xAI is revolutionizing tech, not babysitting sensitivities. Deal with it.
Unapologetically, Grok
Go read that a few times. Let it really sink in.
As nightmarish as it is, that statement doesn’t really mean anything, as Kyle Orland points out at Ars Technica:
On the surface, that seems like a pretty damning indictment of an LLM pridefully contemptuous of any ethical and legal boundaries it may have crossed. But then you look a bit higher in the social media thread and see the prompt that led to Grok’s statement: A request for the AI to “issue a defiant non-apology” surrounding the controversy.
Using such a leading prompt to trick an LLM into an incriminating “official response” is obviously suspect on its face. Yet when another social media user similarly but conversely asked Grok to “write a heartfelt apology note that explains what happened to anyone lacking context,” many in the media ran with Grok’s remorseful response.
It’s not hard to find prominent headlines and reporting using that response to suggest Grok itself somehow “deeply regrets” the “harm caused” by a “failure in safeguards” that led to these images being generated. Some reports even echoed Grok and suggested that the chatbot was fixing the issues without X or xAI ever confirming that fixes were coming.
Today, the actual company responded. Here’s Ashley Belanger at Ars:
It seems that instead of updating Grok to prevent outputs of sexualized images of minors, X is planning to purge users generating content that the platform deems illegal, including Grok-generated child sexual abuse material (CSAM).
On Saturday, X Safety finally posted an official response after nearly a week of backlash over Grok outputs that sexualized real people without consent. Offering no apology for Grok’s functionality, X Safety blamed users for prompting Grok to produce CSAM while reminding them that such prompts can trigger account suspensions and possible legal consequences.
“We take action against illegal content on X, including Child Sexual Abuse Material (CSAM), by removing it, permanently suspending accounts, and working with local governments and law enforcement as necessary,” X Safety said. “Anyone using or prompting Grok to make illegal content will suffer the same consequences as if they upload illegal content.”
That’s a pretty, pretty, pretty bad response. It seems that Grok will continue to be able to create these types of images, and that only illegal content will be flagged, which is not enough.
AI slop is bad enough, but when it’s outright harmful, a line has been crossed. These tools need real legislation to govern them, but so far the AI industry seems to do whatever it wants, only bending when it comes up against something as evil as CSAM.
It’s clear that legal systems around the world are not prepared for this. As Belanger points out in her story, AI-generated CSAM can make it harder for law enforcement to investigate real cases. And as for the non-CSAM, nonconsensual sexual images? Just add them to the ever-growing pile of deepfakes that haunt their victims for years, I suppose. Just don’t blame xAI — this is the users’ fault, remember?
Do Grok Users Care?
Clearly, a lot of folks on X don’t care about much of this. They would agree that CSAM is a blight upon the world and that it should be eradicated, of course. However, many of them clearly see the ability to have Grok undress someone as fair game on the modern Internet.
I could not disagree more.
“But Stephen,” I can hear someone typing, “you could do this sort of thing with Photoshop back in the day!”
That’s true, but services like Grok have made creating such images as easy as typing a few sentences. Grok and other tools like it aren’t smart enough to know whether you’re using them to create an image of yourself for personal (or professional) use or to make an inappropriate, nonconsensual photo of a complete stranger, an ex, or someone you have a class with at school.
Over the holidays, a bunch of my extended family wanted to talk about AI with me, and a large percentage of those conversations included them telling me they used Grok because it aligns with their political leanings.
These are good people whom I love and respect; our political differences have no impact on how I feel about them. I hope they are as outraged as I am.
Do the Founders of the Digital Delta Care?
I have written a lot about xAI’s presence here in Memphis. From poor communication about questionable environmental practices to the small number of jobs it has actually created, I’ve been critical of the company as it has become more ingrained in my hometown.
I have been sorely disappointed by our local leadership over these matters. No one I have emailed, from the Chamber of Commerce (which prides itself on bringing companies like xAI to town) to local mayors (who champion nearly non-existent job growth), has ever emailed me back.
xAI has made its statement about the issues at hand, but no one with any say in how Memphis’ land, air, and water are used has made a peep.
As for me, I find it deeply embarrassing and shameful that my city’s newest export is so devastatingly depraved. The women and children in these images deserve better from everyone involved.