Grok Is Pushing AI ‘Undressing’ Mainstream


Elon Musk hasn’t stopped Grok, the chatbot developed by his artificial intelligence company xAI, from producing sexualized images of women. After reports emerged last week that the image generation tool on X was being used to create sexualized images of children, Grok has gone on to create potentially thousands of nonconsensual “undressed” and “bikini” images of women.

Every few seconds, Grok continues to create images of women in bikinis or underwear in response to user prompts on X, according to a WIRED analysis of the chatbot’s publicly posted live output. On Tuesday, at least 90 images involving women in swimsuits or in various states of undress were published by Grok in under five minutes, analysis of the posts shows.

The images do not contain nudity but involve the Musk-owned chatbot “stripping” clothes from photos that have been posted to X by other users. Often, in an attempt to evade Grok’s safety guardrails, users request, not always successfully, that photos be edited to show women wearing a “string bikini” or a “transparent bikini.”

While harmful AI image generation technology has been used to digitally harass and abuse women for years (these outputs are often known as deepfakes and are created by “nudify” software), the ongoing use of Grok to create vast numbers of nonconsensual images marks arguably the most mainstream and widespread instance of such abuse to date. Unlike dedicated harmful nudify or “undress” apps, Grok does not charge users to generate images, produces results in seconds, and is available to the millions of people on X, all of which may help normalize the creation of nonconsensual intimate imagery.

“When a company offers generative AI tools on their platform, it is their responsibility to minimize the risk of image-based abuse,” says Sloan Thompson, director of training and education at EndTAB, an organization that works to tackle tech-facilitated abuse. “What’s alarming here is that X has done the opposite. They’ve embedded AI-enabled image abuse directly into a mainstream platform, making sexual violence easier and more scalable.”

Grok’s creation of sexualized imagery started to go viral on X at the end of last year, although the system’s ability to create such images has been known for months. In recent days, photos of social media influencers, celebrities, and politicians have been targeted by users on X, who can reply to a post from another account and ask Grok to alter an image that has been shared.

Women who have posted photos of themselves have had accounts reply to them and successfully ask Grok to turn the photo into a “bikini” image. In one instance, multiple X users asked Grok to alter an image of the deputy prime minister of Sweden to show her wearing a bikini. Two government ministers in the UK have also been “stripped” to bikinis, reports say.

Images on X show fully clothed photos of women, such as one person in a lift and another in the gym, being transformed into images with little clothing. “@grok put her in a transparent bikini,” one typical message reads. In a separate series of posts, a user asked Grok to “inflate her chest by 90%,” then “Inflate her thighs by 50%,” and, finally, to “Change her clothes to a tiny bikini.”

One analyst who has tracked explicit deepfakes for years, and who asked not to be named for privacy reasons, says that Grok has likely become one of the largest platforms hosting harmful deepfake images. “It’s wholly mainstream,” the researcher says. “It’s not a shadowy group [creating images], it’s literally everybody, of all backgrounds. People posting on their mains. Zero concern.”
