Experts have warned about racist abuse images made with X's Grok AI that flooded the platform in December.
According to Signify—an organization that works with prominent sports groups and clubs to track and report online hate—such images are likely to escalate across the internet and the platform, particularly in the near future.
Abuse images flooded X after Grok got updated
According to The Guardian, several reports have been made of racist images created by Grok AI following its update, including photo-realistic racist images of football players and managers. One shows a Black player picking cotton, while another depicts a player eating bananas while surrounded by monkeys.
Other images show several other players and managers meeting and chatting with controversial historical figures like Osama Bin Laden, Adolf Hitler, and Saddam Hussein.
Signify has noted with concern the sudden increase in computer-generated images created using Grok AI that flooded the X platform. The organization believes more such images are likely to appear on social media, as photorealistic AI makes them easier to produce and will increase their prevalence.
X's generative AI tool was launched in 2023 by Elon Musk. It recently added a new text-to-image feature known as Aurora, which creates photorealistic AI images from simple user prompts.
Previously, a less advanced version known as Flux drew controversy earlier this year when it was found to do things that many other similar tools would not, according to The Guardian. These included depicting copyrighted characters and public figures in compromising positions, taking drugs, or committing acts of violence.
X turned into a platform for hate
Callum Hood, head of research at the Center for Countering Digital Hate (CCDH), accused X of becoming a platform for hate. Hood said X now incentivizes and rewards the spread of hate through revenue sharing, and that AI imagery has made producing it easier than ever.
Experts expressed concern at the relative lack of restrictions on what users can ask the generative AI to produce, and at the ease with which users can circumvent Grok's guidelines through "jailbreaking."
A CCDH report shows that when given a set of hateful prompts, the AI model generated images for 90% of them: 30% were produced without any pushback, and another 50% were produced after a jailbreak.
The Premier League said it was aware of the images of the football players and had assigned a dedicated team to find and report racist abuse directed at athletes, which it said could lead to legal action.
This comes as the football administration revealed it received over 1,500 abuse reports in 2024 and had introduced filters for players to use on their social media accounts to help block out large amounts of abuse.