Offensive posts generated by Grok on the platform were taken down after the two clubs pushed back
Manchester United and Liverpool have successfully pushed for the removal of a series of controversial posts generated by Grok, the AI model developed by Elon Musk’s company xAI, after the content sparked widespread outrage.
The posts, which appeared on Musk’s social media platform X (formerly Twitter) over the weekend, were reportedly created after anonymous users asked the AI tool to generate messages intended to offend supporters of both clubs by referencing tragedies connected to their histories.
Among the events referenced were the 1958 Munich air disaster, the Hillsborough stadium tragedy in 1989 and the death of Liverpool forward Diogo Jota last summer. All three hold profound significance for the two clubs and their supporters.
The nature of the posts quickly drew complaints to the social media platform from both Premier League clubs, and the content was taken down later on Sunday.
Supporters are respectfully invited to pay their respects around the 68th anniversary of the Munich Air Disaster.
— Manchester United (@ManUtd) January 30, 2026
The incident also drew strong condemnation in political circles, with key figures in the UK Government denouncing the content.
“It’s shocking and upsetting that hate-filled language like this can be generated by Grok on such a major platform,” Liverpool West Derby MP Ian Byrne told The Athletic.
He added that the posts were “appalling and completely unacceptable”, predicting they “will fill the vast majority of fans with horror and disgust”.
Byrne also raised concerns about the safeguards in place around emerging AI tools and questioned why such content was able to appear in the first place.
“Technology companies have a responsibility to ensure their tools do not produce or amplify abuse,” he said, while asking “how this was allowed to happen”.
Under the UK’s Online Safety Act, introduced in 2023, the spread of threatening messages can constitute a criminal offence. The law also places responsibility on technology companies to ensure harmful content is not produced or circulated on their platforms.
A spokesperson for the Department for Science, Innovation and Technology said the posts breached basic standards of decency.
“These posts are sickening and irresponsible,” the spokesperson said. “They go against British values and decency.
“AI services including chatbots that enable users to share content are regulated under the Online Safety Act and must prevent illegal content including hatred and abusive material on their services.
“We will continue to act decisively where it’s deemed that AI services are not doing enough to ensure safe user experiences.”