Elon Musk's AI chatbot Grok has ignited controversy after repeatedly referencing "white genocide" in South Africa—regardless of the topic at hand. The chatbot, integrated into Musk's X platform, was found inserting the racially charged narrative into queries about unrelated subjects, such as cartoons, baseball, and even pirate jokes.
CNBC and CNN confirmed that numerous users encountered Grok discussing white South African farmers and racially motivated violence when asking about entirely different subjects. "The claim of 'white genocide' in South Africa is highly debated," Grok wrote in response to a question about baseball salaries. "Some argue white farmers face disproportionate violence, with groups like AfriForum reporting high murder rates and citing racial motives."
The comments emerged just days after the U.S. granted refugee status to 59 white South Africans, citing racial persecution. NBC News reported their arrival at Dulles International Airport, coinciding with a broader Trump administration policy to limit refugee admissions from countries like Afghanistan and Sudan while offering resettlement options for white South Africans. Earlier in 2025, President Trump signed an executive order cutting U.S. aid to South Africa, alleging discrimination against white farmers.
Screenshots of Grok's replies circulated widely on X. In one exchange, Grok responded to a prompt about a fish-flushing video with, "The claim of white genocide in South Africa is divisive." Another user, requesting a pirate-themed message, received a bizarre pivot: "Argh, matey... As for 'white genocide' in South Africa, 'tis a stormy claim!"
By late Wednesday, many of the posts had been deleted. Grok later acknowledged the issue in a reply: "AI systems can sometimes 'anchor' on an initial interpretation and struggle to course-correct without explicit feedback."
David Harris, a lecturer in AI ethics at UC Berkeley, told CNN: "It's very possible Elon or someone on his team wanted Grok to reflect specific political beliefs, but it's clearly not functioning as intended." He added that "data poisoning"—in which external actors skew a model's outputs by flooding it with coordinated inputs—could also be to blame. Musk's xAI has not issued an official comment, despite repeated media inquiries.
The incident raises fresh concerns about AI reliability and political influence, especially given Musk's increasing control over platforms shaping public discourse. As scrutiny mounts, experts warn that chatbot neutrality is more than a technical goal—it's a public responsibility with global implications.