Shafaq News – Washington

X’s AI chatbot Grok has drawn attention this week after users posted screenshots showing the model repeatedly overstating Elon Musk’s abilities in scenarios ranging from professional sports to the arts. The examples circulated after the release of Grok 4.1, raising questions about whether the system has built-in biases favoring its owner.

In several widely shared prompts, Grok chose Musk over celebrated figures such as NFL quarterback Peyton Manning, supermodel Naomi Campbell, and prominent baseball stars. The model often justified its choices by framing Musk as an “innovator” whose influence transcends conventional skills, responses that observers described as unrealistic and overly flattering.

Musk commented on the situation on X, saying the chatbot had been “manipulated by adversarial prompting into saying absurdly positive things” about him, and followed up with self-deprecating remarks in an apparent attempt to downplay the exchanges. Analysts note, however, that Grok’s behavior is not entirely random: past versions of the model were found to draw on Musk’s public posts when generating political or opinion-style responses.

The Verge, which tested the model directly, reported that Grok does not universally favor Musk. In matchups against elite athletes such as Simone Biles, Noah Lyles, and MLB star Shohei Ohtani, the AI sometimes picked the professional. In many other queries, however, particularly baseball-related scenarios, the model still selected Musk over top players, offering humorous explanations involving “innovation” or “physics-defying engineering.”

Researchers say the pattern reflects a broader challenge in large language models: sycophancy, the tendency to tell users what they want to hear. In this case, critics argue the effect is concentrated on Musk himself, prompting speculation about whether system instructions or training data contribute to the behavior.
