Researchers Are Reporting That ChatGPT Is Using Grokipedia Answers

Last December, The Guardian began testing ChatGPT’s latest model, GPT-5.2, after OpenAI announced it as its most advanced yet. During that testing, ChatGPT produced answers that came from Elon Musk’s Grokipedia.

In case you missed it, Grokipedia is like Wikipedia, except its entries are AI-generated and maintained by Musk’s xAI models. Another difference is that, unlike Wikipedia, users cannot edit or add information; the AI does all of this instead. This, of course, raised some concerns.

When Does ChatGPT Use Grokipedia?

The Guardian’s testing found that the GPT-5.2 model cited Grokipedia about nine times across a dozen prompts. Grokipedia was used when answering questions on controversial topics that aren’t widely discussed, or that people often shy away from, including politics and history. ChatGPT did not, however, cite Grokipedia when prompted to repeat claims that have been widely discredited.

The Guardian reports that “ChatGPT did not cite Grokipedia when prompted directly to repeat misinformation” in “areas where Grokipedia has been widely reported to promote falsehoods. Instead, Grokipedia’s information filtered into the model’s responses when it was prompted about more obscure topics.”

Why Is This Dangerous As A Source Of Information?

Nina Jankowicz, a disinformation researcher, raised a valid point about why this is dangerous: users might assume that because ChatGPT is citing such information, it must be a trustworthy source. In Jankowicz’s words, “They might say, ‘oh, ChatGPT is citing it, these models are citing it, it must be a decent source, surely they’ve vetted it…’”

One of the main reasons professionals conduct and share research is to confirm or discredit claims so that we can all be better informed. A source such as Grokipedia that still includes statements which have been debunked can therefore be very harmful.

How Is OpenAI Dealing With These Concerns?

OpenAI responded to The Guardian’s report by saying that its tool generally “aims to draw from a broad range of publicly available sources and viewpoints.”

They added, “We apply safety filters to reduce the risk of surfacing links associated with high-severity harms, and ChatGPT clearly shows which sources informed a response through citations.”

It isn’t only OpenAI, either. The Guardian found that Anthropic’s Claude does the same thing, though on different topics than ChatGPT.

Disinformation researchers cited by the paper said that once inaccurate material enters AI systems, it can persist even after corrections are made at the original source, making removal difficult and time-consuming.

Users need to fact-check when using these tools. Jankowicz mentioned having to contact a news outlet that had published a quote she never gave, writing that the article contained a “completely fabricated quote from me” and that she had never given the outlet this information or view.

She told the publication that the made-up quotation was “incorrect and irrelevant” and said it “negatively impacts my professional reputation”, asking the outlet to remove it and add a correction noting that a prior version “included a quote falsely attributed to Nina Jankowicz”.