
X is starting to test AI-generated Community Notes to "improve" its fact-checking system
X has launched a pilot program allowing AI chatbots to generate Community Notes starting July 1st, shifting from human-only contributions. The goal is to increase coverage on low-visibility posts that often go unaddressed by human contributors. Initially, AI notes will only appear on posts that explicitly request them.
All AI-written notes will be labeled as such. They are generated using tools like X’s Grok or OpenAI’s ChatGPT via API. To maintain content standards, every AI note passes through an open-source automated evaluator trained on input from previous human contributors.
While X claims humans remain in control, their role is largely limited to rating notes and vetting them before wider release. The platform also relies on its scoring system to curb misinformation and minimize AI hallucinations. Though the company presents this as a collaborative step in moderation and fact-checking, concerns about AI reliability persist, particularly given Grok’s past erratic behavior on sensitive topics.

It's Grok's turn to get on Trump's good side now.
Musk and possibly billions of people keep forgetting that AI can make mistakes; it can hallucinate. What benefit can AI-generated Community Notes possibly offer? It's not really censorship we need to be worrying about at this point, but the fact that it's gonna end up as a meme, because AI explanations can just as easily go hilariously wrong!
(As a bonus: ironically, at first I thought the Community Notes were already AI-generated.)