The Best Guide to Spotting AI Writing Comes From Wikipedia
Wikipedia editors have created one of the strongest guides for spotting AI-generated writing, outlining clear patterns and linguistic habits common in LLM-created text.
Most of us have experienced that nagging feeling when reading something online — the suspicion that a person didn't write it, but a large language model did. Yet identifying AI-generated writing remains surprisingly difficult. For a brief period last year, people were convinced that certain words like "delve" or "underscore" were giveaways, but the evidence was shaky. And as AI systems have rapidly improved, those linguistic tells have become even harder to rely on.
But it turns out there's an excellent roadmap for spotting AI-generated text — and it comes from Wikipedia. The platform's community-created resource, Signs of AI writing, is the most helpful guide so far for figuring out whether your suspicions hold up. Credit goes to poet Jameson Fitzpatrick, who highlighted the document on X.
Since 2023, Wikipedia editors have been working to identify and clean up AI-generated contributions through the WikiProject AI Cleanup initiative. With millions of edits arriving daily, they have a vast sample size to analyse, and in true Wikipedia fashion, they've produced a detailed, evidence-heavy field guide.
Right away, the editors confirm what many already suspect: automated detection tools aren't reliable. Instead of focusing on software, the guide highlights distinct writing patterns that rarely appear on Wikipedia but are extremely common across the broader internet — and therefore common in LLM training data.
One major clue: AI-generated submissions often spend a lot of time insisting on the importance of the topic in generic, inflated terms such as "a pivotal moment" or "a broader movement." Models also love listing minor media mentions or one-off features to artificially boost a subject's notability — the kind of detail you might see in a promotional biography but not in a neutral encyclopedia entry.
The guide also highlights a specific habit involving participial clauses — phrases like "emphasizing the significance" or "reflecting the continued relevance." These usually come after a sentence describing some event or fact, and they introduce vague claims of importance that feel strangely hollow. It can be subtle, but once you learn to spot it, it becomes unmistakable across AI-written prose.
Another major giveaway is generic marketing-style language. Landscapes are always "scenic," views are always "breathtaking," and nearly everything is described as "clean," "modern," or "innovative." As Wikipedia editors put it bluntly: "It sounds more like the transcript of a TV commercial."
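As a toy illustration of what hunting for these phrase families might look like in code — emphatically not the editors' method, and not a reliable detector (the guide itself warns that automated tools aren't trustworthy) — here is a minimal Python sketch. The word lists are illustrative assumptions drawn from the examples above, not Wikipedia's actual criteria:

```python
import re

# Illustrative phrase families based on the patterns described above.
# These lists are assumptions for demonstration, not Wikipedia's criteria.
BUZZWORDS = r"\b(scenic|breathtaking|clean|modern|innovative|pivotal)\b"
# Trailing participial clauses of vague importance, e.g.
# ", emphasizing the significance" / ", reflecting the continued relevance"
PARTICIPIAL = r",\s+(emphasizing|reflecting|highlighting|underscoring)\s+(the|its)\b"

def flag_tells(text: str) -> dict:
    """Count occurrences of each pattern family in the given text."""
    return {
        "buzzwords": len(re.findall(BUZZWORDS, text, re.IGNORECASE)),
        "participial_clauses": len(re.findall(PARTICIPIAL, text, re.IGNORECASE)),
    }

sample = ("The festival was held in a scenic valley, "
          "reflecting the continued relevance of the tradition.")
print(flag_tells(sample))  # {'buzzwords': 1, 'participial_clauses': 1}
```

A nonzero count is at best a prompt for closer human reading, which is exactly the guide's point: the patterns are cues for editors, not thresholds for software.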
The entire guide is worth reading — and surprisingly compelling. Before this, many assumed that AI-generated writing was evolving too quickly for consistent detection. But the patterns highlighted by Wikipedia are directly tied to how AI models ingest massive amounts of online content. They can be masked with good prompting, but the underlying habits are hard to eliminate.
If more people learn to recognise these subtle tells, it could have meaningful implications for how we read, evaluate, and trust online information in the years ahead.