Want to disguise your AI writing? Start with Wikipedia’s new list

Have you ever read an article or social post and thought, “This is terrible! I bet it was written by AI!”?
Most people know bad AI writing when they see it. But unless you’re a closet copyeditor, it’s surprisingly hard to put your finger on exactly why AI writing sucks.
Now, Wikipedia’s editor team has just released what amounts to a master class in the clichés, strange tropes, obsequious tones of voice, and other assorted oddities of AI-generated prose.
It’s a list called Signs of AI Writing, and it’s a fantastic resource for people who want to get better at spotting AI writing–or who want to disguise their own.
Add your own slop
As one of the internet’s most trusted sources of information, Wikipedia is uniquely exposed to the risks of LLM-generated content.
Large language models love to pontificate on random topics, even when they have very little actual knowledge. Wikipedia covers many of these random topics, from the ash content of Morbier cheese to the gory details of Justin Bieber’s love life.
Wikipedia famously crowdsources its information through a network of volunteer contributors and editors. This combination of crowdsourced data and highly specific, niche topics is a recipe for the misuse of AI.
There’s also an increasingly potent financial incentive for people to pollute Wikipedia with AI slop. As search engines like Google zero in on EEAT–a tortured acronym for Experience, Expertise, Authoritativeness, and Trustworthiness–having a Wikipedia page is becoming more valuable to brands as a signal of their legitimacy.
You’re not supposed to create or edit your own Wikipedia page, but many brands do. And one of the easiest ways to hide this off-label tinkering is to drown one’s nefarious edits in a sea of seemingly unrelated updates and contributions to esoteric Wikipedia pages. AI can spin these up at scale.
Everything is fascinating
Because of the risks that AI-generated content poses to the site, Wikipedia’s editors have gotten incredibly good at recognizing AI writing. Their Signs of AI Writing document distills this knowledge into an easy-to-follow guide.
Wikipedia’s list is useful and unique largely because it’s so specific. Many other rubrics for recognizing AI writing offer broad, generic advice or focus on detection “hacks” that are easy to bypass.
Researchers recently realized, for example, that LLMs tend to overuse the em dash—a wonderful and remarkably versatile punctuation mark that I happen to absolutely love.
As I recently discussed with Slate, for a brief moment, the presence of an em dash in an article was a good way to detect AI writing. Quickly, though, AI content generators caught on and started to avoid the punctuation mark.
Simple hacks for detecting AI writing have a limited shelf life. The arms race between AI content creation and AI content detection means these methods are rendered useless almost as soon as they’re made public.
Wikipedia’s guidelines go much deeper. Rather than focusing on quick detection hacks, they dig into the more fundamental patterns present in bad AI content–the writing conventions and literary tropes that LLMs consistently overuse.
Wikipedia’s editors point out, for example, that LLMs place “undue emphasis on symbolism and importance.”
Everything LLMs write “stands as a symbol” of something, or carries “enhanced significance.” Natural locations are always “captivating,” all animals are “majestic” and everything is “diverse” and “fascinating.”
Wikipedia’s editors also note that LLMs tend to overuse transition words and phrases like “in summary” or “overall.” Often these show up as “negative parallelisms.” For example, LLMs love to summarize things they’ve already written with tropes like: “It’s not only… but also…”
A restaurant might be described as “not only a great place for Italian food, but also a shining example of local entrepreneurship.” Every concluding paragraph starts with “In conclusion” or “In summary.”
The editors also point out that AI writing often overuses the “Rule of Three”–a handy literary trick that capitalizes on the fact that the human brain loves groups of three. A person might be “creative, smart and funny” according to ChatGPT, or a company could be “innovative, rule-breaking and impactful.”
Good writing gone bad
Interestingly, Wikipedia’s editors acknowledge that many of these conventions would be considered good writing if they came from a human. It’s not that LLMs are inherently bad at writing—it’s just that they write in predictable ways that make their output feel formulaic and robotic.
The editors also note that LLMs’ polished writing style and tendency to follow conventions often serve to obscure their lack of actual knowledge about a topic.
By following conventions like the Rule of Three, LLMs make their “superficial explanations appear more comprehensive.” As readers, we often mistake good form for good content—if an LLM writes with perfect grammar and its content flows beautifully, we might not realize that it’s not actually saying anything useful or substantive.
Beyond these stylistic issues, Wikipedia’s list goes into extreme detail about technical specifics of AI writing–the ways LLMs consistently format text, use headings, handle punctuation (like curly quotation marks), and sprinkle their content with bolded words and emojis.
Spot it (or make it)
The guidelines are useful for anyone who edits Wikipedia. But they’re also relevant for anyone who wants to get better at recognizing AI writing—or who wants to create their own AI content that doesn’t sound machine-generated.
If you’re reading an article or social media post that feels a bit off and you’re curious whether it might be AI-written, Wikipedia’s guidelines provide a fantastic checklist for validating your suspicions.
Compare the suspect writing with Wikipedia’s list. Do you see the Rule of Three appear a bit too consistently? Are there too many transition words? Does it sound too effusive?
Although the editors stress that humans are perfectly capable of generating bland and formulaic writing without an AI’s help, spotting these patterns in a piece of writing can lend credence to the idea that it was written by a machine.
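The checklist above is concrete enough to grep for. Here’s a toy Python sketch of that idea–counting transition phrases, effusive adjectives, and Rule-of-Three constructions. The phrase lists are my own illustrative picks, not Wikipedia’s exact wording, and a few raw counts are obviously no substitute for an editor’s judgment:

```python
import re

# Illustrative word lists -- my own picks, not Wikipedia's exact wording.
TRANSITIONS = ["in conclusion", "in summary", "overall", "moreover"]
EFFUSIVE = ["captivating", "majestic", "fascinating", "diverse"]

# "Rule of Three": three listed items, e.g. "creative, smart and funny".
RULE_OF_THREE = re.compile(r"\b\w+, \w+,? and \w+\b")

def ai_signs(text: str) -> dict:
    """Count rough 'signs of AI writing' in a piece of text."""
    low = text.lower()
    return {
        "transitions": sum(low.count(p) for p in TRANSITIONS),
        "effusive": sum(low.count(w) for w in EFFUSIVE),
        "rule_of_three": len(RULE_OF_THREE.findall(low)),
    }
```

High counts don’t prove anything–humans write “in conclusion” too–but they flag passages worth a closer, human read.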
And if you use LLMs to create content for your business—or even for personal emails or social posts—Wikipedia’s list can help you tweak it so it’s genuinely readable and doesn’t sound quite so robotic.
As a human editor, you can manually scan the output of ChatGPT, Claude or Gemini for the patterns Wikipedia identifies, and inject your own human touch when the chatbots start sounding a bit too AI.
There’s an easier approach, too. I’ve found that pasting Wikipedia’s entire Signs of AI Writing list into a chatbot as part of your prompt yields noticeably better writing than LLMs produce alone.
Spinning up a social post for your band’s first mall gig, or generating the landing page copy for your crochet business’ Square page?
Prompt ChatGPT or Claude as you normally would, but tell the chatbot to “avoid the items on this list.” Then, paste in the full contents of Wikipedia’s Signs page. Your LLM-generated writing will feel markedly better, with very little effort. Make sure to use your powers for good!
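If you script this workflow rather than pasting by hand, the prompt assembly is just string concatenation. A minimal sketch–the function name and wording are my own, and you’d feed the result to whatever chatbot or LLM API you already use:

```python
def build_prompt(task: str, signs_list: str) -> str:
    """Wrap a normal writing prompt with an instruction to avoid AI tells.

    `signs_list` should hold the full text of Wikipedia's
    Signs of AI Writing page, pasted in verbatim.
    """
    return (
        f"{task}\n\n"
        "Avoid the items on this list:\n\n"
        f"{signs_list}"
    )
```

The task stays exactly what you’d normally ask for; only the appended list changes how the model writes.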
With its specificity, focus on stylistic rather than technical patterns, and attention to the subtle details of AI writing (see: Rule of Three!), Wikipedia’s list is a fantastic tool for anyone who wants to spot lazy AI writing–or make their own AI content feel a bit less lazy and generic.