On Saturday, tech entrepreneur Siqi Chen released an open source plug-in for Anthropic's Claude Code AI assistant that instructs the AI model to stop writing like an AI model.
Called Humanizer, the simple prompt plug-in feeds Claude a list of 24 language and formatting patterns that Wikipedia editors have identified as chatbot giveaways. Chen published the plug-in on GitHub, where it has picked up more than 1,600 stars as of Monday.
"It's really helpful that Wikipedia went and collated an extensive list of 'signs of AI writing,'" Chen wrote on X, "so that you can just tell your LLM to … not do that."
The source material is a guide from WikiProject AI Cleanup, a group of Wikipedia editors who have been hunting down AI-generated articles since late 2023. French Wikipedia editor Ilyas Lebleu founded the project. The volunteers have tagged over 500 articles for review and, in August 2025, published a formal list of the patterns they kept seeing.
Chen's tool is a "skill file" for Claude Code, Anthropic's terminal-based coding assistant. It is a Markdown-formatted file containing a list of written instructions (you can see them here) that gets appended to the prompt fed into the large language model that powers the assistant. Unlike a standard system prompt, the skill file follows a standardized format that Claude models are fine-tuned to interpret with more precision. (Custom skills require a paid Claude subscription with code execution turned on.)
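For readers who haven't seen one, here is a rough sketch of the general shape of a Claude Code skill file. The frontmatter fields and the handful of rules below are paraphrased from the patterns discussed in this article rather than copied from Chen's actual file, and the exact layout Claude Code expects may differ.

```markdown
---
name: humanizer
description: Rewrite prose to avoid common signs of AI writing
---

# Humanizer (illustrative excerpt)

When writing or revising prose, avoid the patterns below.

- Replace inflated phrasing ("marking a pivotal moment," "stands as a testament to") with plain statements of fact.
- Cut brochure adjectives like "breathtaking" and "nestled within."
- Don't bolt analytical "-ing" clauses onto the ends of sentences.
```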
But as with all AI prompts, language models don't always follow skill files perfectly, so does Humanizer actually work? In our limited testing, Chen's skill file made the AI agent's output sound less precise and more casual, but it may have some drawbacks: It won't improve factuality and might hurt coding ability.
In particular, some of Humanizer's instructions might lead you astray, depending on the task. For example, the Humanizer skill includes this line: "Have opinions. Don't just report facts—react to them. 'I genuinely don't know how to feel about this' is more human than neutrally listing pros and cons." While being imperfect seems human, this kind of advice would probably not do you any favors if you were using Claude to write technical documentation.
Even with its drawbacks, it's ironic that one of the web's most referenced rule sets for detecting AI-assisted writing may help some people subvert it.
Recognizing the Patterns
So what does AI writing look like? The Wikipedia guide is specific, with many examples, but we'll give you just one here for brevity's sake.
Some chatbots like to pump up their subjects with phrases like "marking a pivotal moment" or "stands as a testament to," according to the guide. They write like tourism brochures, calling views "breathtaking" and describing cities as "nestled within" scenic regions. They tack "-ing" phrases onto the ends of sentences to sound analytical: "symbolizing the region's commitment to innovation."
To work around these rules, the Humanizer skill tells Claude to replace inflated language with plain facts and offers this example transformation:
Before: "The Statistical Institute of Catalonia was officially established in 1989, marking a pivotal moment in the evolution of regional statistics in Spain."
After: "The Statistical Institute of Catalonia was established in 1989 to collect and publish regional statistics."
Claude will read that and do its best, as a pattern-matching machine, to produce output that fits the context of the conversation or task at hand.
Why AI Writing Detection Fails
Even with such a confident set of rules crafted by Wikipedia editors, we've previously written about why AI writing detectors don't work reliably: There's nothing inherently distinctive about human writing that reliably differentiates it from LLM writing.
One reason is that although most AI language models tend toward certain kinds of language, they can also be prompted to avoid them, as with the Humanizer skill. (Although sometimes that's very difficult, as OpenAI found in its yearslong fight against the em dash.)

