A Reddit user claiming to be a whistleblower from a food delivery app has been outed as a fake. The user wrote a viral post alleging that the company he worked for was exploiting its drivers and customers.
“You guys always suspect the algorithms are rigged against you, but the reality is actually much more depressing than the conspiracy theories,” the supposed whistleblower wrote.
He claimed to be drunk and at the library to use its public Wi-Fi, where he was typing this long screed about how the company was exploiting legal loopholes to steal drivers’ tips and wages with impunity.
These claims were, unfortunately, plausible; DoorDash really was sued for stealing tips from drivers, resulting in a $16.75 million settlement. But in this case, the poster had made up his story.
People lie on the internet all the time. But it’s not so common for such posts to hit the front page of Reddit, garner over 87,000 upvotes, and get crossposted to other platforms like X, where it received another 208,000 likes and 36.8 million impressions.
Casey Newton, the journalist behind Platformer, wrote that he contacted the Reddit poster, who then reached out to him on Signal. The Redditor shared what looked like a photo of his UberEats employee badge, as well as an 18-page “internal document” outlining the company’s use of AI to determine the “desperation score” of individual drivers. But as Newton tried to verify that the whistleblower’s account was legitimate, he realized he was being baited into an AI hoax.
“For most of my career up until this point, the document shared with me by the whistleblower would have seemed highly credible largely because it would have taken so long to put together,” Newton wrote. “Who would take the time to put together a detailed, 18-page technical document about market dynamics just to troll a reporter? Who would go to the trouble of creating a fake badge?”
There have always been bad actors seeking to deceive reporters, but the prevalence of AI tools means fact-checking now requires even more rigor.
Generative AI models often fail to detect whether an image or video is synthetic, making it challenging to determine if content is real. In this case, Newton was able to use Google’s Gemini to confirm that the image was made with the AI tool, thanks to Google’s SynthID watermark, which can withstand cropping, compression, filtering, and other attempts to alter an image.
Max Spero, the founder of Pangram Labs, which makes a detection tool for AI-generated text, works directly on the problem of distinguishing real and fake content.
“AI slop on the internet has gotten a lot worse, and I think part of that is because of increased use of LLMs, but other factors as well,” Spero told TechCrunch. “There’s companies with millions in revenue that will pay for ‘organic engagement’ on Reddit, which is actually just that they’re going to try to go viral on Reddit with AI-generated posts that mention your brand name.”
Tools like Pangram can help determine whether text is AI-generated, but especially when it comes to multimedia content, these tools aren’t always reliable. And even when a synthetic post is confirmed to be fake, it may have already gone viral before being debunked. So for now, we’re left scrolling social media like detectives, second-guessing whether anything we see is real.
Case in point: When I told an editor that I wanted to write about the “viral AI food delivery hoax that was on Reddit this weekend,” she thought I was talking about something else. Yes, there was more than one “viral AI food delivery hoax on Reddit this weekend.”

