Over the past few days, an anonymous post on Reddit (Archive.org link since the original has been deleted) alleged significant fraud at an unnamed food delivery app. The post made some serious allegations, and it exploded everywhere, with a lot of discussion about how this kind of behavior really happens. The reason everyone believed it was that gig-based companies have been caught doing similar things in the past.
Now here’s the twist that no one expected: apparently the whole thing was a hoax. Yes, you read that correctly. Casey Newton at Platformer has posted an entire writeup on this (Platformer.news: Debunking the AI food delivery hoax that fooled Reddit) that is a fascinating read. You should check out the whole writeup for the details on how Casey figured out it was a hoax. The part that is really scary is towards the end of the article, where he talks about how AI/LLMs are making fact-checking harder:
“On the other hand, LLMs are weapons of mass fabrication,” said Alexios Mantzarlis, co-author of the Indicator, a newsletter about digital deception. “Fabulists can now bog down reporters with evidence credible enough that it warrants review at a scale not possible before. The time you spent engaging with this made up story is time you did not spend on real leads. I have no idea of the motive of the poster — my assumption is it was just a prank — but distracting and bogging down media with bogus leads is also a tactic of Russian influence operations (see Operation Overload).”
For most of my career up until this point, the document shared with me by the whistleblower would have seemed highly credible in large part because it would have taken so long to put together. Who would take the time to put together a detailed, 18-page technical document about market dynamics just to troll a reporter? Who would go to the trouble of creating a fake badge?
Today, though, the report can be generated within minutes, and the badge within seconds. And while no good reporter would ever have published a story based on a single document and an unknown source, plenty would take the time to investigate the document’s contents and see whether human sources would back it up.
I’d love to tell you that, having had this experience, I’ll be less likely to fall for a similar ruse in the future. The truth is that, given how quickly AI systems are improving, I’m becoming more worried. The “infocalypse” that scholars like Aviv Ovadya were warning about in 2017 looks increasingly more plausible. That future was worrisome enough when it was a looming cloud on the horizon. It feels differently now that real people are messaging it to me over Signal.
We are going to see more and more of this going forward. The only way to counter it is to double- or triple-check everything you read online, especially if it is baiting you into outrage. I try to do the same when I write about stuff, but there have been times when I was fooled as well, and in those cases I have usually posted a comment on the post (or a correction in it) explaining it. Basically, if it seems too good to be true, it probably is.
Source: @inthehands@hachyderm.io
– Suramya