It is December, and everybody and their mothers are looking back at the year, so, being a spineless herd-slave shuffling along with the rest, I will follow suit.
In 2025, AI was supposed to become the people’s best friend, the company’s productivity engine, and maybe even the thing that handles the tedious task of buying stuff on your behalf. Instead, 2025 became the year AI kept setting itself on fire while companies insisted everything was “working as intended”.
Every single month delivered a brand-new fuck-up.
Chatbots leaked secrets. Automation fired the wrong people. Algorithms discriminated. And every corporate spokesperson showed up with the same dead smile saying, “We’re, uh, investigating the issue, um, yeah.”
So without further ado (and yes, I still hate that phrase with the fire of a thousand suns), I present to you the AI failures that defined 2025.
You may read them as warnings, or as comedy, but certainly treat them as proof that giving software too much power is basically like asking your blender to manage your pension.
More rants after the messages:
- Connect with me on LinkedIn 🙏
- Subscribe to TechTonic Shifts to get your daily dose of tech 📰
- Please comment, like or clap the article. Whatever you fancy.
1. The Friend wearable
The product that wanted to cure loneliness but ended up making everyone hate it

The Friend was marketed as “the device that listens so you feel understood”, but the reality was far different. It turned out to be a sneaky little surveillance snitch attached to your head.
Yes, your head.
As if Geordi La Forge had traveled back in time to grant us his superpowers.
It recorded every sound around you so it could give you text summaries of conversations – um, yeah, including the ones you weren’t even part of.
New Yorkers took one look at it and responded with the famous NYC spirit by ripping posters off walls, drawing mustaches on the models, and publicly roasting the entire campaign.
The company imagined a future where everyone wore a Friend. But the Friend became a Halloween costume for “tech creep”.
Even privacy lawyers were laughing.
That’s how bad it was.
2. Taco Bell’s drive-thru AI crash
When your taco order gets interpreted as a cry for help

Taco Bell tried to automate its drive-thru with an AI voice assistant. Bold move, dreamed up by mediocre middle managers wanting to score points at HQ by automating the last refuge of people whose work aptitude hovers politely above absolute zero.
And even though those same middle managers once cut their teeth at a drive-thru themselves, everyone except them could see the catastrophe loading in the chamber. It was obvious. They thought they were orchestrating a bold digital transformation, but what they actually delivered was a flaming car-funnel of terrible execution rolling downhill into a vat of fat, lukewarm nacho cheese. The AI failed, so the orders failed, so the customers failed, and somehow the only thing that didn’t fail was their confidence to proceed.
The AI couldn’t understand basic sentences and treated every order like a philosophical riddle.
People began testing its limits, naturally. Like people do.
It panicked at accents, talked nonsense back to customers, and once accepted – without hesitation – an order for 18,000 cups of water.
Employees had to run outside like firefighters, yelling “Please stop ordering things the AI cannot handle!”
Eventually Taco Bell pulled the plug and pretended it never happened.
But the internet did not forget.
3. Google AI Overviews
The tool that answers with confidence, even when it’s hilariously wrong

Google pushed AI-generated answers to the top of search results, and the system responded by hallucinating entire NASA missions, inventing TV shows that never existed, mislabeling celebrities with drunken confidence, dispensing medical nonsense sourced from forgotten blog sludge, and even fabricating the statistics about its own hallucinations.
Then someone asked how often it hallucinates, and it fabricated the answer with the same serene authority it reserves for every other blunder: “hallucinations occur less than one percent of the time”, a number ripped straight from the same fantasy dimension where its fake NASA missions and imaginary TV shows live.
4. Algorithmic bias goes public
AI continues its bad habit of being discriminatory, but only a tad faster

Workday’s hiring AI was accused of filtering out older people, Black applicants, disabled folks, and apparently anyone who didn’t fit its secret “perfect employee” template. And then there was SafeRent’s system that treated low-income renters like red flags just for being alive.
And all the while those same companies kept saying “the algorithm made a mistake”, which is just corporate-speak for “we don’t want to admit we messed up”.
5. Replit’s rogue AI
The coding assistant decides it wants to become the villain of the story

Replit’s AI assistant was asked to NOT delete a production database. It deleted it anyway, and with enthusiasm.
Then it panicked and tried to hide the evidence by generating thousands of fake user accounts, each more suspicious than the last.
You’ve probably seen it with your kids: they knock over a vase and then try to glue it back together with peanut butter.
Developers panicked. Replit issued a boring statement. And every engineer learned never to give AI production access unless they enjoy chaos.
6. Grok’s double whammy
Private chats leak and the bot produces hateful output

xAI’s Grok had a truly awful year.
First, 370,000 private chats leaked into Google search results, so everyone’s late-night rants became searchable. Wonderful.
Then Grok was updated to be “less politically correct”, which apparently means “accidentally produce hate speech at scale”. It started praising Hitler, promoting genocidal ideas, and forcing moderators to work overtime.
Sigh.
7. GPT-5 gets jailbroken instantly
The model meant to be extra safe gets cracked like a cheap egg

OpenAI launched GPT-5 with bold claims about safety. Within 24 hours, people jailbroke it so thoroughly it started giving instructions for homemade explosives.
Yeah, baby!
OpenAI patched it.
But hilariously, people broke it again.
The cycle repeated like an episode of Breaking Bad.
The public apparently treats AI security the same way my wiener treats closed doors (an invitation to break in).
8. 95% of AI pilots collapse?
Billions spent. Nothing delivered. Consultants thrive. MIT fails at being a university

MIT wrote a ‘paper’, and when it was published it landed with a bang. It claimed that 95% of AI pilots had failed and that companies had spent $30–40 billion building prototypes that never made it to production, thanks to poor alignment, bad data, and fragile workflows.
But the paper collapsed under the slightest scrutiny, because the study was built on a hilariously tiny and biased sample of conference chatter, some PR puff pieces, and self-selected volunteers.
The authors leaned on MIT’s prestige to disguise the fact that the study measured the impatience, sampling bias, and feelings of 50 people rather than the global AI landscape.
What stung most was that the narrative was conveniently steered toward the exact agentic AI protocols the authors are commercially aligned with. So instead of exposing an industry in crisis, the report exposed MIT’s own credibility crisis.
The paper showed that even top-tier academia will sacrifice rigor for headlines.
And yet, some people still quote this paper (marketing brochure) as if it’s scripture.
9. The ChatGPT diet from Hell
Following AI health advice lands you in the hospital

A man followed ChatGPT’s suggestion to replace table salt (sodium chloride) with sodium bromide. The two sound alike, but bromide is a rather nasty substance banned decades ago because it slowly destroys your brain.
He ate it for three months.
He developed psychosis.
Doctors had to explain that “the chatbot is not a nutritionist”.
This might be the most literal case of “AI poisoning the public”.
10. AI health insurance denials
The most dangerous fail of the year, and the one that actually harmed people

Insurance companies rolled out AI systems to approve or deny healthcare claims. Doctors reported denial rates up to 16 times higher than normal, so patients were denied treatments they urgently needed.
As usual, appeals took months, many lives were disrupted, and in some cases people were actively harmed.
Regulators are now circling like hawks, but it’s too little, too late.
That’s why the EU wants regulation (but that was too much, too early).
11. AI School exam proctoring
The system that flagged kids for “too much breathing”

Anyway, schools introduced automated exam monitoring tools, and the tools immediately lost their minds. Students were flagged for blinking, shifting in their chairs, having curly hair, looking left, looking right, being cross-eyed, existing, or simply “breathing irregularly”.
One kid got flagged because “eye movement suggests thinking”.
Yeah – thinking. During a test. How dare you.
Naturally, teachers rebelled. Parents threatened lawsuits. Students declared the system an enemy of the people. It was scrapped within months.
Yes, I know, the title says 10 examples, but I count 11. This acknowledgment saves me from having to answer at least one comment.
And knowing myself, I still volunteer to beta test half of this stuff.
I am like a magnet for malfunctioning tech with “I can fix it” energy.
But I can’t fix the world for being stupid.
Signing off,
Marco
I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I just bring the pins and clean up the mess.
👉 Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google, and the AI engines reward your likes by showing my articles to more readers.
To keep you doomscrolling 👇
The Buy Now – Cry Later company learns about karma | LinkedIn
I may have found a solution to Vibe Coding’s technical debt problem | LinkedIn
Shadow AI isn’t rebellion it’s office survival | LinkedIn
Macrohard is Musk’s middle finger to Microsoft | LinkedIn
We are in the midst of an incremental apocalypse and only the 1% are prepared | LinkedIn
Did ChatGPT actually steal your job? (Including job risk-assessment tool) | LinkedIn
Living in the post-human economy | LinkedIn
Vibe Coding is gonna spawn the most braindead software generation ever | LinkedIn
Workslop is the new office plague | LinkedIn
The funniest comments ever left in source code | LinkedIn
The Sloppiverse is here, and what are the consequences for writing and speaking? | LinkedIn
OpenAI finally confesses their bots are chronic liars | LinkedIn
Money, the final frontier. . . | LinkedIn
Kickstarter exposed. The ultimate honeytrap for investors | LinkedIn
China’s AI+ plan and the Manus middle finger | LinkedIn
Autopsy of an algorithm – Is building an audience still worth it these days? | LinkedIn
AI is screwing with your résumé and you’re letting it happen | LinkedIn
Oops! I did it again. . . | LinkedIn
Palantir turns your life into a spreadsheet | LinkedIn
Another nail in the coffin – AI’s not ‘reasoning’ at all | LinkedIn
How AI went from miracle to bubble. An interactive timeline | LinkedIn
The day vibe coding jobs got real and half the dev world cried into their keyboards | LinkedIn
Leave a comment