This is a practical guide to surviving the post-hype wasteland of failed AI projects. This time I have dissected 50 organizations that successfully implemented AI across the business, and distilled what they did into ‘Success Patterns’.
Good luck.
You’re gonna need it.
P.s. If you want to catch up on earlier volumes (yes, they’re big – sue me), go read:
- Your AI field manual 1 – Compliance and other pains | LinkedIn
- Your AI field manual 2 – Project management for the damned | LinkedIn
P.P.s. I won’t cover all patterns, because my fingers are already worn from typing, so I have attached the original research piece for you to download, including an appendix full of graphs and shit, so you can pretend to be the geek you always wanted to be, and of course here’s all the data I collected.

More rants after the messages:
- Connect with me on Linkedin 🙏
- Subscribe to TechTonic Shifts to get your daily dose of tech 📰
- Please comment, like or clap the article. Whatever you fancy.
PREFACE – READ THIS BEFORE YOU LIGHT ANOTHER BILLION ON FIRE
This ain’t a vision deck, my dear, above-averagely intelligent friend.
You’ve probably seen enough of them, and you got here because you wanted to read something tangible for a change. So I set out to create a field manual for you: the project manager, the executive, the business developer, the start-up wannabe, or simply the person with too much time on their hands. You were once an innovator, an early adopter. You saw the demo, you believed the promise, and now you wonder why you wake up each morning regretting you said yes when the executive board appointed you as their scapegoat.
But maybe you still have time to turn it around – to bleed forward so to say.
And if so, the third edition of the AI field manual might provide you with new insights to make failure earn its keep. Because for you, I spent a considerable amount of time and tokens on my dear Oompa Loompas, stalking fifty companies that bragged about “AI transformation”.
Rest assured, you’re not the only one. Half of them turned a profit.
The rest turned into LinkedIn case studies titled “What We Learned”.
Twelve repeatable patterns separated the conquerors from the casualties.
They’re not elegant. They’re not even pretty.
They work.
Everything else belongs in the archive of “synergistic failures”.
Not you. You’re wise. You read my shit. Hubris, yes.
I know.
Pattern 1 – Command & Control
Leadership is a weapon. It is not a metaphor, but an actual weapon. Remember the first time you watched some CEO try to explain AI on camera and felt your will to live ebb away?
That’s the point I’m trying to make here.
The moment the man in the corner office decides that AI is a PR exercise, the project is dead – slowly, ignominiously (had to look that one up), and with good press. Real AI change, on the contrary, is born when someone at the top bleeds for it.
Not just pays lip service.
Take this guy, for instance – Jamie Dimon. He’s in a boardroom where other executives are doing their best to keep the company looking like a tasteful yacht. Dimon sits in the room and talks about risk in terms of models and contingency, and he signs checks that make business controllers cry. That kind of appetite for commitment is why JPMorgan can afford a $2 billion annual war chest for AI.
Yup. 2 Billion. A year.
It’s not about swagger. His shtick is endurance. It’s also about a CEO who will fight for the budget during a downturn because she (ok, I’ll drop in a ‘she’ for a change) knows the tooling is as essential as compliance, not optional.
Years before the world started tattooing “gen-AI” on conference flyers, them peeps at Visa ripped out legacy pipelines and rebuilt their data foundation from scratch.
That is the opposite of a pilot.
Nobody takes that route unless the leadership has the patience to stay the course for a decade. The same goes for Mercedes-Benz, which was training neural nets in the era of dial-up. Nobody thought that was fashionable then – I know, because we were ‘the guys doing something with data’ at the time. But they thought it was necessary. That kind of long view is leadership as a maintenance plan, not the press release you see popping up nowadays.
So, if your C-suite treats AI as a weekend hackathon and weaponizes it for a quarterly release, you’ll, of course, get weekend results. You’ll get experiments. Pilots. And in the end you’ll get failure stories that are swept under the carpet. But if leadership treats AI as plumbing, as a structural change you fix once and maintain forever, you get systems that survive recessions, regulatory probes, and the occasional embarrassment of a model that learned a bad habit.
Finance is artillery.
When someone says to you “we can do it on a shoestring budget”, ask them to leave the room, lock the door, and then bang your head against the wall.
The actual numbers the successful spend on AI are equivalent to some countries’ national budgets: JPMorgan’s $2 billion a year, Visa’s $3.5 billion over a decade, Amazon’s $151 billion in technology spend.
Ok, Amazon doesn’t really count here, but the successful ones do commit. People, and cash.
Money buys more than GPUs. It buys patience. It buys the ability to run experiments that fail, learn from them, and fund the next round. It buys the luxury of being wrong in private, rather than wrong in a press conference.
Your new doctrine is → Budget ten times what your optimistic spreadsheet forecasts. If Finance laughs, double it again.

Pattern 2 – Supply lines
If leadership is the gun and finance is the ammo, then data is both the map and the fuel that keeps the engines running. Visa possesses a river of 300 billion transactions a year, and treats every record like a rose petal. Netflix reads micro-signals from 301 million subscribers and predicts boredom before the user even says the word, and Starbucks sells decisions informed by 90 million weekly transactions.
Data is the gut of the company.
A lot of companies I’ve come across have seventeen data stores administered by seventeen gatekeepers who consider change a personal attack. That’s a recipe for trouble and regulatory fines.
Fix the data first if you want to start with machine-learning models. Not gen-AI – that is a toy. I mean real, basic models like logistic regression or K-means clustering; those need data lineage and prefer clean, useful, longitudinal data.
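What “fix the data first” looks like in practice is unglamorous: a gate that refuses nulls, duplicates, and garbage values before anything reaches a model. Here’s a minimal sketch in plain Python – the field names and rules are invented for illustration, not taken from any of the fifty companies.

```python
# A minimal data-quality gate, the boring kind that logistic regression
# and K-means actually need. Field names and rules are made up.
from datetime import date

REQUIRED = ("customer_id", "txn_date", "amount")

def clean(rows):
    """Yield only rows with all required fields, no duplicates, sane amounts."""
    seen = set()
    for row in rows:
        if any(row.get(f) is None for f in REQUIRED):
            continue                      # rule 1: no silent nulls
        key = (row["customer_id"], row["txn_date"])
        if key in seen:
            continue                      # rule 2: no duplicate records
        if row["amount"] <= 0:
            continue                      # rule 3: no garbage values
        seen.add(key)
        yield row

raw = [
    {"customer_id": 1, "txn_date": date(2024, 1, 2), "amount": 19.95},
    {"customer_id": 1, "txn_date": date(2024, 1, 2), "amount": 19.95},  # dup
    {"customer_id": 2, "txn_date": None, "amount": 5.00},               # null
]
print(len(list(clean(raw))))  # the model sees one clean row, not three
```

Three rules, ten lines, and most of your future model-drift arguments die right here.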
Generative-AI pilots of course feel good, because they’re contained narratives. They fit in slide decks and they don’t require training thousands of employees. But pilots are islands, and islands drown when the tide turns. JPMorgan put tools in 150,000 hands and made usage a practice. At Visa, access to AI tools became the norm, not the perk. And Walmart put AI in the hands of 2.1 million associates – yes, all-of-em! At their shop, AI has turned into a workplace ritual.
When AI lives only in an R&D lab, three things will happen. It becomes a pet project, it develops myths that don’t scale, and when the inevitable question comes, like “Where’s the ROI?”, your answer is a romantic anecdote about a use case that only works if the user is a perfect unicorn.
You have to spend a lot of time on scaling adoption. Train people. Put models in the tools they already use. Make AI a part of the rhythm of work, not a novelty.
Your new doctrine is → Garbage data means garbage war plans. Build your pipelines, implement that tedious, energy sucking governance, and strive for data-quality like your survival depends on it. And above all, issue every soldier a model.
Pilots are for aircraft, not strategy.

Pattern 3 – Operations
Metrics are ammunition too. Yes, it sounds cold, but AI projects are a matter of life and death for budgets, since only numbers survive the scrutiny of the board. The winning companies among the 50 I analyzed calculate their return on investment.
JPMorgan shows $2 billion invested and $2 billion realized in savings.
Starbucks reported a 30 percent increase in ROI and inventory counting that’s eight times faster. Those figures are proof your strategy works.
Measuring matters because it forces clarity. If your KPI is “AI adoption”, then you have invented a vanity metric because adoption is not business impact. Tell Finance what changed, and for how much, and show the before and after. A/B tests, control groups, and ruthless attribution are not optional.
Another one I came across → hire professionals. Do not appoint a manager who knows a thing or two about ChatGPT. The difference between a working model and an embarrassing demo is expertise. JPMorgan employs thousands of AI specialists, and Visa fields an army of engineers tuned to production realities. Mercedes-Benz trained 600 employees as data and AI specialists in one year. Yup – and they didn’t do it because it’s trendy these days, but because a car with bad AI is a liability.
But the other side of the market – the ones still lingering in ‘AI success or failure limbo’ – still hires internal “enthusiasts”. Dave from accounting, who completed a weekend bootcamp and now manages your “AI initiatives”, is not a cost-effective strategy. You don’t hire a pianist because they watched a few YouTube tutorials. You gotta build a proper team, people: data engineers, ML engineers, product managers who understand the tradeoffs, ethicists who can explain risk (yes, that’s a class of people), and seasoned operations people who can finish the job. And yes, you gotta compensate them. Keep them. Let them fail and learn without a press release about the failure.
Your new doctrine is → If you can’t measure it, Finance will cancel it. Skill beats enthusiasm. Recruit nerds, pay them obscene salaries (thank you!), and protect them from PowerPoint.

Pattern 4 – Risk & ethics
Govern or perish. Monday I wrote extensively about the most boring subject that comes with an AI project these days. The companies that win treat governance like life insurance. Visa’s AI Observatory is a real thing: a control room that watches models in real time for ethical drift, compliance sins, and performance deterioration. Mercedes treats privacy audits the same way it treats brake checks – no one drives off the lot without passing them. Banks established years ago that regulators do not suffer optimism. Regulators punish mistakes in public and in private, and the penalties are structural.
The rest – the unsuccessful ones – just hope that ‘compliance’ will look the other way. They form committees full of well-meaning people who like the sound of “ethics” and schedule quarterly meetings full of yada yada, where nothing gets monitored in real time. Then they confront reality: an automated underwriting model that discriminates against a community, a customer dataset leaked because a contractor uploaded logs to a public bucket, or an LLM that confidently invents legalese. And then ethics becomes depositions and fines.
If you want to read more, go read Monday.
Your new doctrine is → Build oversight before scandal. Auditors always arrive faster than ROI.

Pattern 5 – The front line
Fix real problems or shut it down. Yes, it is hard, but the successful ones do it. Real AI lives in the moments customers either love you or hate you. Netflix’s recommendation engine isn’t a pet project; they consider it the product. THE product. Eighty percent of streamed hours are routed by algorithms. Starbucks uses DeepBrew (don’t you just love data scientists’ humor) not because it has a fashionable ring to it, but because hyper-personalized offers and inventory optimization are money. Bank of America’s Erica has handled billions of interactions by automating the mundane and surfacing the important.
Take that, my dear call-center manager who still wrestles with connecting one back-office function to their bot.
Contrast that with chatbots that were launched to “improve engagement” but only succeed in telling customers to “please hold”. Yup, that’s most of them, and those investments get glossy case studies with miserable NPS scores. Successful AI reduces friction, predicts needs, and solves problems; if you still treat your bot as if it’s an FAQ – well, good luck writing off your investment.
I’ve also found that Operations win wars.
This is the war the boardrooms refuse to Instagram. JPMorgan won by eliminating 90 percent of manual cash-flow reconciliation work. UPS’ ORION shaves a hundred million miles a year – fuel, time, emissions, and money that compounds – and Starbucks’ inventory accuracy doesn’t make press headlines, but it stops stores from running out of cold foam on a Tuesday morning.
If you want headlines, build a flashy demo.
If you want the company to survive, automate the back office. Those automations are invisible, but they’re where the economic gravity pulls.
And if you say AI ain’t up to the task, just read this piece: The post-human back office | LinkedIn
Your new doctrine is → If it doesn’t solve pain, it is the pain. The glamour is always customer-facing, but the money hides in spreadsheets. Now print it and hang it on your toilet door.

Pattern 6 – Technology & tactics
The successful mix their arsenal. The era of a single silver-bullet vendor is over. There’s no platform game if you’re an enterprise. Generative models write copy, but they don’t predict supply chain disruption. Supervised ML forecasts demand, but it can’t reason about new regulatory regimes. Computer vision spots defects but can’t translate contracts. Agentic AI promises to act across systems, but it’s early and fragile.
The winners do not worship a single class of model, but they orchestrate many instead.
Look at the landscape:
- 42 of the 50 companies in the study use generative AI in some form
- 38 rely on classical machine learning
- 28 on predictive analytics
- 22 on computer vision
- 26 on NLP
The real advantage accrues to those who combine these tools into workflows: an LLM that summarizes documents, an ML model that predicts impact, a computer vision system that validates quality, and an agentic pipeline that executes the steps. That layered approach creates resilience and capability beyond any single vendor’s promises.
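The layered workflow fits in a few lines once you stop worshipping any single model. Here’s a toy sketch – every function is a stub standing in for a real model, and the names and routing thresholds are invented, not any vendor’s API:

```python
# Layered workflow sketch: summarize -> predict -> act.
# Each stage is a placeholder for a real model; the logic is the point.

def summarize(doc: str) -> str:
    """Stand-in for an LLM summarization call."""
    return doc[:40]

def predict_impact(summary: str) -> float:
    """Stand-in for a classical ML model scoring business impact."""
    return 0.8 if "outage" in summary else 0.1

def pipeline(doc: str) -> str:
    """The 'agentic' layer: chain the models, then decide what to execute."""
    summary = summarize(doc)
    impact = predict_impact(summary)
    return "escalate" if impact > 0.5 else "archive"

print(pipeline("outage report: payment gateway down since 06:00"))
```

The resilience comes from the seams: swap any one stage for a better model and the rest of the workflow doesn’t care.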
And if you think this is just a load of horseshit, have a look at this end-to-end architecture I made for a client.
Experiment at speed. Time is a currency you can’t inflate. Take JPMorgan: they run hundreds of proofs of concept at the same time, because in an environment that moves this fast, you must learn at scale.
Netflix runs pervasive A/B tests so frequently that it might as well be experimental philosophy, and Starbucks encourages store managers to propose and run experiments because the frontline sees reality faster than their strategy team.
Training the troops changes everything.
Hundreds of thousands of employees trained on AI tools start treating them like instruments. JPMorgan’s 150,000-strong learn-by-doing AI-literacy program creates muscle memory, and Visa’s democratization makes “AI access” an expected workplace right. Walmart and Starbucks run certification programs because a literate workforce scales capability without hiring a consulting firm every quarter.
Your new doctrine is → Stack your tech like pancakes, syrup is later.

Pattern 7 – The strategic map
Every campaign needs a map, and every map is drawn in hindsight. Just ask Columbus. The AI wars follow three phases that every commander eventually learns to recognize.
I call ‘em the “Landing”. The “Expansion”. The “Occupation”.
Landing is chaos hidden under all the excitement. The CEO declares a transformation, vendors smell blood, and every department claims ownership. Budgets appear, pilots multiply, and the company is littered with “proofs of concept”. The thing is, the clever ones use the noise to rebuild infrastructure under the cover of hype. They build the data lakes and governance frameworks, and they get the pipelines done while everyone else is playing buzzword bingo.
Expansion follows when the pilots finally produce something measurable. Pilots become programs, programs become dependencies. Costs rise, of course, data friction reveals itself, and middle management and their employees panic. Now leadership must choose between reinforcing or retreating. The winners reinforce. They push through the chaos and treat the mess as evidence of life.
Occupation is what happens in the long run. This is the phase where the hype dies out and only discipline remains. This is where you move from experimentation to refinement. You tune models, audit outcomes, automate the audits, and iterate over and over.
Your new doctrine is → Momentum is everything. The moment you stop experimenting, decay begins. Think of AI as agriculture, not architecture. You plant, tend, and harvest continuously. Ignore it for one season, and the weeds (bad data, model drift, bureaucracy) just take over the field.

Every war leaves its ruins. AI transformations are no different – maybe even worse. The victorious companies learned that leadership is endurance, that money is fuel, and that data is both the battlefield and the weapon.
And I want you to walk away with one last truth, and that one comes from me and I found it to be liberating – AI is not a project, it’s an era.
Companies that treat it like a temporary initiative will drown in pilot purgatory. Those that build for permanence will write the new rules. It takes two years to see return, five to transform, and infinite humility to maintain.
Memento Mori, commander.
Remember that innovation dies as easily as it lives. But if you’ve built your army right – with data as your ground, governance as your shield, culture as your fuel – you can make the machine remember what you learned long after the hype is gone.
That, in the end, is victory. Not the press release. Not the keynote. It is the hum of the systems that work, because you built them to last.
Te saluto. Vale,
Marco
I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I just bring the pins and clean up the mess.
👉 Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google and the AI engines appreciate your likes by making my articles available to more readers.
To keep you doomscrolling 👇
- I may have found a solution to Vibe Coding’s technical debt problem | LinkedIn
- Shadow AI isn’t rebellion it’s office survival | LinkedIn
- Macrohard is Musk’s middle finger to Microsoft | LinkedIn
- We are in the midst of an incremental apocalypse and only the 1% are prepared | LinkedIn
- Did ChatGPT actually steal your job? (Including job risk-assessment tool) | LinkedIn
- Living in the post-human economy | LinkedIn
- Vibe Coding is gonna spawn the most braindead software generation ever | LinkedIn
- Workslop is the new office plague | LinkedIn
- The funniest comments ever left in source code | LinkedIn
- The Sloppiverse is here, and what are the consequences for writing and speaking? | LinkedIn
- OpenAI finally confesses their bots are chronic liars | LinkedIn
- Money, the final frontier. . . | LinkedIn
- Kickstarter exposed. The ultimate honeytrap for investors | LinkedIn
- China’s AI+ plan and the Manus middle finger | LinkedIn
- Autopsy of an algorithm – Is building an audience still worth it these days? | LinkedIn
- AI is screwing with your résumé and you’re letting it happen | LinkedIn
- Oops! I did it again. . . | LinkedIn
- Palantir turns your life into a spreadsheet | LinkedIn
- Another nail in the coffin – AI’s not ‘reasoning’ at all | LinkedIn
- How AI went from miracle to bubble. An interactive timeline | LinkedIn
- The day vibe coding jobs got real and half the dev world cried into their keyboards | LinkedIn
- The Buy Now – Cry Later company learns about karma | LinkedIn