
Your AI field manual 1 – Compliance and other pains

Why your AI project is already on fire

I wanted one quiet Friday. Just one. I thought I’d earned it after the last deliverable. It was a small miracle of peace before the next executive or customer decided we should “go full AI” like it’s a yoga position.

My plan was quite simple.

I’d have some coffee as usual, hit inbox zero thanks to my recent purchase of Martin AI – my personal AI assistant that manages my email and calendar and makes calls on my behalf – and I would pretend to still have agency. But then, at exactly 09:12, Martin betrayed me. It had scheduled a meeting and committed my entire team to a deliverable so deranged it might violate international law, and I think even two Geneva Conventions.

And the thing was that it sent the confirmation itself.

A self-booking, self-promising, self-incriminating masterpiece-of-shit-AI. It was a very polite burglary of my autonomy. And the thing is that I bought the lifetime VIP license, and I’m Dutch, which means I have to use it until every thread in the software is worn and torn.

Pausing rant . . .

People, we have reached a special phase of capitalism where every sentence begins with “AI-powered”. Our toothbrushes, our HR software, your smart toilet (not mine), they all hum the same hymn about “transformation” or “revolution”. LinkedIn is a wall of MBA prophets clutching their ring lights and promising salvation through AI, and every vendor deck tells us that AI will create efficiency, save managers from unproductivity, or even cure burnout (yes, I read that somewhere).

And we are all buying it.

Hard.

Roughly forty billion dollars a year disappears into the furnace because AI projects don’t deliver – unless you count smoke and ash as deliverables. Sometimes graphs, but mostly regret. A lot of AI projects fail to produce anything measurable beyond a slideshow about “lessons learned from Pilot_XYZ”.

But the algorithm isn’t the villain.

The tech works frighteningly well.

The problem is the people, the politics, the insecurity, and the compliance theatre performed by humans who’d rather build committees than pipelines. That last one is the deadliest force I have encountered.

And in this first part of my project management field manual for AI projects, I’m airing out the skeletons from my own AI project closet. Yes, I will be dissecting every trap, every exploding deliverable, and every “oh-god-is-it-in-production?” moment. I’ll map the minefield so that when it’s your turn to dance through the flaming hoops, you at least know which step detonates first.


More rants after the messages:

  1. Connect with me on LinkedIn 🙏
  2. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  3. Please comment, like or clap the article. Whatever you fancy.

The EU AI Act ate my weekend

Back when machine learning was still “that thing the data guy does”, my life was simpler. We sat very close to the server room, nice and warm, and the lights were always on – usually fluorescent – but we had fun. We had time to cosplay as our favorite Star Wars characters and clown around. That all changed when AI became sexy and we became the guys with the Potter-style wands who could magically fix every broken process and every narcissistic fantasy that gave an exec morning wood.

And explainability was optional.

I remember a huge ML project somewhere in a desert where we built a black-box model that explained absolutely jack shit. But it worked. And we delivered. Fast! No compliance, no policies, no legal troubles. You just threw together a few SHAP plots, made colorful slides, and executives nodded like bobbleheads on a dashboard (it helped that we were working with an Indian dev firm). Nobody cared how the black box worked, because it was, um, “innovative”.

End of story? Nah

Because now the grown-ups have shown up.

That’s what happens with every hype. Remember skateboarding? It went from bruised knees to suburban dads in helmets doing “ollies” behind their Volvos. Facebook was once the jungle of youth, until my aunt discovered animated GIFs. Then came craft beer, murdered by middle-aged men comparing hops like Pokémon cards, and TikTok followed when teachers and police officers started “relating” to students with their awkward dances. We innovators all wept. And now AI: born wild, raised by chaos, and domesticated (I call it neutered) by compliance officers.

Every rebellion eventually fills out a form.

Regulators.

Lawyers.

Privacy Officers.

Compliance Managers (including the latest vibe, the Chief AI Compliance Officer – CAICO).

Those are the people who read footnotes for fun. The EU AI Act arrived in 2025 like a freaking meteor: dense, fiery, and entirely unsympathetic to your agile sprint. It doesn’t ask for trust; it demands receipts.

And these guys are good at it.

And they ask for two things: explainability and auditability.

Explainability used to mean telling a pretty story about what your model might be doing. Auditability means proving exactly what it did, with timestamps, signatures, model IDs, and highly detailed data lineage that always makes your data manager cry.

If you’ve been involved in an AI project, you’ve certainly gotten this call: “Um, uh, I heard you’re the AI project manager? Right. We’d like to know why your algorithm denied this applicant a mortgage”.

You start sweating. Because you don’t know. Because the student who trained version 14.6.2 is now a crypto influencer somewhere in Bali, and maybe also because your documentation folder is named “final_final_realfinal.zip”.

But them regulators – I call ’em regurgitators because they don’t create anything new, they only chew up innovation, spit it back out covered in legal slime, and feed it to me while keeping close watch (and calling it “guidance”). They want the chain of custody from dataset to decision. They want to know who touched the data on 2025-08-12, which hyperparameters were tweaked, and who approved deployment. And if (god forbid) you have missed a step . . . enjoy a fine of up to €35 million or seven percent of global turnover, whichever is higher. For a billion-euro firm, the percentage wins: that’s €70 million.

If only you’d get Airmiles on those “compliance taxes”.

Across the Atlantic, the U.S. Federal Trade Commission is playing bad cop, and it has decided it is done watching you AI clowns juggle buzzwords. Its new hobby is vaporizing anyone who dares to say “AI” without evidence. Remember DoNotPay? Neither did I, but when I searched the web for “AI compliance fines” this one popped up as if it were paid promotion. This was the startup that called itself the world’s first robot lawyer.

Ah, now you remember it!

It promised to help people fight traffic tickets and small claims cases automatically, except that it couldn’t. Their “AI” was about as smart as a note.txt, just better marketed, so the FTC penalized them with a $193,000 fine for false advertising and pretending to be a real legal service. The crazy thing is that the website still works, and they even claim to have close to 9 million users, including someone called Tyrone Davis from Jacksonville who saved $150+ on a parking ticket. 👇

Then there was Automators AI, which sold “passive-income AI e-commerce empires”, whatever that means. And the FTC labelled them as a pyramid scheme wearing a hoodie. They convinced people to pay for “AI-run online stores” that didn’t actually run – nor sell – anything. The FTC hit them with $21 million.

So, in our day ’n age, this thing called auditability isn’t optional anymore. A defensible AI system means obsessive traceability: every dataset versioned, every experiment logged, every decision timestamped. If your data pipeline doesn’t have a black-box recorder, you’ll be the star of an FTC press release.
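
To make that black-box recorder less abstract, here is a minimal sketch of what one audit record per decision could look like. The field names and the `audit_log.jsonl` file are my own invention, not any standard, but the ingredients – timestamp, model ID, dataset version, input hash, approver – are exactly the receipts the regurgitators keep asking for.

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG = "audit_log.jsonl"  # append-only; in real life, ship this to tamper-evident storage

def log_decision(model_id: str, model_version: str, dataset_version: str,
                 features: dict, decision: str, approved_by: str) -> dict:
    """Write one audit record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        "dataset_version": dataset_version,
        # Hash the inputs so you can prove what the model saw without storing raw PII.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
        "deployment_approved_by": approved_by,
    }
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: the mortgage call from earlier, now answerable.
log_decision(
    model_id="mortgage-scoring",
    model_version="14.6.2",
    dataset_version="applicants-2025-08-12",
    features={"income": 43000, "postcode": "1011"},
    decision="denied",
    approved_by="jane.doe@bank.example",
)
```

Boring? Absolutely. But it answers that mortgage phone call in thirty seconds instead of thirty days.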

And the thing is that compliance costs are ugly bastards. For every coin you spend on your AI toy, add two more for governance, plus 10% for sheer paranoia. You’ll need a full-time lawyer to translate the EU AI Act into human speech, a privacy officer to cry about data lineage, a governance manager to write policies (that nobody reads), an ethical-AI-frameworks expert to explain the ever-present bias in your PowerPoint decks, and, because chaos loves company, a documentation necromancer to resurrect the missing version history you deleted when you got drunk after receiving their invoices.

You’ll hate it. Your CFO will of course faint. But in the end, it is still cheaper than hiring an army of lawyers after your chatbot accidentally discriminates against half of Europe.

The new frontier is audit-driven survival, and the battle is fought with policy PDFs and unplanned resignations. In short: compliance, explainability, and auditability are the main antagonists of your AI project.

But I have some splendid news for all-o-yous and your AI projects! Because – as it turns out – we won’t have to worry about the FTC no more.

Looky looky 👇


Middle management panic simulator 3000

Technology doesn’t kill projects. Middle managers do. Specifically, the ones who hear “AI transformation” and translate it to “my job just became obsolete”. You gotta know that these folks are trapped between the automation guillotine above and the rebellious employees below. Not a favorable position, I agree. So, to delay the inevitable, they’ve mastered the art of passive resistance. They attend every AI training session, they nod thoughtfully, then they go back to their teams and quietly say, “Let’s wait until this blows over. And whatever you do, do NOT volunteer to go first.”

Their fear is rational.

A 2025 study said that 64% of employees fear AI will make them irrelevant. Managers are even worse off: 44% expect lower pay, and 41% expect layoffs. And they’re right, since nearly half of their routine work is on the chopping block. AI eats reporting, scheduling, approvals – all the tiny rituals that made middle management feel powerful. Especially the new technology I wrote about earlier.

And no, it’s not the fancy “Agentic AI” stuff that is going to cause their extinction, because that will not be enterprise-ready any time soon. No, the real grim reaper is an algorithm called Browser Use, a polite little psychopath I dissected in The Post-Human Back Office blog on LinkedIn. This thing clicks, scrolls, fills forms, never stops, and devours tasks. HR, Finance, Admin – doesn’t matter. And unfortunately, it is coming soon. Not in ten years. In one.

So yeah, their fear isn’t misplaced. It’s just early.

So they fight back in silence. They claim “data quality issues”. They delay pilot access. They write 60-page risk memos nobody reads. I call it ‘death by paperwork’ because it leads to lots of deadly papercuts. The tragedy is that you need them. They are the translators between vision and reality, so if you choose to ignore them – or go over their heads – your “enterprise transformation” becomes a $3-million prototype collecting dust.

And I’ve found the cure, and it is not another town hall. It is called empathy, but with an edge.

Just hear me out.

Buy them lunch. Not to bribe them (though that might work in some cases – I don’t judge, the end justifies the means), but to ask them what they hate about their jobs. Everyone hates some part of their job. Mine is having to pamper people too much during these transformational projects. But I learned to listen as if their fear is data, because in fact it is. Then weaponize reverse mentoring.

Pair them with the employees, interns, or even students (which is what I do) who actually know how to use AI responsibly. The old guard provides wisdom, the young ones provide working shortcuts. Together they build fragile, weird trust.

McKinsey’s framework is a good one. It says Define, Design, Equip, Communicate, Measure – or something ‘consultative’-looking along those lines – and it finally earns its pay here. Define what “better” means. Design the org chart around it before you touch code. Equip people with real support, not e-learning videos narrated by a monotone HR bot. Communicate honestly. Measure adoption, not installations.

And if you find it hard to remember, I have turned it into a mnemonic.

Here you go.

Desperate Directors Enable Corporate Massacre

Because just flaunting dashboards at them doesn’t fix morale.

Middle management isn’t obsolete (it’s actually mutating). The ones who adapt become coaches for hybrid human-machine teams. The rest become LinkedIn thought leaders by January.


Shadow data and other crimes against common sense

Every big company has at least one Slack channel where employees trade prompt hacks for their personal AIs as if it’s Fight Club. That’s called Shadow AI. It’s the underground resistance against corporate IT – read: Shadow AI isn’t rebellion it’s office survival | LinkedIn – and it can be a killer for your next AI project.

How? Read on . . .

It all starts innocently. Someone copies a sales dataset into ChatGPT “just to summarize”. Then marketing dumps the customer list into Canva AI for “creative inspiration”, and soon after, half your intellectual property is living rent-free on servers you cannot audit.

You can’t stop this with threats though. I know you all tried.

Block one tool, and five new ones appear. Before you know it, you’re playing whack-a-mole. People want convenience more than they want compliance. That’s nature. And the solution is quite straightforward: make the safe way easier than the risky one. Build internal sandboxes with approved APIs. Give them a playground before they build one under your nose.
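
What “make the safe way easier” can look like in practice: a thin internal gateway that everyone is allowed to call, which logs every prompt and strips the obvious stuff before anything leaves the building. A rough sketch, assuming a hypothetical internal endpoint (`ai-gateway.internal.example`) and a deliberately naive one-regex redaction – yours will need far more than that.

```python
import json
import logging
import re
from datetime import datetime, timezone

import requests  # assumes your approved model sits behind an internal HTTP endpoint

APPROVED_ENDPOINT = "https://ai-gateway.internal.example/v1/chat"  # hypothetical
logging.basicConfig(filename="prompt_audit.log", level=logging.INFO)

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def safe_completion(user: str, prompt: str) -> str:
    """Route a prompt through the approved endpoint, logged and lightly redacted."""
    redacted = EMAIL.sub("[REDACTED_EMAIL]", prompt)  # naive; extend for your own data
    logging.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "prompt": redacted,
    }))
    resp = requests.post(APPROVED_ENDPOINT, json={"prompt": redacted}, timeout=30)
    resp.raise_for_status()
    return resp.json()["completion"]
```

If the sanctioned playground is one import away and the forbidden one needs a credit card and a VPN trick, nature works for you instead of against you.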

But if you can’t, because of some ‘regulation’ or simply because taking a decision is too difficult in your organization, just take out your bible and try exorcism. Though I’m telling you: it will be your head that spins 360 degrees and does the projectile vomiting.

And to make your situation worse, your software vendors sneak AI into everything and activate it without your consent. Try keeping Microsoft Copilot Chat (the free version) out of your organization. It just keeps popping up its ugly head like a zit on an adolescent. And before you know it, your CRM “predicts churn”, your HR system selects the best candidates based on nothing more than “vibes”, and your presentation software “auto-writes insights” about your corporate strategy. What it really does is hoard your data, acquire telemetry, use that to retrain the model, and occasionally sell it off to paying customers (they call them ‘selected partners’). And you’ll never know what’s really going on with your data, because the model and the architecture are “proprietary”.

This is how bias scales in your organization when you don’t control the virtually uncontrollable.

A sales bot over-targets high-income males because the dataset said they’re “low churn”. A résumé screener downgrades foreign names. A credit model punishes single parents because their “spending patterns deviate” too much. A performance evaluator downgrades remote workers because their webcams don’t smile enough. A pricing algorithm charges neighborhoods differently based on postal-code “risk”. And a fraud detector red-flags small merchants who don’t fit the statistical mold.

The fix for this mess is boring – yeah, here you go – governance frameworks, periodic bias audits, and a permanent “Responsible AI” team whose job is to ruin marketing’s optimism. So yeah, even though they can feel like a killer to your project, you need internal watchdogs who log every variable like their pension depends on it. Because it does, when you live in the EU.

Success means shipping without lawsuits. Measure adoption, satisfaction, and revenue impact. If your AI project can’t tie itself to real numbers, it’s only a demo with better branding.

And let’s not forget reproducibility.

You should be able to resurrect any model, any decision, any day. Your audit trail should read like a crime novel where every clue and timestamp is accounted for and there are no missing pages. The companies that manage to survive the next wave of regulation will be the ones that treated documentation like religion.
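
A sketch of what “resurrect any model, any day” requires at minimum: a manifest written at training time, not reconstructed in a forensic dig afterwards. It fingerprints the data and pins the code and configuration. The field choices here are mine; the point is that every question an auditor asks should have exactly one answer.

```python
import hashlib
import json
import subprocess
from datetime import datetime, timezone

def dataset_fingerprint(path: str) -> str:
    """SHA-256 of the raw training file, so 'which data?' has one answer."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def write_manifest(dataset_path: str, hyperparams: dict, seed: int,
                   out: str = "run_manifest.json") -> None:
    """Record everything needed to re-run this exact training job."""
    manifest = {
        "created": datetime.now(timezone.utc).isoformat(),
        "dataset_sha256": dataset_fingerprint(dataset_path),
        # Pin the exact code that trained the model.
        "git_commit": subprocess.check_output(
            ["git", "rev-parse", "HEAD"], text=True
        ).strip(),
        "hyperparameters": hyperparams,
        "random_seed": seed,
    }
    with open(out, "w") as f:
        json.dump(manifest, f, indent=2)

# Example (assumes you run this inside the training repo):
# write_manifest("data/applicants.csv", {"lr": 0.01, "max_depth": 6}, seed=42)
```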

Sigh.


How to herd cats through fire drills

So, now you know your project’s on fire. But what’s the plan?

You panic, pray, pivot?

None-o-that, mate.

You start with policy.

You stop all builds until governance exists on paper and it is written in blood. Mostly yours, but that depends on your ability to sway (threaten) your project team.

How do you go about building this paper-pushing, lip-syncing machine?

It happens in three boring phases. . .

Phase one → assemble the holy trinity. Get legal, IT security, and data privacy at the table. Lock them in a room until they agree on an AI governance approach. That’s your bible now. No project touches production without it.

Ok, to be honest, you’ll probably have to do most of the grunt work yourself, because oftentimes you’ll find those ‘specialists’ have never heard of the stuff you want from them anyway. For inspiration, just search for an AI governance framework. There are a couple of ’em: the OECD AI Principles, something from NIST I never used, the Ethics Guidelines that accompany the EU AI Act, and ISO 42001. I must say I like Google’s AI Principles + Model Cards framework myself, and for government/NGO use I prefer IAMA (a short acronym for a very long title), but ultimately it is you who needs to feel comfortable with the tool.

And yes. It is a tool. It’s not scripture. Remember that.

Then go find your scared managers and recruit them as your allies. Offer them survival, not salvation. Pair them with younger staff who understand the tech but not the politics. Call it “reverse mentoring” or whatever you fancy. Whatever you’re calling it, it’s actually cultural chemotherapy.

Phase two → pilot small. Pick an energy-sucking process like expense approvals or invoice matching (yawn) – you know, something nobody will mourn if it dies. Build the audit trail from day one. Version every dataset. Record every decision. Communicate everything. When it works, brag modestly. When it fails, publish the autopsy.

Transparency builds trust.

Phase three → scale with structure. Establish the AI Governance Committee as a stage-gate process, not a loose collection of processes. No model goes live without sign-off. Period. Implement continuous monitoring using platforms like Arize or Truera, or the ones built into Azure ML (model monitoring), AWS (SageMaker Model Monitor/Clarify), and Google (Vertex AI Model Monitoring).
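
And if you want to know what those monitoring platforms actually compute before you pay for one: the core of drift detection is small. Here’s a rough sketch of a population stability index (PSI) check on a single feature; the 0.2 alarm threshold is a widely used rule of thumb, not gospel, and the data below is simulated.

```python
import numpy as np

def psi(reference: np.ndarray, production: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between training-time and live feature values."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    prod_pct = np.histogram(production, bins=edges)[0] / len(production)
    # Clip to avoid log(0) on empty buckets.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    prod_pct = np.clip(prod_pct, 1e-6, None)
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

rng = np.random.default_rng(0)
train_scores = rng.normal(0.0, 1.0, 10_000)  # what the model saw in training
live_scores = rng.normal(0.4, 1.0, 10_000)   # what production looks like today

score = psi(train_scores, live_scores)
if score > 0.2:  # common rule of thumb: above 0.2 means significant drift
    print(f"PSI={score:.2f} – wake up the Responsible AI team")
```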

Invite auditors early. They’re less scary when you bring them in first, not just before launch.

And after all this talk, by month twelve, AI becomes normal.

Yes, it takes this long.

And you’ll see managers evolve from task supervisors to system coaches. Every employee learns basic AI literacy. Performance reviews now include “responsible automation contribution”.

Blessed are the compliant for they shall inherit the future (Marco 5:5).

I have learned myself to treat compliance like it’s oxygen. Sure, it slows you down – a lot. But breathing’s nice. Skip it, and you’ll be the next press release, written by regulators who need a new trophy.

The future belongs to the paranoid. The ones who assume every model will fail, every dataset is biased, and every success will be audited.


The exorcism ritual we call bias mitigation

There’s a sound I can’t unhear. It is the hollow applause at a corporate town hall after someone says “AI is a journey we take together”. No, Karen, it is not a journey. It’s a haunting. Every company is an old house filled with the ghosts of past projects, and HR keeps hiring exorcists who bill by the slide.

They call it transformation. And I call it collective possession.

Somewhere between the CEO’s “strategic vision” and your own late-night “prompt engineering workshop”, your entire workforce got caught in a séance with a language model where nobody is really leading and everyone’s pretending to be the medium.

And then there’s HR, sharpening their PowerPoints like silver crosses. They’ve discovered “AI literacy”, which means you now have twelve extra mandatory training sessions where an underpaid AI “consultant” teaches you how to write “effective prompts” while the real demons – bias, drift, bad data – laugh at you from the haunted basement.

We all know HR doesn’t want AI literacy.

They want HR literacy for AI. They want control – over hiring, over compliance, over whose emails get flagged as “low empathy”. Their new favorite phrase is Responsible AI, and with it they’re basically admitting that they’re mighty terrified of the regulator, but even more terrified of being left out of the meeting.

A 2025 survey found that 72% of companies now have some form of “Responsible AI principles”.

Yet, only 9% have actual enforcement.

The rest have PDFs full of clip-art ethics. I’ve seen one that literally listed “Don’t be evil” as a bullet point, like Google in therapy. Another talked about “algorithmic empathy”, which sounds like a psychotic episode caught in code.

And the real compliance work sits in a forgotten folder named “AI-docs-v7-backup.zip”. It contains fifteen PowerPoints, a README nobody read, one spreadsheet tracking “ethics KPIs”, and a Fitbit for tracking guilt if you haven’t met any of ’em.

What ethics exorcism (try pronouncing this three times, quick) actually looks like is this.

You build an “AI Ethics Committee”. Everyone claps. Then you realize it meets once a quarter, takes no minutes, and has the authority of a horoscope. You sigh. Then the CTO shows a demo. The legal team mumbles about liability. The CISO pretends to look interested. The meeting ends with an action item: “Schedule next meeting.”

In other words, nothing happens, but responsibly.

And you’re not alone. Even the class valedictorian of virtue – we all know one – the one who turned in a 5,000-word essay on Ethics with a capital E, even he managed to trip over his own halo. I’m talking about an earlier post on LinkedIn about how Amsterdam’s Ethical AI fairy tale went spectacularly tits up.

Then there’s the HR paradox.

They love AI for recruiting until the algorithm learns their dirty secret, that half their hiring decisions are based on vibes and alumni networks. When the machine starts rejecting résumés from people named “John” because statistically Johns underperform, HR freaks out.

Suddenly it’s bias. Suddenly they rediscover humanity.

Bias audits, by the way, are the new religion.

Everyone runs them, few understand them. You’ll see proud posts on LinkedIn about “bias-free models”, usually accompanied by graphs that look like modern art. But hear me out . . . the truth is uglier. No dataset is ever neutral. Every click, label, and annotation carries a whiff of the past.

You cannot detox from an overdose of bias. You can only try to contain it.
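
Containing it starts with measuring it, and the core measurement is embarrassingly simple: compare outcome rates across groups. A minimal sketch with invented column names, computing a disparate impact ratio; the 0.8 cut-off is the US “four-fifths” rule of thumb from hiring guidance, borrowed here as a smoke alarm, not a legal standard.

```python
import pandas as pd

def disparate_impact(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Ratio of the lowest group's approval rate to the highest group's."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical decision log: one row per applicant, 1 = approved.
decisions = pd.DataFrame({
    "gender": ["f", "f", "m", "m", "m", "f", "m", "f"],
    "approved": [0, 1, 1, 1, 1, 0, 1, 0],
})

ratio = disparate_impact(decisions, "gender", "approved")
print(f"disparate impact ratio: {ratio:.2f}")
if ratio < 0.8:  # four-fifths rule of thumb
    print("audit flag: approval rates diverge across groups")
```

No séance required, and the model’s feelings are irrelevant.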

I once sat through a “bias remediation workshop” led by a consultant who introduced himself as an “AI empathy coach.” His entire session consisted of asking the model how it felt about different demographic groups. The model said, “I don’t feel emotions”. He nodded like a guru achieving enlightenment and declared the system unbiased.

Total noob. But that’s corporate AI for you.

Now layer on the regulatory angst.

The EU AI Act wants conformity assessments and human oversight logs. It wants your AI to check in like a DUI parolee with an ankle bracelet and ignition interlock. And the FTC’s Operation AI Comply will fine you for calling your chatbot “intelligent” if it can’t prove it.

So HR does what HR does best: it panics professionally. They hire “AI Compliance Officers” with fluffy LinkedIn bios full of ™, © and ®’s. They print posters drawn up by a sketch artist stating that “Ethics is everyone’s responsibility”, and then forget to check whether the printer used copyrighted stock photos.

Behind the curtain, nothing changes.

AI gets bolted onto old workflows the same way I put duct tape on a leaky pipe. Job descriptions still read like “Seeking visionary data wizard to drive synergy through algorithmic excellence”. What should actually happen is boring but necessary: you need to build an AI compliance architecture (yes, I made it bold).

This is plumbing.

It is unseen and unsexy, but vital. It is the place where every decision gets logged and every dataset has a version. This architecture is actually a preventive maintenance algorithm for your reputation.

And yes, it kills speed. But so does a €35-million fine.


A love story written in blood and invoices

Now we’re at the end of two bloody parts of how your AI projects can get torpedoed.

It always ends the same way.

With a meeting invite. Subject line “Post-incident review”.

You dare not open it, but still you click, knowing full well you’re about to spend two hours pretending to be surprised that the AI project exploded exactly as you predicted.

The universal truth in tech is that every innovation eventually becomes paperwork. Compliance is the afterlife of ambition. The place where good ideas go to atone.

The AI journey you undertook is a battlefield covered in receipts by now. Every pilot left behind a trail of documents and versioned Git branches that will eventually be dug up by your successor as archaeological evidence of your hubris. The auditors arrive like paleontologists, with their toothbrushes, to wipe away the dust that hides the skeleton of your intent. These people will never find “transformation”, of course. They find “traceability”, “explainability”, and “please don’t fine us” post-it notes.

You thought AI would make you efficient, but your compliance officer turned you into an archivist with PTSD.

But hear me out.

I have found a good rationale for spending more time on compliance: you’re not only working your ass off for the government, but chaos unlogged is chaos repeated.

Documentation is pain now, but salvation later. The audit trail tells the story of why your shit went wrong so you can at least fail better next quarter.

I’ve seen this up close. A fintech firm lost $12 million in fines because their decision engine couldn’t explain itself. Another, same industry, spent $2 million on compliance infrastructure before launch and walked away untouched when regulators came knocking.

The difference = paranoia.

You can buy speed or safety. Not both. Pick one. And for the love of quarterly survival, pick safety.

Let’s talk structure now. This is the part nobody funds until they’re already bleeding. You need an AI Compliance Officer – full-time, not the overworked lawyer who already handles privacy, which is what I see all my clients do when I first get there. And these people need teeth, budget, and veto power. If your compliance officer can’t say “no” to the CTO, you don’t have compliance; you have a hole worth millions to explain.

And yes, you also need a Governance Committee. It needs to be cross-functional – no, not the data analyst, the solution architect, and the scrum master, but cross-departmental and cross-tempered. They meet monthly to review high-risk systems. They speak spreadsheet and prophesy trauma. The word they will use most is “evidence”.

You also need a Responsible AI team embedded in development. And for the love of anything holy, their job isn’t to slow you down; it is to keep you out of the headlines. They wear lab coats, run bias tests, track drift, and document model lineage. They are in fact your internal CSI, though it is a widespread delusion that anyone in this job ever looked cool in sunglasses.

And – lest we forget – you also need Model Risk Management.

This is the kind that re-validates everything before it touches production. Their motto is trust, but verify, then verify again – preferably written in Latin and printed on their shirts.

They’re unpopular, yes, but so are seatbelts.
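
If it helps, think of Model Risk Management as a function that has to return True before anything ships. A toy sketch – the thresholds and check names are mine, not any standard – that reuses the drift and bias numbers from the earlier sketches:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ValidationReport:
    auc: float                       # hold-out performance
    psi: float                       # drift vs. training data (see the monitoring sketch)
    disparate_impact: float          # bias ratio (see the audit sketch)
    signed_off_by: Optional[str]     # a human, with a name

def release_gate(r: ValidationReport) -> bool:
    """Every check must pass. No override flag, on purpose."""
    checks = {
        "performance": r.auc >= 0.75,
        "drift": r.psi <= 0.2,
        "fairness": r.disparate_impact >= 0.8,
        "human_signoff": r.signed_off_by is not None,
    }
    for name, ok in checks.items():
        print(f"{name}: {'pass' if ok else 'FAIL'}")
    return all(checks.values())

report = ValidationReport(auc=0.81, psi=0.05, disparate_impact=0.72, signed_off_by=None)
assert release_gate(report) is False  # verify, then verify again
```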

This all sounds exhausting, but there’s no other way.

And adaptation starts with brutal honesty. Stop lying to yourself about “AI maturity” when you don’t even have an AI strategy. Most organizations I speak with have a handful of pilots, some half-documented workflows, and a Slack channel named “#future-ideas” or whatever.

Just call it what it is: experimentation with delusions of grandeur.

Then start small and mean it.

Audit one process end to end. Log everything. Force traceability into your DNA. Set up post-market monitoring that actually runs. Report drift. Admit bias. Tell regulators what you fixed instead of pretending you had nothing to fix. Transparency is protection.

I remember a CEO once asked me, “What’s the ROI on compliance?” I told him, “It’s staying in business”. He laughed. I didn’t. Okay, I did a little.

The hidden cost of compliance is your sanity. Endless documentation. Reviews. Risk matrices. The sheer fatigue of proving you’re not evil. But that exhaustion is cheaper than public shame. The alternative is front-page headlines, frantic lawyers, and an inbox full of “urgent” from PR.

Te saluto. Vale,

Marco

👉 Fide, sed verifica, deinde iterum verifica


I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I just bring the pins and clean up the mess.


👉 Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google, and the AI engines appreciate your likes by making my articles available to more readers.

To keep you doomscrolling 👇

  1. I may have found a solution to Vibe Coding’s technical debt problem | LinkedIn
  2. Shadow AI isn’t rebellion it’s office survival | LinkedIn
  3. Macrohard is Musk’s middle finger to Microsoft | LinkedIn
  4. We are in the midst of an incremental apocalypse and only the 1% are prepared | LinkedIn
  5. Did ChatGPT actually steal your job? (Including job risk-assessment tool) | LinkedIn
  6. Living in the post-human economy | LinkedIn
  7. Vibe Coding is gonna spawn the most braindead software generation ever | LinkedIn
  8. Workslop is the new office plague | LinkedIn
  9. The funniest comments ever left in source code | LinkedIn
  10. The Sloppiverse is here, and what are the consequences for writing and speaking? | LinkedIn
  11. OpenAI finally confesses their bots are chronic liars | LinkedIn
  12. Money, the final frontier. . . | LinkedIn
  13. Kickstarter exposed. The ultimate honeytrap for investors | LinkedIn
  14. China’s AI+ plan and the Manus middle finger | LinkedIn
  15. Autopsy of an algorithm – Is building an audience still worth it these days? | LinkedIn
  16. AI is screwing with your résumé and you’re letting it happen | LinkedIn
  17. Oops! I did it again. . . | LinkedIn
  18. Palantir turns your life into a spreadsheet | LinkedIn
  19. Another nail in the coffin – AI’s not ‘reasoning’ at all | LinkedIn
  20. How AI went from miracle to bubble. An interactive timeline | LinkedIn
  21. The day vibe coding jobs got real and half the dev world cried into their keyboards | LinkedIn
  22. The Buy Now – Cry Later company learns about karma | LinkedIn
