
Europe’s new hobby is stacking laws until the robots give up

If you were hoping that 2025 would be the year AI governance became simple, clean, or – God forbid – coherent, please close this tab, take a walk, and touch some grass.

Because Europe has done what Europe does best, that is to open the bureaucratic Matryoshka doll and reveal yet another, smaller, angrier doll inside.

First we had GDPR, the “don’t steal people’s data unless you want your soul sued out of your body” law.

Then came NIS2, the cybersecurity wet blanket.

Then the EU AI Act, which is kinda like the CE-marking manual rewritten for robots instead of teapots.

And now. . .

Because three layers of regulation weren’t enough, the Council of Europe dropped a fourth, the “Framework Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law”.

Yes. That’s the full name.

It’s so long it needs its own seatbelt.

We have reached the point of regulatory inception, where every AI rule is wrapped inside another rule, wrapped inside another rule, until the only ones who can still navigate the system are Brussels technocrats who haven’t touched a real computer since Windows XP.

But somehow… this insane nesting doll actually reveals the deeper truth of our era, that AI isn’t a technology problem alone. It’s a civilization problem. And Europe is building a legal exoskeleton around it.

And in this blog, I will peel back the layers of the dolls.

One by one.

And marvel at the beautiful, terrible, bureaucratic grotesquery.

So, let’s open the dolls now. Gently. Or violently. Whatever works.


More rants after the messages:

  1. Connect with me on Linkedin 🙏
  2. Subscribe to TechTonic Shifts to get your daily dose of tech 📰
  3. Please comment, like or clap the article. Whatever you fancy.

Doll #1 GDPR. The grandmother who still hits you with a wooden spoon

The GDPR was the first doll in the stack, the matriarch, so to speak. It’s the one who looks at Facebook the way your disappointed parents look at you on Christmas Eve. It came from an era when “data protection” was still something you could fix with a firewall and a stern warning. But GDPR wasn’t just a law. Nope-sure-ee, it was pretty much a cosmic event, because it forced every company to finally admit they had no idea what personal data they were storing or why. It made people think twice before installing a random app that demanded access to their location, microphone, blood type, browser history, and favorite pizza topping. And even now, years later, GDPR remains the foundational principle of Europe’s regulatory religion: data belongs to the people, not to corporations.

It’s also deeply, hilariously insufficient for generative AI.

Back when GDPR was being drafted, the biggest threats to privacy were Cambridge Analytica† and Mark Zuckerberg’s change of haircut. Now we have models that can recreate your face from a reflection in a spoon, or create a deepfake of you applying for a loan you didn’t know existed. GDPR laid the groundwork, but the world outgrew it, mutated past it, leapt over it. Europe needed something sharper.

† Cambridge Analytica was behind one of the largest data- and behavioral-targeting scandals in political campaigning, and its techniques were used by the 2016 Trump campaign.


Doll #2 NIS2. The exasperated parent screaming “Can you please secure SOMETHING?”

NIS2 arrived in our work lives like an exhausted dad walking into his teenager’s room: he sees the chaos and immediately starts throwing away the moldy plates of food. It’s the moment Europe collectively realized that half of its critical infrastructure still runs on Windows Server versions that should be considered archaeological artifacts. But unlike the other laws in this stack, NIS2 wasn’t interested in human-rights philosophy or ethics. It was interested in not letting ransomware gangs from random corners of the internet shut down entire hospitals because someone clicked a dancing-Weiner email.

It wanted hygiene. Real hygiene. Not the kind where you spray Febreze on a problem and pray. Suddenly, every company in the EU had to patch its systems and behave like a responsible adult with actual obligations to society. But for all its grumpy energy, even NIS2 couldn’t contain AI. Yeah, you can lock down servers, enforce risk assessments, and mandate audits, but you still can’t stop some evil genius from deploying an AI-enabled autonomous decision engine built on a dataset curated by a drunk DBA.

NIS2 controlled the infrastructure, but it couldn’t control the mind of the machine.

So, the next doll had to go deeper.


Doll #3 The EU AI Act. The manual for raising a robot without creating a monster

When the EU AI Act arrived, it was clear that Europe had had enough. Enough of “move fast and break things”. Enough of “training data is just whatever we found lying around”. Enough of AI systems deployed with the same thoughtful care as my Weiner swinging a chainsaw. And apparently, enough of “unbridled” innovation too. The EU did what it always does at the sight of a new revolution, which is launch another rulebook. It birthed the EU AI Act, which took a good, long look at the industry and said, “Sit down. You’re grounded. And you’re doing this my way”.

My way being “no way”, that is, because it stifled Europe’s ambitions from day one.

The Act tried to classify AI systems like they were fireworks. Some are safe, some need supervision, and some will blow your face off if you sneeze near them. It also established categories, obligations, and documentation requirements that are so suffocating that even seasoned compliance officers looked at them and said, “I’m not paid enough for this”.

Ever tried reading the entire AI Act?

Me neither. I’ll stick to ChatGPT’s summary.

It is the engineer’s law.

The product manager’s law.

The compliance officer’s law.

And it demands audits, technical documentation, robustness testing, risk assessments, dataset quality checks, and transparency requirements that force everyone and their mothers (if they’re into AI as well) to remember how their models actually work. It was Europe saying, “We don’t trust you, and history proves we’re right”.

Yeah, right. A history of inertia, that is. . . I say “Acta non Verba!”, but try explaining that to a technocrat.

The AI Act had one fatal limitation, though: it stopped at the border of the EU. But AI, well, it doesn’t.

So – sigh – Europe needed something bigger.

A treaty.

A global spiky mace.

Enter doll #4.


Doll #4 The Framework Convention. The new Geneva Convention nobody has heard of

The Framework Convention is where Europe stared into the AI abyss, snapped, and said. . . “uh, is the teleprompter working?”, then cleared its throat like a bureaucrat who slept through the briefing and muttered, “um, ok, so let’s do it from memory, uh. . . human rights are not optional, you mechanical goblins”.

The AI Act obsesses over technical compliance, but this Convention takes a more primal route. It states that AI cannot be allowed to interfere with human dignity, equality, autonomy, democracy, or the rule of law without facing severe consequences.

Unlike the AI Act, it wasn’t written by coders or product managers. It was written by the same people who draft treaties about torture, genocide, and fair trials (ICC, Netanyahu?). That’s how seriously Europe takes AI. And that should terrify you in the best possible way.

This treaty doesn’t care how fancy your architecture is. It doesn’t even give a damn about your novel inference pathway or your groundbreaking LLM agent swarm. But what it does care about is whether the tech harms people.

Does it discriminate? (ahem).

Does it profile citizens like livestock? (uh-huh, more uh-huh).

Does it manipulate voters?

Does it turn into a biometric panopticon? (hahahaha, sure).

Does it amplify injustice while CEOs clap on stage at conferences? (yup).

If the answer is yes – or “kind of” – or “well, technically…” – the Framework Convention is coming for you with the medieval force of European human-rights law behind it.

But unlike the AI Act, this treaty ain’t limited to the EU’s cosy 27-country bubble.

Oh no.

This thing is open to all 46 countries of the Council of Europe. That includes democracies that aren’t in the EU, like Norway, Switzerland, Iceland, and – yeah – even the UK, whose political class can barely regulate their own lunch orders but will still be dragged into this because human-rights obligations don’t vanish after Brexit. It’s also open to countries outside Europe entirely. The United States signed it as well, with a big “S”. Of course, Canada joined. Yes, Japan tapped in with a whole keiretsu. And even Mexico showed up with a bottle of Corona. It is the first global legally binding AI treaty, held together by duct tape and shared terror.

But the magic only really kicks in once a country ratifies it.

Signing is apparently not enough (have to remember that when the IRS asks me to swear I didn’t manipulate my income tax).

This isn’t like the AI Act, where once Brussels stamps it, you’re stuck with it. Human-rights treaties apparently operate like nuclear launch codes: the moment a minimum number of states ratify them, they detonate into binding law across the system. For the Framework Convention, the threshold is laughably small: five ratifications, of which three must be Council of Europe member states.

Five. That’s it.

And once those ratifications land, the treaty enters into force with a shockwave rolling across the continent.

When that happens, every ratifying country must update its national laws to enforce it. Governments must obey it, and courts must apply it. Corporations must also comply with it. And regulators gain new powers through it. Surely activists will use it. Undoubtedly, lawyers will feast on it. And if your AI deployment touches human rights in even the tiniest, most pathetic way, like, let’s say a hiring tool, a welfare scoring system, a police model, a school algorithm, a credit engine, an emotion detector, a biometric scanner, a chatbot that gaslights users into bad decisions – well, you suddenly find yourself inside the blast radius.


Doll #5 The ECHR. Humming quietly in the center

At the center of this matryoshka sits the quietly menacing core of the entire European legal universe: the European Convention on Human Rights. The ECHR is not a policy document, not a guideline, and certainly not a set of inspirational TED-Talk phrases about fairness. It is a legally binding weapon. It is the operating system for democratic restraint across 46 states. It is the mother reactor powering everything else. It is the. . . yeah, you get my drift.

This document was forged in the aftermath of the worst atrocities Europe had ever seen: WWII. It was built as a shield against the very real possibility that nations – given enough fear, power, or stupidity – would do unspeakable things to their own people. The ECHR is the reason governments can’t simply make a dissident disappear, racially profile entire populations (ask the Dutch Tax Administration), torture suspects in basements, or run national surveillance programs (heeey Palantir!).

But now something darker is happening. The horrors of the 20th century are being replaced by the horrors of the 21st: who gets flagged as a threat, who gets stopped by police, whose child gets placed in care, who gets a job interview, who gets deported, who gets watched. Get it? The new problems are delivered through machine-learning pipelines written by underpaid engineers working in offices with beanbags.

So the ECHR is being dragged – screaming, if it could scream – into the new century. Every right inside it is now being reinterpreted for an AI-saturated world. Article 8 on privacy now covers algorithmic surveillance, emotion recognition, digital tracking, bulk biometric harvesting, and whatever dystopian nonsense the next hackathon produces. Article 14 on discrimination applies when your algorithm decides that people with “foreign-sounding names” are a risk factor (hey Amsterdam!). Article 6 on fair trials applies when predictive policing tools poison the judicial process. And Article 10 on expression applies when algorithmic feeds distort political dialogue or silence voices.

In practical terms, it means that if your AI system – your credit-scoring engine, your hiring model, your sentiment classifier, your public-sector decision tool, your biometric access scanner, your predictive-policing toy, your whatever – touches human rights, you are dealing with a potential human-rights violation.

Then you’re dealing with national courts, constitutional courts, and – if you are truly unlucky – the European Court of Human Rights itself, where judges with zero knowledge of tech will carve your system apart.


The next doll is coming

This whole layered construction of laws is Europe’s mental coping mechanism for a world where AI moves like a sprinting dragon and the law shuffles behind it like a man trying to put on pants while already late for work. So to repel the threat, they stack rules like we stack dreams in Inception. GDPR guards the data. NIS2 keeps the wires from catching fire. The AI Act grabs the tech before it sprints into traffic. The Framework Convention steps in when real humans get hurt. And down at the very bottom, the ECHR hums away like the final dream level.

And yes, this is just the beginning.

Europe is nowhere near done.

Sigh.

It barely has any AI worth showing off on the global stage, but Europe absolutely sparkles when it comes to legislation. Because guess what happens when you pay tens of thousands of technocrats to sit together in glass buildings with bad coffee and unlimited policy-drafting permissions?

Yup.

They legislate.

Endlessly.

And with the enthusiasm of monks, but in their case the scrolls are 600-page PDFs about risk classifications and biometric oversight.

Europe might not build the AI of the future, but by God, it will regulate the hell out of whatever does.

So, just when you think four laws are enough: no-no-nooo, my dear naive friend, we are only at the appetizer phase. There will of course be updates, directives, rulings, revisions, sub-frameworks, meta-frameworks, frameworks about frameworks, and probably a regulation that regulates how we regulate regulations. AI liability rules. AI copyright rules. AI safety regimes. AI worker protections. AI energy caps. AI cybersecurity certifications. AI sovereignty doctrines. The dolls will keep stacking until the structure becomes so tall and absurdly ornate that even the AI will look up in confusion and type “Mama?”

But the hilarious truth is that this Russian doll model is the only thing standing between humanity and full-blown techno-feudalism. It’s messy, and academic, and slow enough to give glaciers self-esteem, but it is also one of the most ambitious legal architectures Europe has ever attempted.

It’s a magnificent menace.

Signing off,

Marco


I build AI by day and warn about it by night. I call it job security. Big Tech keeps inflating its promises, and I just bring the pins and clean up the mess.


👉 Think a friend would enjoy this too? Share the newsletter and let them join the conversation. LinkedIn, Google, and the AI engines reward your likes by making my articles available to more readers.

To keep you doomscrolling 👇

  1. I may have found a solution to Vibe Coding’s technical debt problem | LinkedIn
  2. Shadow AI isn’t rebellion it’s office survival | LinkedIn
  3. Macrohard is Musk’s middle finger to Microsoft | LinkedIn
  4. We are in the midst of an incremental apocalypse and only the 1% are prepared | LinkedIn
  5. Did ChatGPT actually steal your job? (Including job risk-assessment tool) | LinkedIn
  6. Living in the post-human economy | LinkedIn
  7. Vibe Coding is gonna spawn the most braindead software generation ever | LinkedIn
  8. Workslop is the new office plague | LinkedIn
  9. The funniest comments ever left in source code | LinkedIn
  10. The Sloppiverse is here, and what are the consequences for writing and speaking? | LinkedIn
  11. OpenAI finally confesses their bots are chronic liars | LinkedIn
  12. Money, the final frontier. . . | LinkedIn
  13. Kickstarter exposed. The ultimate honeytrap for investors | LinkedIn
  14. China’s AI+ plan and the Manus middle finger | LinkedIn
  15. Autopsy of an algorithm – Is building an audience still worth it these days? | LinkedIn
  16. AI is screwing with your résumé and you’re letting it happen | LinkedIn
  17. Oops! I did it again. . . | LinkedIn
  18. Palantir turns your life into a spreadsheet | LinkedIn
  19. Another nail in the coffin – AI’s not ‘reasoning’ at all | LinkedIn
  20. How AI went from miracle to bubble. An interactive timeline | LinkedIn
  21. The day vibe coding jobs got real and half the dev world cried into their keyboards | LinkedIn
  22. The Buy Now – Cry Later company learns about karma | LinkedIn
