Why the "Decentralized" Internet Keeps Breaking

The internet feels global and unbreakable, but it’s quietly centralized behind a handful of infrastructure giants. When one of them stumbles, the modern web forgets how to find itself.

Most people don’t realize this, but the internet that feels global, distributed, and impossible to bring down is actually balanced on the shoulders of a few infrastructure giants. When Cloudflare sneezed, half the internet caught a fever. November's outage is just the latest reminder: five major disruptions already in 2025. Every one of them exposes the same uncomfortable truth: we traded the ideal of decentralization for convenience and hands-off ops.

We wanted speed, we wanted free SSL certificates, and we wanted one-click CDNs. We got them. But the cost was fragility.

When a single DNS provider or cloud region buckles, it’s not just a few blogs that go offline. Payment gateways stop working. Logistics systems freeze. My smart doorbell probably stops recognizing my face. The internet doesn't "break": the servers are fine, the code is fine. The internet simply forgets where things are.

The "Good Old Days" were actually terrible

I see a lot of people on reddit (or X, or Bluesky, or whatever we're using this week) romanticizing the early web. They act like the 90s and 2000s were this utopia of distributed resilience.

I remember that era. It wasn’t resilient. It was chaos. In 2005, a small company’s "data center" was a server rack sitting in a closet next to the break room. It was vulnerable to power cuts, heatwaves, and (my personal favorite) the office cleaner unplugging the main server to plug in a vacuum cleaner.

Back then, you didn't aim for "five nines" (99.999%, roughly five minutes of downtime a year). You aimed for "let's hope the hard drive doesn't click." If you only had 9 hours of downtime a year, you were a wizard. 9 hours a month was more realistic.
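The gap between those targets is easy to quantify. A quick back-of-the-envelope calculation (a sketch, not tied to any particular SLA):

```python
# Downtime allowed per year at a given uptime percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(uptime_pct: float) -> float:
    return MINUTES_PER_YEAR * (1 - uptime_pct / 100)

print(f"three nines (99.9%):  ~{allowed_downtime_minutes(99.9):.0f} min/year")
print(f"five nines (99.999%): ~{allowed_downtime_minutes(99.999):.0f} min/year")
# three nines allows ~526 minutes (about 9 hours); five nines, ~5 minutes
```

In other words, 9 hours a year is roughly "three nines". Five nines leaves you about five minutes for the whole year.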

The UK’s DVLA still has parts of its website that are only open during office hours. That is the hangover of the "good old days."

The Devil’s Bargain

We fixed the vacuum cleaner problem by outsourcing everything to the cloud. We stopped building infrastructure and started building dependency stacks. We handed the keys to AWS, Google, Azure, and Cloudflare. In exchange, we got sleep.

But the nature of failure changed.

  • Then: failures were frequent, but local ("my server died").
  • Now: failures are rare, but catastrophic ("the internet died").

The internet isn’t centralized at the protocol level - TCP/IP is fine. It’s centralized at the convenience level.

The Knowledge Gap

The scarier part isn't just that the servers are centralized, it's that the knowledge is vanishing.

We have raised a generation of engineers who treat the cloud like a utility box they aren't allowed to open. "Click-ops" is the standard. You drag a repo into a window, click "Deploy," and magic happens.

Ask a mid-level dev today to explain the handshake between a browser and a server, or how DNS propagation actually works, and you’ll often get a blank stare. They know how to consume the API, not how the pipes are laid.
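That knowledge isn't arcane; most of it ships with the standard library. A minimal sketch of the machinery a browser sets up before it ever sends a request (no network needed; this is the same kind of context a real TLS handshake would use):

```python
import ssl

# A default client context: the machinery behind the browser's lock icon.
ctx = ssl.create_default_context()

# The handshake will refuse a server whose certificate chain doesn't
# terminate in one of the root CAs the OS ships with.
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True: a certificate is mandatory
print(ctx.check_hostname)                    # True: the name on the cert must match
print(f"{len(ctx.get_ca_certs())} trusted root CAs loaded from the OS store")
```

None of this is exotic. It is simply below the waterline of the abstractions most of us ship on.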

It’s gotten worse with AI. Ask ChatGPT how to set up a load balancer and it spits out a block of Terraform or YAML, which you paste in blindly. It works, so you move on. But you never learn why it works.

We are building complex systems using black-box instructions. When the abstraction leaks, when the "one-click" solution breaks and ChatGPT runs out of suggestions, nobody knows how to fix the plumbing anymore. We’ve forgotten how to be mechanics because we’ve spent the last decade just being drivers.

The "Decentralized" Myth

Here is the engineering reality check: The internet was never as decentralized as the idealists claim.

  • DNS: It’s a hierarchy. ICANN holds the root.
  • HTTPS: It relies on your OS trusting a specific list of Root CAs.
  • Routing: BGP depends on a handshake of trust between ISPs; one bad route announcement can black-hole traffic.

We are running on a distributed network that relies on central sources of truth. When one of those sources (like a root DNS or a major CDN) has a bad config push, the "distributed" web has nowhere to go.
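The hierarchy is visible in the name itself. A small illustration (plain string manipulation, no resolver involved) of which zones a full lookup walks through, most general first:

```python
def delegation_chain(name: str) -> list[str]:
    """Zones consulted when resolving `name`, from the root down.

    Purely illustrative: real resolvers cache aggressively and skip steps.
    """
    labels = name.rstrip(".").split(".")
    # "." is the root zone; each suffix is a successively narrower zone.
    return ["."] + [".".join(labels[i:]) + "." for i in range(len(labels) - 1, -1, -1)]

print(delegation_chain("www.example.com"))
# → ['.', 'com.', 'example.com.', 'www.example.com.']
```

Every lookup that isn't cached starts at the same root. That is a single source of truth, not a mesh.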

Boring is the new sexy

We keep building more apps, but we aren't building more infrastructure. We are just building taller penthouses on the same three skyscrapers.

The fix isn't some Web3 blockchain magic. The fix is boring, unsexy engineering.
It means:

  • Multi-CDN by default (because relying on one edge network is suicide).
  • Independent DNS roots (so we aren't all looking up the same phonebook).
  • Self-hosted fallbacks (a "break glass in case of emergency" static site).
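The multi-CDN idea reduces to a few lines of logic. A hedged sketch (the endpoint names are made up, and `is_healthy` stands in for a real health check you would run out of band):

```python
from typing import Callable, Sequence

def pick_endpoint(endpoints: Sequence[str],
                  is_healthy: Callable[[str], bool]) -> str:
    """First healthy endpoint wins; the last entry is the break-glass fallback."""
    for ep in endpoints[:-1]:
        if is_healthy(ep):
            return ep
    return endpoints[-1]  # self-hosted static site: always returned, never probed

# Hypothetical setup: two CDNs plus a self-hosted static fallback.
endpoints = ["cdn-a.example.net", "cdn-b.example.net", "static.origin.example.net"]

# Simulate cdn-a having a bad day:
print(pick_endpoint(endpoints, is_healthy=lambda ep: ep != "cdn-a.example.net"))
# → cdn-b.example.net
```

The hard part isn't the code. It's paying to keep the second and third options warm.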

But try explaining to a CFO that you need to double your infra budget to protect against an outage that might happen once a year. It’s a hard sell.

The Engineering Philosophy

The funny thing is we still talk about the internet like it is this mythical hydra with endless heads. Cut one off, two grow back. But anyone who has run real systems knows it behaves more like a very busy highway with three exits. When one closes, the entire city sits in traffic.

And this is where the philosophy needs to evolve. Real resilience is not a feel-good poster about decentralization. Real resilience is redundancy that feels expensive. It is discipline that feels slow. It is infrastructure that feels unnecessary right up until the day it becomes priceless. The outages will keep coming. The dependency stack will only grow taller. But the ones who take the boring path now will be the only ones still breathing when the next butterfly flaps its wings in a Cloudflare data center.

Nothing glamorous about it. But the internet was never saved by glamour. It was saved by people who understood the pipes.

Subscribe to Sahil's Playbook

Join Sahil Kapoor’s inner circle. Get instant access to every premium issue.