Why the “K8s is dying” meme keeps spreading, what teams are actually switching to, and why Kubernetes still quietly runs the internet.

“Kubernetes is dead.”
If I had a dollar for every time I’ve seen that headline on Hacker News or Twitter (sorry, X), I’d finally cover this month’s EKS bill.
Here’s the thing: Kubernetes isn’t dead. It isn’t even limping. It’s quietly orchestrating billions of dollars of production workloads for banks, retailers, and cloud giants. The only thing dying is developer patience when a startup spins up Istio for a blog with 500 monthly visits.
This isn’t new. Mainframes? Declared obsolete for 40 years, yet still running Wall Street. Java? Pronounced dead every decade, yet still powering Android and banks. Kubernetes is the same undead raid boss: nerfed a hundred times, still wiping teams in 2025.
So why does the “death” narrative keep trending? Because some teams are bailing. Some jump to serverless. Others try lightweight orchestrators. Sohail Saifi even went viral arguing tech giants are moving to five alternatives. But here’s the twist: those tools aren’t K8s killers. They’re wrappers, abstractions, or situational picks, and most still ride on Kubernetes under the hood.
TLDR
Kubernetes isn’t dying. Teams are just:
- Adopting it without knowing why.
- Over-engineering workloads that fit on a VM.
- Rage-quitting after losing to YAML bosses.
- Switching to alternatives that solve different problems.
The “K8s is dead” myth
Every few months, someone drops the “Kubernetes is dead” grenade on Hacker News or Twitter (fine… X). The replies light up like a cluster under a DDoS:
- Skeptics: “Finally, we can stop wasting weekends debugging YAML.”
- Defenders: “Lol. Tell that to Netflix, Shopify, or literally every bank.”
Sound familiar? We’ve seen this movie before:
- “Mainframes are obsolete” → still quietly running global finance.
- “Java is dead” → still powering Android, Spring Boot, and half the enterprise world.
- “Kubernetes is dead” → meanwhile your grocery delivery checkout flow just spun through three pods before your receipt hit Gmail.
So why does the obituary keep trending? Because it feels true for devs burned by Kubernetes. If your only experience was wrestling kube-dns at 3AM or praying Helm charts would actually deploy, you want it to be over. Declaring it dead feels like justice.
But here’s the twist: your pain ≠ proof of death. Kubernetes hasn’t vanished; it’s just matured into invisible infrastructure. Like Linux, it fades into the background, powering everything while being cursed by the few who touch it directly.
Why teams are bailing (real reasons)
If Kubernetes isn’t actually dying, why do so many companies claim they’re ditching it? Spoiler: it’s not Kubernetes’ fault. Most of these teams never had a Kubernetes problem; they had a “we don’t know what we’re doing” problem.
1. Cargo cult adoption
Plenty of startups jumped on K8s because FAANG uses it. Never mind that FAANG also has armies of SREs and budgets bigger than your entire Series A. Copying Netflix without Netflix’s staff is like buying a Formula 1 car to deliver DoorDash.
2. Over-engineering small problems
Most apps don’t need the orchestration power of a battleship. If your entire stack fits on two VMs or a managed database, Kubernetes is overkill. I know a team that literally scrapped K8s for Docker Swarm. Why? They realized all they needed was a scheduler, not the CNCF Avengers. After switching, their shipping speed doubled.
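For scale: the whole “just a scheduler” setup can be one Compose file deployed with `docker stack deploy`. A minimal sketch, with a made-up service name and image:

```yaml
# docker-compose.yml, deployed to Swarm with: docker stack deploy -c docker-compose.yml blog
version: "3.8"
services:
  web:
    image: registry.example.com/blog:1.0.0   # hypothetical image
    ports:
      - "80:8080"
    deploy:
      replicas: 2                  # Swarm schedules two copies across the cluster
      restart_policy:
        condition: on-failure      # restart a replica if its container exits non-zero
```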
3. The “vertical wall” myth
Yes, K8s has a learning curve. But it’s not a vertical cliff, as Octavian Helm pointed out:
“The learning curve is steep but not vertical. That’s just a crude exaggeration.”
The real issue? Teams skipped fundamentals. They dove straight into service meshes and operators before mastering Pods, Services, and Deployments. The cluster didn’t fail them; their approach did.
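Those fundamentals are genuinely small. A minimal sketch of where “mastering the basics” starts: a Deployment (which manages the Pods) plus a Service; the app name and image below are placeholders:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2                      # run two identical Pods
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: registry.example.com/hello-web:1.0.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web                 # send traffic to Pods carrying this label
  ports:
    - port: 80
      targetPort: 8080
```

Service meshes, operators, and autoscalers all layer on top of objects like these; skipping this layer is how clusters end up “failing” their teams.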
Receipt: CNCF’s State of Cloud Native 2023 report shows Kubernetes adoption still growing. The loud “we ditched it” stories are the minority, usually from teams that misused it.

Serverless vs Kubernetes (choose your pain)
Whenever someone declares “Kubernetes is dead”, the next line is usually: “We switched to serverless and never looked back.” Sounds great, right? No nodes to patch, no kube-proxy meltdowns, no YAML that looks like cursed scripture. Just functions that scale like magic.
Except… serverless doesn’t actually remove complexity. It just shoves it under the floorboards.
Cold starts? Still there. Vendor lock-in? Worse than ever. Pricing? Enjoy explaining your Lambda bill when one hot endpoint runs 100k times an hour. And good luck debugging a distributed system where the “infrastructure” is hidden behind black-box magic.
Kubernetes has its own flavor of pain, yes: YAML, networking weirdness, and autoscaler tuning. But at least the abstraction is open. You can peek under the hood, dig into the docs, even patch the system yourself. With serverless, you’re at the mercy of your cloud provider’s secret sauce.
It’s not a matter of easy vs hard. It’s a matter of choose your poison:
- Serverless → vendor lock-in, cold starts, hidden complexity.
- Kubernetes → steeper learning curve, more knobs, but transparent and community-driven.
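To make the poison choice concrete: the serverless flavor of a single endpoint is a function definition you configure but don’t operate, roughly like this AWS SAM sketch (the resource, handler, and path names are made up), while the Kubernetes flavor is a Deployment like the one sketched earlier plus every knob you choose to turn yourself.

```yaml
# Hypothetical AWS SAM template for one HTTP endpoint
AWSTemplateFormatVersion: "2010-09-09"
Transform: AWS::Serverless-2016-10-31
Resources:
  CheckoutFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler         # made-up module and function
      Runtime: python3.12
      MemorySize: 512              # more memory also means more CPU, and a bigger bill
      Timeout: 10
      Events:
        CheckoutApi:
          Type: Api
          Properties:
            Path: /checkout
            Method: post
```

Neither file is hard to write. The difference is what you can inspect when it misbehaves: the Deployment runs on an open system you can debug; the function runs on whatever the provider is doing this quarter.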
As Mostafa Aboubakr put it:
“Serverless is way worse than K8s… At least in Kubernetes the abstraction is open and understood.”
Receipt: AWS Lambda performance docs. Cold starts are still a thing.
So no, serverless hasn’t killed Kubernetes. It’s just a different boss fight. Pick the one whose mechanics you’d rather grind.
Why K8s still dominates quietly
If Kubernetes were truly “dead,” you’d expect the big players to be writing eulogies in their engineering blogs. Instead? They’re posting about scaling Kubernetes to thousands of nodes, running multi-tenant clusters, and automating upgrades at ridiculous scale.
Look at Shopify. Their engineering blog openly documents how Kubernetes powers their platform. Netflix? They’ve been deep in container orchestration for years. Airbnb, Spotify, banks, telecoms, governments: the list of organizations that rely on K8s reads like a who’s who of infrastructure.
The reality is simple: Kubernetes is boring infrastructure now. That’s not an insult; it’s the ultimate compliment. Linux went through the same transformation. Once “edgy,” now invisible. Nobody debates whether to use Linux for production servers; you just do. Kubernetes is sliding into that role for orchestration.
The reason people think K8s is dying is because it’s fading from developer conversations. Most of us don’t run clusters by hand anymore. Managed services (EKS, GKE, AKS) or platforms built on top of K8s handle the heavy lifting. It’s less visible, but more dominant than ever.
Receipt: The CNCF landscape shows an ecosystem still exploding with K8s-related projects: operators, Helm charts, service meshes, observability stacks. Dead tech doesn’t spawn this kind of growth.

The real problem: cargo cult adoption
So if Kubernetes isn’t dead and isn’t going anywhere, why does it leave so many devs bitter? Easy: a massive percentage of teams never needed it in the first place.
Call it what it is: cargo cult adoption.
“We’re a serious startup, so we need Kubernetes.”
No… you need users. And maybe uptime. Kubernetes doesn’t magically give you either.
I’ve seen teams with a couple hundred active users spin up full-blown clusters with Istio, Prometheus, and custom operators, then spend six months tuning networking instead of shipping features. They could have run their entire product on Heroku for $50 a month. Instead, they were burning thousands and bragging about YAML manifests like it was a badge of honor.
The real complexity here isn’t just technical. It’s cultural. Once a company adopts K8s, suddenly you need infra specialists, on-call rotations, and hiring pipelines to support it. That overhead crushes small teams. It’s not that Kubernetes “failed” them; it’s that they copied the tooling of giants without the staff or context to make it work.
A friend put it perfectly:
“Adopting Kubernetes for our SaaS was like hiring a Michelin-star chef… to make instant ramen.”
That’s the essence of the problem. It’s not that K8s is too complex. It’s that too many devs adopted it because it was trendy, not because it fit their workload.
The 6 orchestration alternatives (and where they fit)
When people say they’re “moving away from Kubernetes,” they rarely mean they’ve banished orchestration from their stack. What they’ve really done is swap one kind of pain for another. Here are the six real alternatives I see teams moving to, and the trade-offs each one brings.
1. Serverless platforms (AWS Lambda, GCP Cloud Functions, Azure Functions)
Great for event-driven workloads and startups who don’t want to touch infra. But serverless ≠ simple: cold starts, opaque debugging, and vendor lock-in hit hard. Works best when your app fits the function-per-request model.
2. Lightweight Kubernetes distros (K3s, MicroK8s)
Designed for IoT, edge computing, or hobby projects. You get 80% of Kubernetes’ power with a fraction of the footprint. If full K8s is a battleship, K3s is a speedboat.
3. Managed Kubernetes (EKS, GKE, AKS)
The sweet spot for most mid-sized SaaS. You still get Kubernetes, but AWS/Google/Azure handle the undifferentiated pain: control plane ops, upgrades, certificates. It’s still K8s, just with less babysitting (see the cluster config sketch after this list).
4. Opinionated PaaS (Heroku, Render, Fly.io, Railway)
For small teams, these hit the “just ship it” vibe. You give up deep control, but you move fast. Perfect when your app doesn’t need fine-grained orchestration.
5. Nomad (by HashiCorp)
The quiet alternative. Nomad is simpler than Kubernetes and works for both containers and non-container workloads. Fewer bells and whistles, but much lighter to learn. Think: a no-nonsense scheduler without CNCF sprawl.
6. Custom internal platforms (built on top of Kubernetes)
FAANG-style. Most “we don’t use Kubernetes” claims actually mean: “we wrapped Kubernetes so devs never see it.” The infra team still runs K8s, but app devs see a polished internal PaaS.
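To make option 3 concrete, here’s roughly what “the provider runs the control plane” looks like as an eksctl config; the cluster name, region, and sizing below are placeholders:

```yaml
# eksctl create cluster -f cluster.yaml   (illustrative values only)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: demo-cluster
  region: us-east-1
managedNodeGroups:
  - name: general
    instanceType: m5.large
    desiredCapacity: 3             # worker nodes you size; the control plane is AWS's problem
```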
The point: none of these “alternatives” kill Kubernetes. They either wrap it, slim it down, or trade control for simplicity. In most cases, Kubernetes is still hiding in the basement, flipping the switches.
Decision table: when you should (and shouldn’t) use K8s
A lot of Kubernetes pain comes from using it in the wrong context. To make this bookmarkable (and editor-friendly), here’s a cheat-sheet for when Kubernetes makes sense, and when it’s just self-inflicted suffering.
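| Your situation | Reasonable default |
| --- | --- |
| A few devs, a few hundred users, one app | Opinionated PaaS (Heroku, Render, Fly.io, Railway) |
| Event-driven, function-per-request workloads | Serverless (Lambda, Cloud Functions, Azure Functions) |
| IoT, edge, or hobby projects | Lightweight distros (K3s, MicroK8s) |
| Mixed container and non-container workloads, lean ops team | Nomad |
| Mid-sized SaaS that wants K8s without control-plane babysitting | Managed Kubernetes (EKS, GKE, AKS) |
| Multi-tenant SaaS, compliance requirements, massive scale | Kubernetes (often wrapped in an internal platform) |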

The decision here isn’t about hype; it’s about what you actually need to ship and scale. If your team is three devs and a dog, Kubernetes is going to crush you. If you’re running multi-tenant SaaS with compliance requirements, it’s your best friend.
This is why so many “K8s is dead” takes miss the mark: they confuse misuse with obsolescence. Kubernetes isn’t dead, but it is the wrong tool for a ton of the teams using it.
What’s next for orchestration
If Kubernetes today feels complicated, the future won’t be “no orchestration”; it’ll be orchestration you don’t see.
Think about Linux. In the 90s, sysadmins argued about distros and kernels. Today, most devs never touch the kernel directly; they use Docker images, WSL, or a cloud VM that just “happens to run Linux.” Kubernetes is heading toward that same invisibility.
Platforms like Fly.io, Render, or AWS Copilot are already examples. They run orchestration under the hood, but you never see a Pod or Service definition. Even Nomad and K3s usually end up coexisting with Kubernetes in hybrid stacks. The direction is clear: you won’t manage K8s; you’ll consume it.
This also explains why big companies “leaving Kubernetes” is misleading. They’re not abandoning it; they’re abstracting it. They’re building custom internal platforms that wrap Kubernetes so devs interact with a nice API instead of 2,000 lines of YAML. It’s still Kubernetes, just hidden in the basement, running the lights.
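In practice that “nice API” is usually a short app spec the platform team expands into the real objects. A made-up sketch, every field name here hypothetical:

```yaml
# Hypothetical internal platform spec; tooling expands this into
# Deployments, Services, Ingress, autoscaling, and alerts on real Kubernetes.
app: checkout
team: payments
image: registry.example.com/checkout:1.2.3
port: 8080
replicas:
  min: 2
  max: 10
route: checkout.internal.example.com
```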
The open question is: will orchestration ever disappear completely? Probably not. Distributed systems still need schedulers, scaling, service discovery, and self-healing. Whether you call it Kubernetes, Nomad, or “whatever my cloud calls it,” those problems remain.
The next five years will look less like Kubernetes’ funeral, and more like its final form: the invisible layer powering everything, cursed only by ops teams while everyone else forgets it exists.
Conclusion
So, is Kubernetes dead? Not even close. What’s dying is the myth that every team needs to run it.
The reality is simple:
- Kubernetes thrives at massive scale and in enterprises that need trust, compliance, and ecosystem depth.
- Alternatives thrive when speed, simplicity, or edge constraints matter more than fine-grained orchestration.
- Most horror stories about Kubernetes aren’t about Kubernetes at all; they’re about cargo cult adoption.
And that’s why the “K8s is dead” meme refuses to die. For burned-out devs, it feels cathartic. For startups drowning in YAML debt, it feels like liberation. But if you zoom out, Kubernetes is on the same trajectory as Linux: invisible, boring, and quietly everywhere.
Here’s the slightly spicy take:
If you’re still hand-wrangling raw YAML in 2025, the problem isn’t Kubernetes. It’s you.
The future of orchestration isn’t about killing Kubernetes. It’s about hiding it so well that devs barely know it’s there. And in a way, that’s the final proof it won.
So what’s next? Probably you, in the comments, dropping your own “worst Kubernetes war story.” Because nothing bonds devs faster than shared suffering.
Helpful resources
- Kubernetes docs
- CNCF Cloud Native Landscape
- AWS Copilot CLI
- Awesome Kubernetes (GitHub)
- Shopify Engineering Blog
- Hacker News K8s debates
