Should You Run Docker Compose in Production in 2026? Analysis
A critical look at running Docker Compose in production: community debates, real-world issues, and alternatives like Podman and Kubernetes.
A Hacker News thread debating whether to run Docker Compose in production in 2026 reveals sharp trade-offs. At 80 points and 58 comments, the discussion dug into simplicity versus robustness, firewall conflicts, and when to graduate to Kubernetes.
Docker Compose Production Viability in 2026
The linked article from Distr.sh argues that Docker Compose remains a viable production tool for many workloads, even in 2026. It counters the common wisdom that you must move to Kubernetes as soon as you leave your laptop. The author walks through scenarios where Compose is enough: small teams, single-node deployments, or legacy applications where orchestration overhead isn't justified. The HN comments paint a more nuanced picture, highlighting real pain points.
HN Community Debate: Firewalls and Alternatives
HN readers are engineers who have been burned by both Docker Compose and Kubernetes. The thread's energy comes from two camps: those who think Compose is fine for "normal CRUD services" and those who've hit walls with networking and state management.
One commenter captured the firewall friction perfectly:
"By design docker daemon creates and manages a set of firewall rules... restarting nftables... purges all the docker created rules and effectively breaks everything."
Another brought up the desire for a middle ground:
"I really want something that is Docker Compose but for Kubernetes... so that I can get to test behaviors when there are multiple copies of the software running together."
The thread also contains healthy skepticism: a third commenter simply wrote, "Should you have a turkey sandwich for lunch in 2026? I don't know buddy just do whatever."
Where Docker Compose Fails in Production
Docker Compose has a clear sweet spot, but its boundaries are often oversold. The firewall issue is a concrete example: if you manage your own host firewall, Docker's automatic iptables/nftables manipulation becomes a liability. Podman, with its rootless and daemonless architecture, sidesteps much of this. As one commenter noted, "If you're building a linux-y appliance and you need to run a few containers I think Podman is a much better and more ergonomic way of doing so."
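To see what's at stake, it helps to look at the chains the Docker daemon injects into the host firewall. The sketch below parses a captured `iptables-save` dump (an embedded sample, so it runs anywhere); on a real Docker host you would pipe `sudo iptables-save` instead of the sample text.

```shell
# List the chains Docker manages. A blunt firewall reload (e.g. restarting
# nftables) purges these, which is exactly the breakage the commenter describes.
# The dump below is a representative sample, not output from a live host.
sample_dump=':INPUT ACCEPT [0:0]
:FORWARD DROP [0:0]
:DOCKER - [0:0]
:DOCKER-ISOLATION-STAGE-1 - [0:0]
:DOCKER-ISOLATION-STAGE-2 - [0:0]
:DOCKER-USER - [0:0]
-A FORWARD -j DOCKER-USER'

# Chain declarations in iptables-save format start with ':'.
docker_chains=$(printf '%s\n' "$sample_dump" | grep '^:DOCKER' | cut -d' ' -f1)
printf '%s\n' "$docker_chains"
```

On a live host, `sudo iptables-save | grep '^:DOCKER'` shows the same set of chains; if a firewall reload removes them, published ports stop working until the daemon is restarted.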
Where Compose falters is in multi-node deployments, automated recovery, and configuration drift. Teams have run Compose successfully for years with a single docker-compose.yml and a simple CI pipeline. They don't need auto-scaling or service meshes. But the moment you need high availability, rolling updates without downtime, or fine-grained networking controls, Compose becomes a patchwork of scripts and manual steps. Kubernetes is overkill for many, but so is writing your own orchestration layer on top of Compose.
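That "patchwork of scripts" usually starts innocently, with something like the hand-rolled deploy script below. It is a sketch: it assumes a `docker-compose.yml` in the current directory and is guarded so it no-ops elsewhere.

```shell
#!/bin/sh
# Naive Compose "deployment": pull new images, recreate changed containers.
# There is no health gating and a brief downtime window while containers are
# recreated; that gap is what orchestrators close.
set -e
if command -v docker >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
  docker compose pull
  docker compose up -d --remove-orphans
  status="deployed"
else
  status="skipped: no docker or no docker-compose.yml here"
fi
echo "$status"
```

The moment you bolt on health checks, rollbacks, and multi-host coordination around a script like this, you are writing your own orchestrator.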
The article's original argument—that Compose is still relevant—is fair, but it dodges the real question: when does "simple enough" become "costly complexity"? The cost is not just infrastructure; it's the cognitive load of managing containers without the tooling standard in larger deployments.
Three Deployment Paths for Containers
If you're building a new project today, consider three paths:
- Single node, low criticality – Docker Compose (or better, Podman Compose) is fine. Use systemd for service management and periodic health checks. Example: a side project or internal tool.
- Multi-node but not massive – Look at Docker Swarm or Nomad. They provide clustering without Kubernetes complexity. Or use Compose with an NFS mount and a reverse proxy, but accept manual failover.
- Distributed, zero-downtime required – Kubernetes. Period. The learning curve is steep, but managed services like EKS or GKE reduce the burden.
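For the first path, a minimal systemd unit can supervise a Compose stack. This is a sketch: the unit name, service description, and project directory are placeholders.

```ini
# /etc/systemd/system/myapp-compose.service  (hypothetical name and path)
[Unit]
Description=myapp via Docker Compose
Requires=docker.service
After=docker.service network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Placeholder project directory containing docker-compose.yml
WorkingDirectory=/opt/myapp
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down
TimeoutStartSec=0

[Install]
WantedBy=multi-user.target
```

`Type=oneshot` with `RemainAfterExit=yes` lets systemd treat the detached stack as "active" after `docker compose up -d` returns; enable it with `systemctl enable --now myapp-compose`.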
Here's a practical comparison of resource declarations:
```yaml
# docker-compose.yml
# (the top-level "version" key is omitted; it is obsolete in the Compose Specification)
services:
  web:
    image: nginx
    ports:
      - "80:80"
    deploy:
      replicas: 3  # honored in Swarm mode only
  db:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data
volumes:
  pgdata:
```
```yaml
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pgdata
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```
The Kubernetes version is more verbose, but it buys you rolling updates, self-healing, and declarative health checks via probes. If you need those, the complexity is justified.
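Those health checks are not automatic, though: you declare them as probes on the container. A sketch, extending the Deployment above with illustrative endpoints and timings:

```yaml
# Added under the nginx container in the Deployment's pod spec
# (paths and intervals are illustrative, not prescribed values):
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /
    port: 80
  periodSeconds: 5
```

The liveness probe restarts a wedged container; the readiness probe keeps it out of Service load balancing until it responds, which is what makes zero-downtime rolling updates work.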
For the firewall issue specifically, don't treat --iptables=false as the default fix. Docker's own networking depends on the NAT and forwarding rules it creates, so disabling them without rebuilding them yourself can break published ports and container connectivity. Safer options are to account for Docker's chains in your own firewall configuration, test firewall reloads in staging, or switch to Podman when you need daemonless containers that fit more cleanly with nftables/ufw-managed hosts.
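One pattern Docker itself documents is to put custom restrictions in the DOCKER-USER chain, which Docker creates for user policy and does not overwrite, and to re-apply those rules after any firewall reload. A sketch, where the interface and subnet are hypothetical placeholders and a dry-run guard keeps the demo from touching a real firewall:

```shell
#!/bin/sh
# Sketch: keep custom policy in DOCKER-USER, Docker's hook chain for user
# rules, and re-apply it after firewall reloads. "eth0" and "203.0.113.0/24"
# are placeholder values; DRY_RUN=1 makes the demo print instead of apply.
DRY_RUN=1
# Drop traffic to published ports unless it comes from our trusted subnet.
rule="-I DOCKER-USER -i eth0 ! -s 203.0.113.0/24 -j DROP"
if [ "$DRY_RUN" -eq 1 ]; then
  msg="would run: iptables $rule"
else
  iptables $rule
  msg="applied: iptables $rule"
fi
echo "$msg"
```

Hooking a script like this into your firewall service's reload path (or a systemd drop-in) is far safer than disabling Docker's iptables management outright.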
Final Verdict on Docker Compose in Production
Docker Compose is still a valid choice for small-scale, single-host deployments. But be aware of its limits: firewall conflicts, stateful services, and multi-host setups will require extra tooling. If you already use a managed Kubernetes service and are happy, stay. And if you're on a single server running a few containers, Docker Compose is perfectly fine—just know when to graduate. The sandwich commenter has a point: pick the tool that matches your appetite.