We’ve been running a Kubernetes cluster on-premises for half a year. And we’ve spent that half year upgrading etcd, patching master nodes, and troubleshooting storage. Then we tried Google Kubernetes Engine and asked ourselves: why didn’t we do this earlier?
Self-managed: What it cost us
Over six months of operation, we spent an estimated 200 person-hours purely on infrastructure: upgrades from 1.5 to 1.6 and from 1.6 to 1.7, etcd backups, certificate rotation, Calico upgrades, storage troubleshooting…
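To give a sense of what just the etcd backups involved, here is a minimal sketch of a snapshot command; the endpoint and certificate paths are illustrative assumptions, not our actual layout.
# Take an etcd v3 snapshot from a master node; paths below are placeholders.
ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-$(date +%F).db \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key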
GKE: the master is free, and Google manages it
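# Create a three-node production cluster with auto-repair, auto-upgrade, and network policies enabled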
gcloud container clusters create production \
--zone europe-west3-a \
--num-nodes 3 \
--machine-type n1-standard-4 \
--enable-autorepair \
--enable-autoupgrade \
--enable-network-policy
Five minutes, and we had a production cluster with a fully managed control plane, auto-repair, and network policies. On-premises, the same setup took us two days.
What’s better in GKE
- Automatic upgrades: Google keeps the master version up to date
- Native load balancer: a Service of type LoadBalancer provisions a Google Cloud load balancer (see the sketch after this list)
- Stackdriver monitoring: logs and metrics are integrated out of the box
- Cluster autoscaler: nodes are added and removed automatically as demand changes (also sketched below)
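Two of these are easy to show concretely. The commands below are a minimal sketch, not taken from our setup: the Deployment name web, the target port 8080, the node pool default-pool, and the autoscaling bounds are assumptions; the cluster name and zone match the create command above.
# Expose an existing Deployment; GKE provisions a Google Cloud load balancer for it.
kubectl expose deployment web --type=LoadBalancer --port=80 --target-port=8080
kubectl get service web   # EXTERNAL-IP appears once the load balancer is ready

# Enable the cluster autoscaler on an existing node pool (bounds are assumptions).
gcloud container clusters update production \
  --zone europe-west3-a \
  --node-pool default-pool \
  --enable-autoscaling --min-nodes 3 --max-nodes 10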
Hybrid strategy
Stable workloads stay on-premises; new projects and experiments go to GKE. The Kubernetes abstraction lets us move workloads between the two environments with minimal changes.
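As a minimal sketch of what that looks like in practice (the context names, project ID, and manifests/ directory are assumptions), the same manifests can be applied to either cluster by switching kubectl contexts:
# List configured clusters, then point kubectl at one of them.
kubectl config get-contexts
kubectl config use-context onprem-production
kubectl apply -f manifests/

# GKE contexts follow the gke_<project>_<zone>_<cluster> naming; my-project is a placeholder.
kubectl config use-context gke_my-project_europe-west3-a_production
kubectl apply -f manifests/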
Managed Kubernetes is the right choice for most teams
If running infrastructure isn’t your core business, managed Kubernetes will save you hundreds of hours a year. GKE is the most mature option today; we’re also watching AKS and the upcoming EKS.