
Kubernetes & Containers

K8s for production. Not for demos.

Managed Kubernetes with GitOps, progressive delivery and production-grade observability. From dev to production with consistent configuration.

99.95%
Cluster uptime
<5 min
Deploy time
<30s
Auto-recovery
>80%
Resource efficiency

When Kubernetes and when not

K8s is a powerful tool, but it brings complexity. A quick decision matrix:

Scenario | Use K8s? | Alternative
1-3 services, simple stack | No | App Service, ECS, Cloud Run
5+ microservices | Yes | —
Multi-cloud requirement | Yes | —
Advanced scheduling (GPU, spot) | Yes | —
Service mesh need | Yes | —
Small team without K8s experience | No | Start simpler

Production-Ready Kubernetes

A production cluster needs more than kubectl apply:

Networking:

  • Ingress controller (nginx or Traefik) with TLS termination
  • Network Policies for pod-to-pod isolation
  • Service mesh (Istio/Linkerd) for mTLS and traffic management
  • DNS (ExternalDNS for automatic DNS registration)
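As a concrete starting point for pod-to-pod isolation, a common pattern is a default-deny ingress policy per namespace, with further policies allowing only the traffic each workload needs. A minimal sketch (the `payments` namespace name is illustrative):

```yaml
# Deny all ingress traffic to every pod in the namespace by default.
# Additional NetworkPolicies then whitelist specific flows.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: payments        # hypothetical namespace
spec:
  podSelector: {}            # empty selector = all pods in the namespace
  policyTypes:
    - Ingress
```

Note that Network Policies only take effect when the cluster's CNI plugin enforces them (e.g. Calico or Cilium).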

Security:

  • RBAC — least privilege for users and service accounts
  • Pod Security Standards (restricted, baseline)
  • Image scanning (Trivy) in the CI pipeline
  • Runtime security (Falco) for anomaly detection
  • Secret management (External Secrets Operator → Vault/Key Vault)
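Pod Security Standards are enforced per namespace via labels on the Namespace object. A minimal sketch applying the `restricted` profile (namespace name is illustrative):

```yaml
# Enforce the "restricted" Pod Security Standard in this namespace:
# pods violating the profile are rejected at admission time.
apiVersion: v1
kind: Namespace
metadata:
  name: payments             # hypothetical namespace
  labels:
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/warn: restricted
```

A pragmatic rollout path is to start with `warn` on existing namespaces, fix the violations it reports, and only then switch to `enforce`.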

Observability:

  • Metrics: Prometheus + Grafana (kube-prometheus-stack)
  • Logs: Loki or Elasticsearch
  • Traces: Jaeger or Tempo
  • Dashboards: cluster health, per-namespace, per-deployment
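With kube-prometheus-stack, application metrics are typically scraped via a ServiceMonitor. A sketch, assuming a Service labeled `app: api` exposing a named `metrics` port (both names are hypothetical):

```yaml
# Tell the operator-managed Prometheus to scrape the "api" Service.
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: api-metrics
  labels:
    release: kube-prometheus-stack   # must match Prometheus's serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app: api                       # hypothetical Service label
  endpoints:
    - port: metrics                  # named port on the Service
      interval: 30s
```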

Autoscaling:

  • HPA (Horizontal Pod Autoscaler) on CPU/memory/custom metrics
  • VPA (Vertical Pod Autoscaler) for right-sizing
  • Cluster Autoscaler / Karpenter for node scaling
  • KEDA for event-driven scaling (Kafka lag, queue depth)
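The simplest of these is an HPA targeting CPU utilization. A minimal sketch (the Deployment name and replica bounds are illustrative):

```yaml
# Scale the "api" Deployment between 3 and 20 replicas,
# targeting 70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api                 # hypothetical Deployment
  minReplicas: 3
  maxReplicas: 20
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

CPU-based scaling requires resource requests to be set on the pods; without them, utilization cannot be computed.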

GitOps with ArgoCD

Declarative deployment — desired state in git, ArgoCD synchronizes:

  1. Developer pushes Helm chart / Kustomize overlay
  2. ArgoCD detects change
  3. Sync: cluster state → desired state
  4. Health check: deployment healthy?
  5. Drift detection: if someone changes the cluster manually, ArgoCD reverts it

Self-healing cluster. Audit trail in git history. Multi-cluster management from a single ArgoCD instance.
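The flow above is driven by an ArgoCD Application resource pointing at the git repo. A sketch with automated sync, pruning, and self-heal enabled (repo URL, path, and names are hypothetical):

```yaml
# One Application = one deployable unit tracked by ArgoCD.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/deploy.git   # hypothetical repo
    targetRevision: main
    path: apps/api/overlays/prod        # Kustomize overlay (or Helm chart path)
  destination:
    server: https://kubernetes.default.svc
    namespace: api
  syncPolicy:
    automated:
      prune: true      # delete cluster resources removed from git
      selfHeal: true   # revert manual drift back to the git state
```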

Progressive Delivery

Argo Rollouts for low-risk deploys:

  • Canary: 5% → analysis → 25% → 50% → 100%. Prometheus metrics (error rate, latency) decide automatically.
  • Blue-Green: Instant switch with instant rollback.
  • Analysis Templates: Define metrics and thresholds. Rollout automatically stops on degradation.
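The canary progression above maps directly to steps in a Rollout resource. A trimmed sketch (names, image, and pause durations are illustrative; an AnalysisTemplate would hook into the pauses):

```yaml
# Shift traffic to the new version in stages, pausing between steps.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: api
spec:
  replicas: 10
  strategy:
    canary:
      steps:
        - setWeight: 5
        - pause: {duration: 5m}   # window for automated metric analysis
        - setWeight: 25
        - pause: {duration: 5m}
        - setWeight: 50
        - pause: {duration: 5m}
        # after the last step, the rollout proceeds to 100%
  selector:
    matchLabels:
      app: api
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
        - name: api
          image: registry.example.com/api:1.2.3   # hypothetical image
```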

Frequently asked questions

Do we need Kubernetes at all?

Not always. For simple applications, App Service/ECS/Cloud Run is sufficient. K8s makes sense from 5+ services, or when you need multi-cloud, custom scheduling, or advanced networking.

Managed or self-hosted Kubernetes?

Managed (AKS/EKS/GKE) almost always. Self-hosted K8s only for air-gapped environments. Managed eliminates roughly 80% of the operational overhead.

How do you handle multi-tenancy in a shared cluster?

Namespace isolation, Network Policies, Resource Quotas, RBAC. For stronger isolation: virtual clusters (vCluster) or dedicated node pools.
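Of these, Resource Quotas are the piece that keeps one tenant from starving the others. A minimal sketch for a hypothetical tenant namespace:

```yaml
# Cap aggregate resource consumption for everything in team-a's namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a          # hypothetical tenant namespace
spec:
  hard:
    requests.cpu: "10"       # sum of CPU requests across all pods
    requests.memory: 20Gi
    limits.cpu: "20"
    limits.memory: 40Gi
    pods: "50"
```

Once a CPU/memory quota is set, every pod in the namespace must declare requests and limits (typically provided via a LimitRange default), or it will be rejected.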

Do you have a project?

Let's talk about it.

Schedule a meeting