GitOps and Progressive Delivery — Safe Deployment in 2026

17. 01. 2026 · 4 min read · Core Systems · Development

Kubectl apply in production? Please no. In 2026, GitOps is the de facto standard for Kubernetes deployment. And progressive delivery strategies ensure that even a faulty release doesn’t kill production. How does it all work in practice?

What Is GitOps and Why It Won

GitOps is an operational model where Git is the single source of truth for infrastructure and application configuration. The declarative state in Git is automatically synchronized with the target environment. No imperative commands, no manual interventions.

The principles of GitOps are simple: declarative description of the entire system, versioned and immutable state in Git, automatic application of the approved state, and software agents ensuring convergence and alerting on drift.
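
A minimal sketch of what that declarative state looks like in Git, assuming a hypothetical checkout service (names, namespace, and registry are placeholders):

```yaml
# Hypothetical manifest stored in the config repo (e.g. apps/checkout/deployment.yaml).
# The GitOps agent continuously converges the cluster to this state; a manual
# kubectl edit is detected as drift and, with self-heal enabled, reverted.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
  namespace: shop
spec:
  replicas: 3
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: registry.example.com/shop/checkout:1.42.0   # CI bumps only this tag
          ports:
            - containerPort: 8080
```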

In 2026, over 70% of organizations running Kubernetes have adopted GitOps. The reason is clear — audit trail, rollback with a single git revert, and elimination of “who changed what where.”

Argo CD vs Flux — Choosing a Tool

Two dominant GitOps controllers for Kubernetes:

  • Argo CD: Rich UI, application-centric model, strong dependency graph visualization. Better for teams needing an overview and self-service portal. CNCF graduated project.
  • Flux v2: Controller-based architecture, modular (source, kustomize, helm, notification controllers). Better for platform teams building internal developer platforms. Native OCI artifact support.

In practice: Argo CD for application teams (visual overview, sync status, UI rollback), Flux for platform teams (composable, API-first, Terraform integration). Both work great.
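
As an illustration, a minimal Argo CD Application wired to a config repo could look like the sketch below; the repo URL, path, and namespaces are placeholders, not a prescribed layout:

```yaml
# Illustrative Argo CD Application: repo URL, path, and namespaces are placeholders.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: checkout
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.example.com/platform/config-repo.git
    targetRevision: main
    path: apps/checkout/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: shop
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual changes (drift)
```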

Repository Strategy

How to organize Git repositories for GitOps? Three proven patterns:

  • Monorepo: Everything in one repository — app code + manifests. Simpler for small teams. But: CI/CD pipeline gets complicated with growth.
  • Separate config repo: Application code in one repo, Kubernetes manifests in another. CI pipeline builds image → updates config repo → GitOps agent syncs. Most common enterprise pattern.
  • Repo per environment: Separate repositories for dev, staging, production. Maximum isolation, but higher management overhead. Suitable for regulated environments.

Our recommendation: separate config repo with branch-per-environment or directory-per-environment structure. Kustomize overlays for environment-specific configurations.
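
One possible shape for such a config repo with directory-per-environment Kustomize overlays, assuming a hypothetical checkout app (the tree in the comments and the production overlay below are illustrative):

```yaml
# config-repo/
# ├── apps/checkout/base/               # shared manifests + kustomization.yaml
# └── apps/checkout/overlays/
#     ├── dev/
#     ├── staging/
#     └── production/kustomization.yaml  <- this file
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: shop
resources:
  - ../../base
replicas:
  - name: checkout
    count: 5                   # production-specific replica count
images:
  - name: registry.example.com/shop/checkout
    newTag: 1.42.0             # CI updates the tag here, the GitOps agent syncs it
```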

Progressive Delivery — Beyond Blue-Green

GitOps ensures deployment is reproducible and auditable. Progressive delivery ensures it’s safe. Strategies from simplest to most sophisticated:

  • Blue-green deployment: Two identical environments; traffic switches over all at once (see the sketch after this list). Fast rollback, but requires double the resources.
  • Canary deployment: The new version gets gradually increasing traffic percentage (1% → 5% → 25% → 100%). Metrics decide on continuation or rollback.
  • A/B testing: Different versions for different user segments. Requires traffic splitting at the service mesh level.
  • Feature flags: Code is deployed, but the feature is hidden behind a flag. Gradual rollout without a new deployment. Most flexible, but requires discipline in flag management.
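
The blue-green switch mentioned above can be done without extra tooling: both Deployments stay running and a Service selector, versioned in Git, decides which one receives traffic. A minimal sketch, assuming hypothetical blue/green labels:

```yaml
# Illustrative blue-green switch: "blue" and "green" Deployments both exist;
# the Service selector decides which one gets traffic. Changing version: blue
# to version: green in Git (and syncing) cuts all traffic over at once;
# rollback is the reverse commit.
apiVersion: v1
kind: Service
metadata:
  name: checkout
  namespace: shop
spec:
  selector:
    app: checkout
    version: blue      # flip to "green" to switch traffic
  ports:
    - port: 80
      targetPort: 8080
```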

Canary in Practice with Argo Rollouts

Argo Rollouts is a Kubernetes controller that replaces the standard Deployment resource and adds progressive delivery strategies. A typical canary flow:

  • Step 1: New version is deployed with 5% traffic. Automatic metric analysis (success rate, latency, error rate) for 5 minutes.
  • Step 2: If metrics pass, traffic increases to 25%. Another analysis.
  • Step 3: 50% traffic. At this point, a human approval gate is appropriate for critical services.
  • Step 4: 100% traffic. The canary becomes the new stable version.
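
A sketch of a Rollout spec that mirrors these steps; service names, weights, and the referenced analysis template are illustrative, not a prescribed setup:

```yaml
# Illustrative Rollout mirroring the steps above. trafficRouting is omitted,
# so Argo Rollouts approximates weights via replica counts; with a service
# mesh or Gateway API integration the split becomes exact.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: checkout
  namespace: shop
spec:
  replicas: 6
  selector:
    matchLabels:
      app: checkout
  template:
    metadata:
      labels:
        app: checkout
    spec:
      containers:
        - name: checkout
          image: registry.example.com/shop/checkout:1.43.0
  strategy:
    canary:
      canaryService: checkout-canary      # Service selecting the canary pods
      stableService: checkout-stable      # Service selecting the stable pods
      steps:
        - setWeight: 5
        - analysis:
            templates:
              - templateName: success-rate   # sketched below
        - setWeight: 25
        - analysis:
            templates:
              - templateName: success-rate
        - setWeight: 50
        - pause: {}          # indefinite pause = manual approval gate
        - setWeight: 100
```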

Key: automatic metric analysis decides on promotion or rollback. Argo Rollouts integrates with Prometheus, Datadog, New Relic, and others. If the error rate exceeds the threshold, rollback happens automatically — without human intervention.
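
A hedged sketch of such an analysis with the Prometheus provider; the metric name, labels, and thresholds are assumptions about your application:

```yaml
# Illustrative AnalysisTemplate: promotes only while the success rate stays >= 99 %.
# The metric name (http_requests_total) and label set are assumptions.
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
  namespace: shop
spec:
  metrics:
    - name: success-rate
      interval: 1m
      count: 5                    # 5 measurements ~ the 5-minute window above
      successCondition: result[0] >= 0.99
      failureLimit: 1             # a failed measurement triggers rollback
      provider:
        prometheus:
          address: http://prometheus.monitoring.svc:9090
          query: |
            sum(rate(http_requests_total{app="checkout",code!~"5.."}[2m]))
            /
            sum(rate(http_requests_total{app="checkout"}[2m]))
```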

Service Mesh and Traffic Management

For sophisticated traffic splitting, you need a service mesh or ingress controller with corresponding capabilities:

  • Istio: Most complete solution. VirtualService for traffic splitting, DestinationRule for load balancing. But: high complexity and resource overhead.
  • Linkerd: Lightweight alternative. Rust-based proxy, low latency. Traffic splitting via SMI (Service Mesh Interface) or HTTPRoute.
  • Gateway API + NGINX/Envoy: Without full service mesh. Kubernetes Gateway API standard with traffic splitting support. Simplest path for canary.
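
For the Gateway API path, a weighted canary split is just two backendRefs with weights; the gateway, hostname, and service names below are placeholders:

```yaml
# Illustrative Gateway API HTTPRoute: 95 % of traffic to stable, 5 % to canary.
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: checkout
  namespace: shop
spec:
  parentRefs:
    - name: public-gateway       # assumed Gateway resource
      namespace: infra
  hostnames:
    - shop.example.com
  rules:
    - backendRefs:
        - name: checkout-stable
          port: 80
          weight: 95
        - name: checkout-canary
          port: 80
          weight: 5
```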

Feature Flags — Deployment ≠ Release

Separating deployment from release is a game changer. With feature flags you can:

  • Trunk-based development: Everything goes to main, unfinished features are behind a flag. No long feature branches.
  • Gradual rollout: Enable the feature for 1% of users, measure impact, gradually expand.
  • Kill switch: A problematic feature is disabled immediately, without a new deployment.
  • Targeting: Feature for a specific customer segment, region, or environment.

Tools: LaunchDarkly (enterprise), Unleash (open-source, self-hosted), OpenFeature (vendor-neutral SDK standard). For Czech companies, we often recommend Unleash — self-hosted, GDPR friendly, sufficiently robust.
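
Purely as an illustration (this is not Unleash's or OpenFeature's schema), a flag definition versioned next to the config repo might look like the sketch below. In practice the flag state usually lives in the flag service itself so it can change at runtime, independently of the deploy cycle:

```yaml
# Hypothetical flag definition, not tied to any specific tool's schema.
# The point: flipping a value here changes behaviour without building or
# deploying a new image.
apiVersion: v1
kind: ConfigMap
metadata:
  name: feature-flags
  namespace: shop
data:
  flags.yaml: |
    new-checkout-flow:
      enabled: true
      rollout: 5             # gradual rollout: 5 % of users
      segments: [cz-pilot]   # targeting: a specific customer segment
    legacy-export:
      enabled: false         # kill switch: turned off without a redeploy
```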

Observability-Driven Deployment

Progressive delivery without observability is like driving blindfolded. Minimum setup:

  • Golden signals: Latency, traffic, error rate, saturation — for every service.
  • SLO-based promotion: The canary continues only if SLOs hold. Burn rate alerting catches degradation faster than static threshold alerts (see the sketch after this list).
  • Deployment markers: Annotations in Grafana/Datadog show when deployment occurred. Correlation with metrics is immediately visible.
  • Automated rollback: If SLO burn rate exceeds the limit, Argo Rollouts automatically rolls back to the previous version.
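
For the burn-rate piece, a fast-burn alert for a 99.9% availability SLO might look like this PrometheusRule sketch; the metric name and labels are assumptions, and 14.4 is the conventional fast-burn multiplier over a one-hour window:

```yaml
# Illustrative fast-burn alert for a 99.9 % SLO (error budget = 0.1 %).
# A burn rate of 14.4 over 1h means the monthly budget would be gone in ~2 days.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: checkout-slo-burn
  namespace: monitoring
spec:
  groups:
    - name: checkout-slo
      rules:
        - alert: CheckoutFastBurn
          expr: |
            (
              sum(rate(http_requests_total{app="checkout",code=~"5.."}[1h]))
              /
              sum(rate(http_requests_total{app="checkout"}[1h]))
            ) > (14.4 * 0.001)
          for: 2m
          labels:
            severity: critical
          annotations:
            summary: "Checkout is burning its error budget roughly 14x too fast"
```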

Deploy Often, Deploy Safely

GitOps + progressive delivery = deploy more often with lower risk. Automatic synchronization from Git eliminates configuration drift. Canary and feature flags ensure a faulty release affects minimal users.

Our tip: Start with GitOps in non-production environments. Once the team is comfortable with the workflow, expand to production with canary deployments. Add feature flags wherever you need to decouple releases from deploy cycles.

Tags: gitops, argo cd, progressive delivery, kubernetes
