GitOps in Practice: Git as Single Source of Truth for Infrastructure

02. 11. 2025 · 12 min read · CORE SYSTEMS · development

Most teams that say they do GitOps actually do “Git-triggered CI/CD.” The difference is fundamental. GitOps isn’t just “manifests in Git”; it’s an architectural pattern with a pull-based reconciliation loop, declarative desired state, and automatic drift detection. This article explains what GitOps really is, how to deploy it with ArgoCD or Flux, and where most implementations quietly fail.

What GitOps is — and what it isn’t

The term GitOps was coined by Alexis Richardson of Weaveworks in 2017. Since then, it has become one of the most misused buzzwords in the DevOps world. In 2021, the OpenGitOps working group was formed under the CNCF and defined four principles; without them, you can't meaningfully talk about GitOps.

Four OpenGitOps principles (v1.0.0)

1. Declarative

The entire system must be described declaratively. Not scripts that do something step by step, but manifests that say “this is how the system should look.” Kubernetes YAML, Terraform HCL, Helm charts — all are declarative descriptions of desired state. Imperative kubectl run or kubectl scale commands are the exact opposite of what GitOps requires.

2. Versioned and Immutable

Desired state lives in a Git repository — versioned, auditable, immutable storage. Every change is a commit. Every commit has an author, timestamp, and diff. Rollback means git revert. Audit trail comes for free. No other system in software engineering provides such transparency so cheaply.

3. Pulled Automatically

This is where most “GitOps” implementations fail. True GitOps uses a pull model: an agent running inside the cluster (ArgoCD, Flux) continuously monitors the Git repository and pulls changes. A push model, where the CI pipeline calls kubectl apply, isn't GitOps; it's CI/CD with manifests in Git. The difference: in the pull model, the cluster itself knows how it should look. In the push model, you depend on an external system to tell it.

4. Continuously Reconciled

The agent continuously compares the actual state (what's running in the cluster) with the desired state (what's in Git). If they differ (someone manually changed a deployment, a pod crashed, anything), the agent automatically corrects the state to match Git. This is called the reconciliation loop, and it's the heart of GitOps. Drift isn't tolerated; it's corrected.

Why does this matter? Because push-based CI/CD has a fundamental weakness: after deployment you don’t know what’s happening in the cluster. Someone runs kubectl edit, changes replicas, modifies env vars — and your Git repository doesn’t know about it. A week later you deploy again and those manual changes disappear. Or worse — they don’t disappear because the pipeline only deploys changed files. GitOps eliminates this problem by having the cluster continuously synchronized with Git.

ArgoCD vs Flux: Practical comparison

Two CNCF graduated projects, two approaches to the same problem. Both implement GitOps principles, but differ in philosophy, architecture, and user experience. The choice between them isn’t academic — it affects how your team works every day.

Criterion | ArgoCD | Flux
--- | --- | ---
UI | Rich web UI with resource tree visualization, diff view, sync status | No native UI (community dashboards like Capacitor exist)
Architecture | Centralized server + API, manages multiple clusters from one place | Distributed controllers, each cluster runs its own Flux instance
Multi-tenancy | AppProjects with RBAC, SSO integration (OIDC, LDAP, SAML) | Native Kubernetes RBAC, tenant isolation via namespaces and ServiceAccounts
Helm support | Renders Helm charts and applies them as plain manifests | Native HelmRelease CRD with automatic upgrades and rollbacks
Image automation | ArgoCD Image Updater (separate component) | Built-in Image Reflector and Image Automation controllers
Onboarding | Faster thanks to the UI; a new team sees state immediately | Steeper learning curve, but deeper Kubernetes-native integration
CNCF status | Graduated (December 2022) | Graduated (November 2022)

When to choose ArgoCD

We choose ArgoCD when the team needs a visual overview, when multiple teams deploy to the cluster with different permissions, or when the client requires SSO integration. The UI is the killer feature: you see the resource tree of the entire application, the diff between desired and actual state, and the history of sync operations. For platform teams managing dozens of applications, it's irreplaceable.

When to choose Flux

We choose Flux for environments where a Kubernetes-native approach is the priority. Flux is a set of controllers whose objects (GitRepository, Kustomization, HelmRelease) are full-fledged Kubernetes custom resources; you manage them like any other K8s resource. For teams that want maximum composability and don't mind working via the CLI and kubectl, Flux is the more elegant choice. Image automation is significantly simpler in Flux: a controller watches the container registry, detects new tags, and automatically commits the updated version to Git.
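
To make the controller model concrete, here is a minimal sketch of the two core objects. The repository URL and path are placeholders, and we assume current Flux v2 API versions; check the CRD versions installed in your cluster before copying anything.

# GitRepository: tells the source-controller what to pull and how often
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: k8s-config
  namespace: flux-system
spec:
  interval: 1m
  url: https://github.com/acme/k8s-config.git  # placeholder repository
  ref:
    branch: main
---
# Kustomization: tells the kustomize-controller what to apply and keep reconciled
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: payment-service
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: k8s-config
  path: ./apps/payment-service/overlays/production
  prune: true  # same semantics as ArgoCD's prune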

Repository strategy: Mono vs Multi

How to structure Git repositories for GitOps is a decision that will shape your workflow for years. There are two schools of thought, and both have legitimate arguments.

Separate repositories: app repo + config repo

Application code lives in one repository, deployment manifests in another. The CI pipeline builds an image from the app repo, pushes it to the registry, and opens a PR in the config repo with the updated image tag. The GitOps agent watches the config repo.

Advantages: clean separation of concerns, different permissions for devs (app repo) and ops (config repo), CI pipeline doesn’t have cluster access. Disadvantages: two repositories to maintain, more complex onboarding, PR in config repo can be a bottleneck.

# Typical config repo structure
config-repo/
├── base/
│   ├── deployment.yaml
│   ├── service.yaml
│   └── kustomization.yaml
├── overlays/
│   ├── staging/
│   │   ├── kustomization.yaml # image: v1.24.0
│   │   └── replicas-patch.yaml
│   └── production/
│       ├── kustomization.yaml # image: v1.23.2
│       └── replicas-patch.yaml
└── README.md
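
The image tags noted in the tree above live in the overlay's kustomization.yaml. A hedged sketch of what the production overlay might contain; the image name and registry are placeholders:

# overlays/production/kustomization.yaml (illustrative; names and registry are placeholders)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: replicas-patch.yaml
images:
  - name: payment-service
    newName: registry.acme.io/payment-service
    newTag: v1.23.2  # the PR opened by CI bumps only this line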

Monorepo: everything in one place

Application code and deployment manifests live in the same repository. The workflow is simpler: a developer changes code and configuration in one PR. This works well for smaller teams, and for microservice setups where each service has its own repository containing both code and manifests.

Our recommendation: For most enterprise clients we choose separate repositories. Separating the app and config repos is safer: the CI pipeline doesn't need cluster credentials, and the config repo serves as an audit log of all deployment changes. For startups and smaller teams, a monorepo is a completely legitimate choice.

ArgoCD: Production setup step by step

We install ArgoCD via the Helm chart into a dedicated argocd namespace. The default installation works for a demo; for production, several modifications are needed, roughly along the lines of the values sketch below.
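
What those modifications typically mean, sketched as Helm values. We assume the community argo-cd chart here, and exact keys shift between chart versions, so treat this as orientation rather than a drop-in config:

# values-production.yaml (sketch): HA-oriented overrides for the argo-cd Helm chart
redis-ha:
  enabled: true    # replace the single Redis pod with an HA setup
controller:
  replicas: 1      # the application controller shards work instead of load-balancing it
server:
  replicas: 2      # API/UI replicas behind a load balancer
repoServer:
  replicas: 2      # manifest rendering is the usual bottleneck
applicationSet:
  replicas: 2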

Application CRD

Every application in ArgoCD is a Kubernetes custom resource. You define where to deploy from (Git repo, path, revision) and where to deploy to (cluster, namespace). The sync policy determines whether changes are applied automatically or wait for manual approval.

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: payment-service
  namespace: argocd
spec:
  project: backend-team
  source:
    repoURL: https://github.com/acme/k8s-config.git
    targetRevision: main
    path: apps/payment-service/overlays/production
  destination:
    server: https://kubernetes.default.svc
    namespace: payment
  syncPolicy:
    automated:
      prune: true # Deletes resources that disappeared from Git
      selfHeal: true # Fixes manual changes in cluster
    syncOptions:
      - CreateNamespace=true
      - PruneLast=true
    retry:
      limit: 3
      backoff:
        duration: 5s
        maxDuration: 3m
        factor: 2

Self-heal and prune: Two key settings

selfHeal: true means that if someone manually changes a resource in the cluster, ArgoCD returns it to the state defined in Git. Typically within 3 minutes (default sync interval). That’s precisely the reconciliation loop that makes GitOps GitOps.

prune: true is bolder: if you delete a manifest from Git, ArgoCD deletes the corresponding resource from the cluster. Without prune, deleted resources accumulate like zombies. With prune, Git is truly the single source of truth: what's not there doesn't exist.

AppProject for multi-tenancy

AppProject defines what a team can and can't do: which repositories can be used as a source, which clusters and namespaces they can deploy to, which resource types are allowed. The backend team can't deploy a ClusterRole; the frontend team can't touch another team's namespace. This is security at the GitOps operator level, not just the Kubernetes RBAC level.

apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: backend-team
  namespace: argocd
spec:
  description: Backend microservices
  sourceRepos:
    - https://github.com/acme/k8s-config.git
  destinations:
    - server: https://kubernetes.default.svc
      namespace: payment
    - server: https://kubernetes.default.svc
      namespace: orders
  clusterResourceWhitelist: [] # No cluster-scoped resources
  namespaceResourceBlacklist:
    - group: ""
      kind: ResourceQuota # Quotas managed by platform team

Secrets in GitOps: Elephant in the room

The biggest practical GitOps problem: how do you get secrets into the cluster when everything should be in Git, but secrets don't belong in Git? There are three approaches, each with different trade-offs.

1. Sealed Secrets (Bitnami)

Asymmetric cryptography: you encrypt the secret with a public key and store it in Git as a SealedSecret. A controller in the cluster holds the private key and decrypts it. Simple and works out of the box, but the key is bound to a specific cluster; migration is painful, and key rotation requires re-encrypting all secrets.
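
For illustration, this is roughly what ends up in Git after running kubeseal over a plain Secret; the ciphertext values are truncated placeholders:

# SealedSecret: safe to commit, only the in-cluster controller's private key can decrypt it
apiVersion: bitnami.com/v1alpha1
kind: SealedSecret
metadata:
  name: db-credentials
  namespace: payment
spec:
  encryptedData:
    username: AgBy8hCk...truncated
    password: AgCtr7nQ...truncated
  template:
    metadata:
      name: db-credentials
      namespace: payment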

2. SOPS + Age/KMS

Mozilla SOPS encrypts values directly in YAML files: keys remain readable, values are encrypted. Decryption is handled natively by Flux (the kustomize-controller supports SOPS decryption) or via an ArgoCD plugin. The advantage: secrets are diffable (you see that DB_PASSWORD changed, just not to what). A KMS backend (AWS KMS, GCP KMS, Azure Key Vault) eliminates the key-management problem.

# SOPS-encrypted secret — keys readable, values encrypted
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
stringData:
  username: ENC[AES256_GCM,data:8bR2...truncated,type:str]
  password: ENC[AES256_GCM,data:kL9x...truncated,type:str]
sops:
  kms:
    - arn: arn:aws:kms:eu-central-1:123456:key/abc-def
  encrypted_regex: ^(data|stringData)$

3. External Secrets Operator

Our preferred approach for enterprise. ESO synchronizes secrets from external providers (HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, GCP Secret Manager) into Kubernetes Secrets. In Git you have only the ExternalSecret manifest: a reference to the secret, not the secret itself. Rotation, audit, and access control are all handled by the provider. Git stays clean.

# ExternalSecret — reference to Vault, not secret itself
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: db-credentials
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: ClusterSecretStore
  target:
    name: db-credentials
  data:
    - secretKey: username
      remoteRef:
        key: secret/data/production/db
        property: username
    - secretKey: password
      remoteRef:
        key: secret/data/production/db
        property: password
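
The vault-backend store referenced above also lives in Git as a manifest. A hedged sketch for Vault with Kubernetes auth; the server address, role, and service account are placeholders, and the KV mount layout depends on your Vault setup:

# ClusterSecretStore: tells ESO where Vault is and how to authenticate (sketch)
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: https://vault.internal.example.com  # placeholder address
      version: v2                                 # KV v2 engine; the remoteRef keys above include the mount path
      auth:
        kubernetes:
          mountPath: kubernetes
          role: external-secrets                  # placeholder Vault role
          serviceAccountRef:
            name: external-secrets
            namespace: external-secrets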

Multi-cluster GitOps

Real enterprise environments don’t have one cluster. They have development, staging, production — often in different regions or at different cloud providers. GitOps must reflect this reality.

Promotion workflow: staging → production

The cleanest model: staging and production are different directories in the config repo (see the Kustomize overlays above). A new image tag is first committed to the staging overlay. After validation (smoke tests, canary metrics), a PR is opened that updates the production overlay. Review, approve, merge, and ArgoCD synchronizes production. No manual deploy, full audit trail.

An alternative for advanced setups: ArgoCD ApplicationSet with progressive delivery. One ApplicationSet generates an Application for each cluster from a list, as sketched below. Combined with Argo Rollouts you get canary deployments controlled by metrics: the new version gets 5% of traffic, and if the error rate stays below the threshold, the share gradually increases to 100%. If not, automatic rollback. Zero human intervention.
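
A sketch of that ApplicationSet with a simple list generator over two clusters; the cluster API URLs are placeholders, and the canary logic itself lives in Argo Rollouts resources inside the application's own manifests:

# ApplicationSet: one template, one generated Application per cluster
apiVersion: argoproj.io/v1alpha1
kind: ApplicationSet
metadata:
  name: payment-service
  namespace: argocd
spec:
  generators:
    - list:
        elements:
          - cluster: staging
            url: https://staging.k8s.acme.io      # placeholder API server
          - cluster: production
            url: https://production.k8s.acme.io   # placeholder API server
  template:
    metadata:
      name: 'payment-service-{{cluster}}'
    spec:
      project: backend-team
      source:
        repoURL: https://github.com/acme/k8s-config.git
        targetRevision: main
        path: 'apps/payment-service/overlays/{{cluster}}'
      destination:
        server: '{{url}}'
        namespace: payment
      syncPolicy:
        automated:
          prune: true
          selfHeal: true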

Anti-patterns of GitOps implementations

GitOps looks simple in diagrams. In practice it’s full of traps that teams fall into with predictable regularity.

Push-based “GitOps”

A CI pipeline that calls kubectl apply or helm upgrade after the build. Manifests are in Git, but deployment is push-based: CI needs cluster credentials, there's no reconciliation loop, and nobody detects drift. This is CI/CD with better branding, not GitOps.

Manual kubectl “just this once”

A hotfix directly in the cluster because “commit and sync takes too long.” If you have selfHeal enabled, ArgoCD will revert your change within 3 minutes. If you don't, congratulations: you have drift. The solution: speed up the deployment pipeline (an ArgoCD webhook notification brings sync under 30 seconds) instead of bypassing the process.

Secrets in plaintext in Git

“But it’s a private repo!” A private repo that 40 people can access isn't secure. Git history can't be easily erased. A secret once committed in plaintext is a compromised secret. Use SOPS, Sealed Secrets, or External Secrets Operator (see the section above).

One giant Application for the entire cluster

All manifests in one ArgoCD Application: sync takes minutes, one broken manifest blocks deployment of everything else, and the diff view is unreadable. The right granularity is one Application per microservice or per team, with the App of Apps pattern for orchestration.
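
The App of Apps pattern in a sketch: a single root Application whose source directory contains nothing but other Application manifests. Paths and names are placeholders:

# Root application: its path holds one Application manifest per microservice
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: backend-apps
  namespace: argocd
spec:
  project: backend-team
  source:
    repoURL: https://github.com/acme/k8s-config.git
    targetRevision: main
    path: apps/backend   # e.g. payment-service.yaml, orders-service.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: argocd    # child Application CRs live in the argocd namespace
  syncPolicy:
    automated:
      prune: true
      selfHeal: true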

Ignoring webhook and polling overhead

The default ArgoCD polling interval is 3 minutes. For 200 repositories that means thousands of Git API calls per hour, and GitHub rate limiting will stop you. The solution: webhook notifications from the Git provider to the ArgoCD API, with the polling interval raised to 10–15 minutes as a fallback.
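
Both fixes are configuration rather than code. The webhook is registered on the Git provider side and points at ArgoCD's /api/webhook endpoint; the fallback polling interval lives in the argocd-cm ConfigMap, roughly like this (the 15m value is our assumption, tune it to your scale):

# argocd-cm: raise the fallback polling interval once webhooks deliver changes instantly
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
  labels:
    app.kubernetes.io/part-of: argocd
data:
  timeout.reconciliation: 15m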

How we implement GitOps at CORE SYSTEMS

We deploy GitOps for clients in regulated industries (banking, insurance, public administration), where auditability and repeatability aren't a nice-to-have but a regulatory requirement. Our approach has three phases.

Assessment. We map the existing CI/CD pipeline, deployment processes, and team structure. We identify where drift occurs, where the audit trail is missing, and where manual interventions cause incidents. The output is a migration plan with clear ROI: how many incidents GitOps eliminates and how many hours per week it saves on deployment coordination.

Pilot. We start with one non-critical service: set up the config repo, ArgoCD/Flux, secrets management, and the promotion workflow. The team learns the new workflow on a real project, not in a workshop. After 2–4 weeks we evaluate and iterate.

Rollout. Gradual migration of the remaining services. Documentation, runbooks, alerting on sync failures. Developer training, because GitOps changes the way of working, and without team buy-in you end up with a tool nobody uses correctly. The goal: after 3 months the entire platform runs on the GitOps workflow and every deployment is traceable from commit to running pod.

Conclusion: GitOps isn't a tool, it's an operational philosophy

GitOps isn't ArgoCD. It isn't Flux. It isn't “manifests in Git.” It's an operational model where Git is the single source of truth for infrastructure state and software agents continuously enforce that state. Tools are replaceable; principles aren't.

The real GitOps benefits don't show up at the first deployment. They show up at the first incident, when you need a rollback (one git revert). At the first audit, when you present the complete change history. On the first day of a new team member, who sees the entire deployment state in the Git log instead of in the head of a colleague who's on vacation.

If your team today deploys via kubectl apply or a CI pipeline that pushes to the cluster, start with a small pilot: one service, one config repo, ArgoCD with default settings. You'll see the difference in a week. In a month you won't want to go back.

Tags: gitops, argocd, flux, kubernetes, ci/cd