Every modern application works with secrets — database credentials, API keys, TLS certificates, encryption keys, third-party tokens. Yet we still encounter companies that have passwords hardcoded in code, .env files committed to Git, or shared via Slack. In 2026, this is the fastest path to a security incident.
Secrets management isn’t just about “where to put passwords.” It’s a complex discipline encompassing secure storage, distribution, rotation, audit, and revocation of secrets throughout the application lifecycle. In this article, we’ll explore why it’s important, what tools to use, and how to implement it all in practice.
Why secrets management is critical¶
Secret leakage = open doors¶
According to a GitGuardian study, over 12 million new secrets appeared in public repositories on GitHub in 2025: AWS API keys, database passwords, private keys, all freely accessible to anyone. And those are just the public repositories; private ones are often no better.
Consequences of secret leakage:

- Financial losses — attackers with AWS keys can mine tens of thousands of dollars' worth of cryptocurrency within hours
- Data breach — database credentials lead directly to customer data
- Compliance violations — GDPR, NIS2, and DORA require protection of access credentials
- Reputational damage — a publicly leaked secret signals an immature security culture
Where companies go wrong¶
Typical anti-patterns we see with clients:
- Hardcoded secrets in code — DB_PASSWORD = "admin123" directly in source code
- .env files in Git — even in a private repo it's a risk (insider threat, compromised account)
- Shared passwords via chat — Slack, Teams, email — everything is loggable and searchable
- One password for everything — same credentials for dev, staging, and production
- No rotation — passwords set 3 years ago, never changed
- No audit — nobody knows who used secrets when
Secrets management architecture¶
Before diving into tools, let’s define what a quality secrets management system must do:
Basic principles¶
Centralized storage — secrets live in one place, not scattered across configurations, CI/CD variables, and Confluence notes.
Encryption at rest and in transit — secrets are always encrypted, not just during transfer.
Least privilege access — each service, user, or pipeline has access only to secrets it actually needs.
Automatic rotation — keys and passwords change regularly without manual intervention.
Audit trail — every access to secrets is logged — who, when, from where.
Revocation — ability to immediately invalidate compromised secrets.
Dynamic secrets — instead of static passwords, generate temporary credentials with limited validity.
Secrets management layers¶
┌─────────────────────────────────────────┐
│ Applications / Services │
├─────────────────────────────────────────┤
│ Secret Injection Layer │
│ (sidecar, init container, SDK) │
├─────────────────────────────────────────┤
│ Secret Store Interface │
│ (CSI driver, External Secrets, API) │
├─────────────────────────────────────────┤
│ Secret Backend │
│ (Vault, AWS SM, Azure KV, GCP SM) │
├─────────────────────────────────────────┤
│ Encryption & Key Management │
│ (auto-unseal, KMS, HSM) │
└─────────────────────────────────────────┘
HashiCorp Vault — the de facto standard¶
HashiCorp Vault is the most widely adopted open-source solution for secrets management. And for good reason.
What Vault can do¶
Secret Engines — pluggable backends for different types of secrets:
- kv (key-value) — classic secret storage
- database — dynamic database credential generation
- pki — private Certificate Authority for TLS certificates
- aws / azure / gcp — dynamic cloud credentials
- transit — encryption as a service (encryption without revealing keys)
- ssh — signed SSH certificates instead of static keys
Auth Methods — how applications and users authenticate with Vault:

- Kubernetes Service Account
- OIDC / JWT
- AppRole (for CI/CD)
- Cloud IAM (AWS, Azure, GCP)
- LDAP / Active Directory
- TLS certificates
Policies — granular access control:
# Policy for the payment service
path "secret/data/payments/*" {
  capabilities = ["read"]
}

path "database/creds/payments-readonly" {
  capabilities = ["read"]
}

# Deny access to admin secrets
path "secret/data/admin/*" {
  capabilities = ["deny"]
}
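The way such rules combine can be illustrated with a short sketch. This is a deliberately simplified model, not Vault's actual matcher (Vault uses prefix-based globs and more capabilities than shown here), but it captures the key property: an explicit deny always wins over a grant.

```python
from fnmatch import fnmatch

# Hypothetical, simplified model of Vault-style path policies:
# each rule maps a glob pattern to a list of capabilities.
POLICY = [
    ("secret/data/admin/*", ["deny"]),
    ("secret/data/payments/*", ["read"]),
    ("database/creds/payments-readonly", ["read"]),
]

def allowed(path: str, capability: str) -> bool:
    granted = False
    for pattern, caps in POLICY:
        if fnmatch(path, pattern):
            if "deny" in caps:      # explicit deny overrides any grant
                return False
            granted = granted or capability in caps
    return granted

print(allowed("secret/data/payments/stripe", "read"))   # True
print(allowed("secret/data/admin/root-token", "read"))  # False
```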
Vault in Kubernetes — practical implementation¶
In Kubernetes environments, Vault is typically deployed as a StatefulSet with Raft storage backend (integrated consensus):
# Vault Helm values
server:
  ha:
    enabled: true
    replicas: 3
    raft:
      enabled: true
      setNodeId: true
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 512Mi
      cpu: 500m
  auditStorage:
    enabled: true
    size: 10Gi
  # Auto-unseal via Azure Key Vault
  seal:
    azurekeyvault:
      tenant_id: "xxx"
      vault_name: "core-vault-unseal"
      key_name: "vault-auto-unseal"
injector:
  enabled: true
  resources:
    requests:
      memory: 64Mi
      cpu: 50m
Vault Agent Injector¶
The most elegant way to get secrets into Kubernetes pods is the Vault Agent Injector. It works as a mutating webhook: it automatically adds an init container and a sidecar that pull secrets into files inside the pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: payment-service
spec:
  template:
    metadata:
      annotations:
        vault.hashicorp.com/agent-inject: "true"
        vault.hashicorp.com/role: "payment-service"
        vault.hashicorp.com/agent-inject-secret-db: "database/creds/payments-rw"
        vault.hashicorp.com/agent-inject-template-db: |
          {{- with secret "database/creds/payments-rw" -}}
          postgresql://{{ .Data.username }}:{{ .Data.password }}@db.core.internal:5432/payments
          {{- end }}
    spec:
      serviceAccountName: payment-service
      containers:
        - name: app
          image: core/payment-service:latest
          # Vault Agent renders the secret to /vault/secrets/db (it does NOT
          # create a Kubernetes Secret) — export it before starting the app;
          # adjust the binary path to your image
          command: ["sh", "-c", "export DATABASE_URL=$(cat /vault/secrets/db) && exec /app/server"]
The beauty of this approach: the application doesn’t know about Vault. It reads secrets from files or environment variables — Vault Agent handles authentication, retrieval, and automatic refresh during rotation.
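On the application side, picking up a rotated secret can be as simple as re-reading the injected file when it changes. A minimal Python sketch (the /vault/secrets/ path follows the injector's default convention; the class name and mtime-based caching are illustrative, not part of any Vault SDK):

```python
import os

class FileSecret:
    """Re-reads a Vault-Agent-rendered secret file only when its mtime changes."""

    def __init__(self, path: str):
        self.path = path
        self._mtime = 0.0
        self._value = None

    @property
    def value(self) -> str:
        mtime = os.path.getmtime(self.path)
        if mtime != self._mtime:      # agent rewrote the file on rotation
            with open(self.path) as f:
                self._value = f.read().strip()
            self._mtime = mtime
        return self._value

# Usage in the app:
# db_url = FileSecret("/vault/secrets/db").value
```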
Dynamic Secrets — game changer¶
Static passwords are like apartment keys: once someone copies them, they have access forever. Dynamic secrets eliminate this problem.
Vault can generate database credentials on-the-fly with limited validity:
# Configure database secret engine
vault write database/config/payments \
plugin_name=postgresql-database-plugin \
connection_url="postgresql://{{username}}:{{password}}@db:5432/payments" \
allowed_roles="payments-rw,payments-ro" \
username="vault-admin" \
password="super-secret-admin-password"
# Define role with 1-hour TTL
vault write database/roles/payments-rw \
db_name=payments \
creation_statements="CREATE ROLE \"{{name}}\" WITH LOGIN PASSWORD '{{password}}' VALID UNTIL '{{expiration}}'; GRANT ALL ON ALL TABLES IN SCHEMA public TO \"{{name}}\";" \
revocation_statements="REVOKE ALL ON ALL TABLES IN SCHEMA public FROM \"{{name}}\"; DROP ROLE IF EXISTS \"{{name}}\";" \
default_ttl="1h" \
max_ttl="24h"
Every request to database/creds/payments-rw generates a unique username and password valid for 1 hour. After expiration, Vault automatically revokes credentials. If there’s a leak, credentials are valid for at most one hour — dramatic reduction in blast radius.
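The lease mechanics can be modelled in a few lines. This is a toy simulation of what Vault tracks internally for dynamic secrets, not real Vault code; the naming scheme and TTLs are illustrative:

```python
import secrets
import time

class DynamicCredentialStore:
    """Toy model of Vault dynamic secrets: each request yields unique,
    time-limited credentials that are revoked after the TTL expires."""

    def __init__(self, default_ttl: float = 3600):
        self.default_ttl = default_ttl
        self.leases = {}  # username -> expiry timestamp

    def issue(self, role: str):
        username = f"v-{role}-{secrets.token_hex(4)}"
        password = secrets.token_urlsafe(24)
        self.leases[username] = time.time() + self.default_ttl
        return username, password

    def is_valid(self, username: str) -> bool:
        expiry = self.leases.get(username)
        return expiry is not None and time.time() < expiry

    def revoke_expired(self):
        now = time.time()
        expired = [u for u, exp in self.leases.items() if exp <= now]
        for u in expired:  # in Vault this runs the revocation_statements
            del self.leases[u]
        return expired
```

Every caller gets its own username, so a leaked credential can be traced to a single consumer and expires on its own.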
Mozilla SOPS — encryption for GitOps¶
Not every organization needs (or wants) to operate a full Vault cluster. For smaller teams, or as a supplement to Vault, there is SOPS (Secrets OPerationS), originally from Mozilla and now a CNCF project.
How SOPS works¶
SOPS encrypts values in YAML, JSON, or ENV files while keeping keys readable. This allows committing encrypted secrets directly to Git while still seeing configuration structure:
# Before encryption
database:
  host: db.production.internal
  port: 5432
  username: app_user
  password: SuperSecretPassword123
# After SOPS encryption
database:
  host: db.production.internal # unencrypted — not a secret
  port: 5432
  username: ENC[AES256_GCM,data:kBqPdA==,iv:...,tag:...]
  password: ENC[AES256_GCM,data:1xMfNZK3nQ==,iv:...,tag:...]
sops:
  kms:
    - arn: arn:aws:kms:eu-central-1:123456:key/abc-def
  azure_kv:
    - vaultUrl: https://core-sops.vault.azure.net
      key: sops-key
      version: abc123
  lastmodified: "2026-02-16T10:30:00Z"
  mac: ENC[AES256_GCM,data:...,tag:...]
SOPS in practice¶
# Initialize with Azure Key Vault
export SOPS_AZURE_KEYVAULT_URLS="https://core-sops.vault.azure.net/keys/sops-key/abc123"
# Encrypt file
sops --encrypt --azure-kv https://core-sops.vault.azure.net/keys/sops-key/abc123 \
secrets.yaml > secrets.enc.yaml
# Edit (decrypts, opens editor, re-encrypts)
sops secrets.enc.yaml
# Decrypt for deployment
sops --decrypt secrets.enc.yaml | kubectl apply -f -
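Instead of passing the key on every invocation, a `.sops.yaml` file in the repository root lets SOPS choose keys automatically per path. A sketch assuming the same Azure Key Vault key as above (the path regex and the field list in `encrypted_regex` are illustrative and should match your repo layout):

```yaml
# .sops.yaml — creation rules applied by file path
creation_rules:
  - path_regex: .*/production/.*\.enc\.yaml$
    azure_keyvault: https://core-sops.vault.azure.net/keys/sops-key/abc123
    # encrypt only sensitive fields, keep the rest readable
    encrypted_regex: ^(password|username|token|key)$
```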
SOPS + Flux/ArgoCD¶
For GitOps workflows, SOPS is ideal. Both Argo CD and Flux have native support:
# Flux Kustomization with SOPS decryption
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: production-secrets
spec:
  interval: 10m
  path: ./clusters/production/secrets
  prune: true
  sourceRef:
    kind: GitRepository
    name: infrastructure
  decryption:
    provider: sops
    secretRef:
      name: sops-azure-credentials
External Secrets Operator — bridge between worlds¶
External Secrets Operator (ESO) solves a common problem: applications in Kubernetes expect classic Secret objects, but secrets live in Vault, AWS Secrets Manager, or Azure Key Vault. ESO creates and synchronizes Kubernetes Secrets from external sources.
# External source definition (ClusterSecretStore)
apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: azure-keyvault
spec:
  provider:
    azurekv:
      tenantId: "xxx"
      vaultUrl: "https://core-production.vault.azure.net"
      authSecretRef:
        clientId:
          name: azure-credentials
          namespace: external-secrets
          key: client-id
        clientSecret:
          name: azure-credentials
          namespace: external-secrets
          key: client-secret
---
# Secret synchronization
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: payment-db-credentials
  namespace: payments
spec:
  refreshInterval: 5m
  secretStoreRef:
    kind: ClusterSecretStore
    name: azure-keyvault
  target:
    name: payment-db-credentials
    creationPolicy: Owner
  data:
    - secretKey: username
      remoteRef:
        key: payments-db-username
    - secretKey: password
      remoteRef:
        key: payments-db-password
ESO regularly checks the external source and updates the Kubernetes Secret. If a password changes in Azure Key Vault, ESO propagates it to the cluster within 5 minutes.
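ESO's reconcile loop boils down to "read remote, compare, write local". A simplified Python sketch of that synchronization logic, with in-memory dicts standing in for Azure Key Vault and the Kubernetes API (all names are illustrative):

```python
def reconcile(remote: dict, cluster: dict, mapping: dict) -> bool:
    """One reconcile pass: project remote keys into the target Secret.
    Returns True if the cluster Secret was created or updated."""
    desired = {local: remote[remote_key] for local, remote_key in mapping.items()}
    if cluster.get("payment-db-credentials") == desired:
        return False                               # already in sync, no write
    cluster["payment-db-credentials"] = desired    # ESO: create/update Secret
    return True

keyvault = {"payments-db-username": "app_user", "payments-db-password": "s3cret"}
k8s = {}
mapping = {"username": "payments-db-username", "password": "payments-db-password"}

assert reconcile(keyvault, k8s, mapping)       # first pass creates the Secret
assert not reconcile(keyvault, k8s, mapping)   # nothing changed, no write
keyvault["payments-db-password"] = "rotated"
assert reconcile(keyvault, k8s, mapping)       # rotation propagated
```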
Secrets in CI/CD pipelines¶
CI/CD pipelines are critical points — they need access to secrets for deployment, but they’re also high-risk environments (shared runners, logs, artifacts).
CI/CD principles¶
- Never log secrets — masking in logs isn’t enough, better not to expose secrets as variables at all
- Short-lived tokens — Vault AppRole with 15-minute TTL for pipelines
- Per-environment isolation — staging pipeline doesn’t have access to production secrets
- OIDC federation — GitHub Actions and GitLab CI support OIDC tokens that can be exchanged for Vault/cloud credentials without static secrets
# GitHub Actions — OIDC authentication with Vault
jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Import secrets from Vault
        id: vault
        uses: hashicorp/vault-action@v3
        with:
          url: https://vault.core.internal
          method: jwt
          role: github-deploy
          jwtGithubAudience: https://vault.core.internal
          secrets: |
            secret/data/production/db password | DB_PASSWORD ;
            secret/data/production/api key | API_KEY
      - name: Deploy
        run: |
          helm upgrade --install app ./chart \
            --set db.password=${{ steps.vault.outputs.DB_PASSWORD }}
Pre-commit hooks against leaks¶
Prevention is better than detection. Tools like gitleaks, trufflehog, or detect-secrets scan commits before pushing:
# .pre-commit-config.yaml
repos:
  - repo: https://github.com/gitleaks/gitleaks
    rev: v8.18.0
    hooks:
      - id: gitleaks

# Or standalone:
gitleaks detect --source . --verbose
In CI/CD pipelines, they run as gates:
# GitLab CI
secret-scan:
  stage: security
  image: ghcr.io/gitleaks/gitleaks:latest
  script:
    - gitleaks detect --source . --report-format sarif --report-path gitleaks.sarif
  artifacts:
    reports:
      sast: gitleaks.sarif
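At their core, these scanners are pattern matchers: known credential formats have recognizable shapes. A heavily simplified sketch with two well-known patterns (real tools ship hundreds of rules plus entropy analysis, allow-lists, and history scanning):

```python
import re

# Two well-known credential shapes; gitleaks and friends ship far more rules
RULES = {
    "aws-access-key-id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "github-pat": re.compile(r"\bghp_[A-Za-z0-9]{36}\b"),
}

def scan(text: str):
    """Return (line number, rule name) for every match in the given text."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for rule, pattern in RULES.items():
            if pattern.search(line):
                findings.append((lineno, rule))
    return findings

sample = 'aws_key = "AKIAIOSFODNN7EXAMPLE"\nurl = "https://example.com"\n'
print(scan(sample))  # [(1, 'aws-access-key-id')]
```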
Secret rotation — automation is key¶
Manual password rotation is doomed to fail. Either it’s not done at all, or it causes outages due to outdated configurations.
Rotation strategies¶
Vault dynamic secrets — most elegant. Secrets aren’t pre-generated but created on-demand with limited validity. Rotation is implicit.
Vault rotation policy — for static secrets (third-party API keys) where Vault regularly calls rotation endpoint:
# Define a password policy (here named "rotation") used when
# Vault generates new values for rotated static secrets
vault write sys/policies/password/rotation \
  policy='length=32 rule "charset" { charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789!@#$%^&*" }'
Blue-green rotation — for secrets that can't be rotated atomically:

1. Create new secret (green)
2. Update all consumers
3. Verify old secret (blue) isn't being used
4. Revoke old secret
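These steps map naturally onto a small state machine. A Python sketch of the ordering constraints (the class and its checks are illustrative; in practice the "still used" signal would come from access logs or metrics):

```python
class BlueGreenRotation:
    """Enforces the rotation order: create green -> migrate consumers
    -> verify blue is unused -> revoke blue."""

    def __init__(self, blue: str):
        self.active = {blue}   # secrets currently accepted
        self.green = None

    def create_green(self, green: str):
        self.green = green
        self.active.add(green)   # both secrets valid during migration

    def revoke_blue(self, blue: str, blue_still_used: bool):
        if self.green is None:
            raise RuntimeError("create the new secret before revoking the old one")
        if blue_still_used:
            raise RuntimeError("consumers still use the old secret")
        self.active.discard(blue)

rot = BlueGreenRotation(blue="key-v1")
rot.create_green("key-v2")
rot.revoke_blue("key-v1", blue_still_used=False)
assert rot.active == {"key-v2"}
```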
Monitoring and alerting¶
# Prometheus alert for expiring secrets
groups:
  - name: secrets
    rules:
      - alert: SecretExpiringIn7Days
        expr: vault_secret_lease_remaining_seconds < 604800
        for: 1h
        labels:
          severity: warning
        annotations:
          summary: "Secret {{ $labels.path }} expires in less than 7 days"
      - alert: SecretRotationFailed
        expr: vault_secret_rotation_failures_total > 0
        for: 5m
        labels:
          severity: critical
Cloud-native alternatives¶
Vault isn’t always the right choice. If you’re all-in on one cloud, native services might be simpler:
Azure Key Vault¶
// .NET application with Azure Key Vault
var client = new SecretClient(
    new Uri("https://core-production.vault.azure.net"),
    new DefaultAzureCredential()
);

KeyVaultSecret secret = await client.GetSecretAsync("database-password");
string connectionString = $"Server=db.core.internal;Password={secret.Value}";
Advantages: native Azure AD integration, managed identity (no credentials for accessing credentials), automatic rotation via Event Grid + Azure Functions.
AWS Secrets Manager¶
import json

import boto3

client = boto3.client('secretsmanager', region_name='eu-central-1')
response = client.get_secret_value(SecretId='production/database')
secret = json.loads(response['SecretString'])
AWS offers built-in rotation for RDS, Redshift, and DocumentDB — just enable and set interval.
Comparison¶
| Feature | Vault | Azure KV | AWS SM | GCP SM |
|---|---|---|---|---|
| Multi-cloud | ✅ | ❌ | ❌ | ❌ |
| Dynamic secrets | ✅ | ❌ | ❌ | ❌ |
| PKI/CA | ✅ | ❌ | ❌ | ❌ |
| Encryption as Service | ✅ | ✅ | ✅ | ✅ |
| Automatic rotation | ✅ | ✅ | ✅ | ✅ |
| Operational costs | High | Low | Low | Low |
| Vendor lock-in | No | Yes | Yes | Yes |
| Open-source | Yes | No | No | No |
Our recommendation: Vault for multi-cloud and complex environments, cloud-native solutions for single-cloud with simple needs.
Implementation checklist¶
If you’re starting with secrets management, follow this plan:
Phase 1 — Audit (weeks 1–2)¶
- [ ] Inventory all secrets (code, CI/CD, configuration, documentation)
- [ ] Classification by sensitivity (critical, high, medium, low)
- [ ] Identify secret owners
- [ ] Scan repositories with gitleaks/trufflehog
Phase 2 — Centralization (weeks 3–4)¶
- [ ] Deploy Vault / configure cloud-native solution
- [ ] Migrate critical secrets first
- [ ] Set up access policies (least privilege)
- [ ] Enable audit logs
Phase 3 — Automation (weeks 5–8)¶
- [ ] CI/CD integration (OIDC, AppRole)
- [ ] Kubernetes integration (ESO or Vault Injector)
- [ ] Automatic rotation for database credentials
- [ ] Pre-commit hooks against leaks
Phase 4 — Monitoring (ongoing)¶
- [ ] Alerting on expiring secrets
- [ ] Regular access audits
- [ ] Penetration testing of secrets management
- [ ] Quarterly access policy reviews
Conclusion¶
Secrets management isn’t a sexy topic — until a leak makes it one. A leaked API key, database password, or private certificate can have fatal consequences. Investment in a proper solution pays for itself many times over.
Key takeaways:

- Centralize — one place for all secrets
- Automate rotation — manual rotation doesn't work
- Use dynamic secrets where possible — they eliminate the problem of long-lived leaked credentials
- Scan code — pre-commit hooks catch mistakes before they reach Git
- Audit — log every access and set up alerting
At CORE SYSTEMS, we implement secrets management as part of every project. Security isn’t an add-on — it’s a foundation. If you’re dealing with secret management in your organization, contact us — we’re happy to help with architecture and implementation.
Need help with secrets management or overall security architecture? Check out our security services or contact us directly.