We encrypt data in transit (TLS) and at rest (disk and database encryption). But what about during processing? Confidential Computing addresses the last blind spot: encrypting data in memory at runtime. In 2026, this technology is leaving the laboratory and entering the mainstream.
The Problem: Data-in-Use Remains Unprotected
The traditional security model protects data in two states — at rest and in transit. But during processing, data must be decrypted in memory, where it’s vulnerable to a whole range of attacks: a compromised hypervisor, insider threats at the cloud provider, cold boot attacks, or hardware side-channel attacks.
For organizations processing sensitive data in the cloud — medical records, financial transactions, classified information — this is a fundamental security gap. Confidential Computing closes it.
How Confidential Computing Works
At the core of the technology are Trusted Execution Environments (TEEs): hardware-isolated enclaves in which code and data are processed in encrypted memory. Neither the operating system, the hypervisor, nor the cloud provider can read the enclave's contents.
In 2026, three main implementations are available:
- Intel TDX (Trust Domain Extensions): SGX successor, operates at the full VM level. Mainstream in Azure and GCP since 2025.
- AMD SEV-SNP (Secure Encrypted Virtualization with Secure Nested Paging): Full-VM memory encryption with integrity protection. Widely offered on Azure and on AWS EC2; note that AWS Nitro Enclaves is a separate, AWS-specific isolation technology, not an SEV-SNP implementation.
- Arm CCA (Confidential Compute Architecture): New player targeting edge and mobile devices. First server-grade implementations in 2026.
The key concept is remote attestation — cryptographic proof that code is running in a trusted enclave on verified hardware. The client can verify that the cloud provider is actually processing data in a protected environment.
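The attestation check described above can be sketched in a few lines. This is a minimal simulation, not a real protocol: the report fields, the HMAC stand-in for the vendor's asymmetric signature, and all key and measurement values are hypothetical. Real attestation (Intel TDX quotes, AMD SEV-SNP reports) relies on hardware-rooted certificate chains, not a shared secret.

```python
import hashlib
import hmac
import json

# Hypothetical values -- in reality the signing key belongs to the CPU
# vendor and verification walks an X.509 certificate chain, not HMAC.
VENDOR_KEY = b"simulated-hardware-root-key"
EXPECTED_MEASUREMENT = hashlib.sha256(b"trusted-enclave-image-v1").hexdigest()

def sign_report(report: dict) -> str:
    """Stand-in for the hardware-generated attestation signature."""
    payload = json.dumps(report, sort_keys=True).encode()
    return hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest()

def verify_attestation(report: dict, signature: str) -> bool:
    """Client-side check: genuine signature AND the expected code measurement."""
    payload = json.dumps(report, sort_keys=True).encode()
    expected_sig = hmac.new(VENDOR_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected_sig, signature):
        return False  # report was not produced by trusted hardware
    return report.get("measurement") == EXPECTED_MEASUREMENT

# The enclave presents a signed report of what it is running.
report = {"measurement": EXPECTED_MEASUREMENT, "tee_type": "TDX"}
print(verify_attestation(report, sign_report(report)))        # True
tampered = {"measurement": "deadbeef", "tee_type": "TDX"}
print(verify_attestation(tampered, sign_report(tampered)))    # False
```

The two-step shape is the point: a valid hardware signature alone is not enough; the client must also compare the attested measurement against the code it expects to be running.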
Use Cases in the Czech Context
Confidential Computing opens scenarios that were previously too risky for the cloud:
Public Administration and eGovernment
Czech government agencies have long hesitated to migrate sensitive agendas to the cloud. Reason: inability to guarantee that the cloud provider doesn’t have access to citizen data. Confidential Computing eliminates this argument. Data is encrypted even during processing — the provider sees only encrypted blocks in memory.
NÚKIB updated security standards in 2025 and explicitly included Confidential Computing as an accepted technology for processing sensitive data in the cloud.
Banking and Finance
Multi-party computation in confidential enclaves enables banks to share fraud detection models without sharing raw data. Each bank contributes its data to the enclave, where a shared model is trained — but no party sees the other’s data.
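The flow above can be illustrated with a toy simulation. Everything here is hypothetical: the `FraudEnclave` class merely stands in for a real TEE, and in production each bank would encrypt its upload to a key that remote attestation proves only the enclave holds.

```python
class FraudEnclave:
    """Simulated TEE: parties submit raw data; only the aggregate leaves."""

    def __init__(self):
        self._rows = []  # plaintext exists only inside the enclave

    def submit(self, bank_id, transactions):
        # In a real deployment this upload would be encrypted to an
        # enclave-held key verified via remote attestation.
        self._rows.extend({"bank": bank_id, **t} for t in transactions)

    def fraud_rate_by_merchant(self):
        # The shared statistic is the only output -- no bank ever sees
        # another bank's individual transactions.
        totals, fraud = {}, {}
        for r in self._rows:
            m = r["merchant"]
            totals[m] = totals.get(m, 0) + 1
            fraud[m] = fraud.get(m, 0) + int(r["is_fraud"])
        return {m: fraud[m] / totals[m] for m in totals}

enclave = FraudEnclave()
enclave.submit("bank_a", [{"merchant": "shop1", "is_fraud": True},
                          {"merchant": "shop1", "is_fraud": False}])
enclave.submit("bank_b", [{"merchant": "shop1", "is_fraud": True}])
print(enclave.fraud_rate_by_merchant())
```

A real system would train a fraud model rather than compute a rate, but the trust boundary is the same: raw rows go in, only the jointly learned result comes out.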
Healthcare
Analysis of healthcare data across hospitals with full patient privacy preservation. Federated learning in TEE enclaves combines the advantages of centralized training with decentralized data management.
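A minimal sketch of the federated pattern, under loud assumptions: the "training" step is a placeholder, and the hospitals, weights, and data values are invented for illustration. The point is the data flow — raw records stay local, and only model updates enter the enclave for aggregation.

```python
def local_update(weights, patient_data):
    # Placeholder "training": nudge each weight toward the local data mean.
    # Real federated learning would run gradient steps on local records.
    mean = sum(patient_data) / len(patient_data)
    return [w + 0.1 * (mean - w) for w in weights]

def enclave_average(updates):
    # Aggregation happens inside the TEE; hospitals see only the result,
    # never each other's updates or records.
    n = len(updates)
    return [sum(ws) / n for ws in zip(*updates)]

global_model = [0.0, 0.0]
hospital_a = local_update(global_model, [1.0, 3.0])  # records never leave A
hospital_b = local_update(global_model, [5.0, 7.0])  # records never leave B
global_model = enclave_average([hospital_a, hospital_b])
print(global_model)
```

Running the aggregation step inside a TEE is what distinguishes this from plain federated learning: even the aggregator's operator cannot inspect the individual hospital updates.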
AI and LLM Inference
Confidential AI is one of the fastest-growing segments. Companies can send sensitive queries to LLMs running in TEE with confidence that even the AI service provider doesn’t see prompt contents. NVIDIA H100/H200 GPUs with Confidential Computing support have enabled this since 2025.
Technical Architecture in Practice
A typical Confidential Computing architecture in an enterprise environment includes:
- Confidential VMs: Entire virtual machines running in TEE. Simplest migration — existing applications work without modification.
- Confidential Containers: Containers running in enclaves, e.g. Azure Confidential Containers or the open-source Confidential Containers (CoCo) project built on Kata Containers. Ideal for Kubernetes workloads.
- Attestation service: Central service for verifying enclave integrity. Azure Attestation, Intel Trust Authority, or open-source solutions.
- Key management: HSM-backed key management with release policies tied to attestation. Keys are released only to a verified enclave.
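The conditional key-release pattern from the last bullet can be sketched as follows. This is a simplified stand-in for an HSM/KMS policy engine — the function names, allow-list, and key material are all hypothetical — but it captures the rule: key material is returned only when the caller's attested measurement satisfies the release policy.

```python
import hashlib

# Hypothetical release policy: the KMS hands out the data key only when
# the requesting enclave's attested measurement is on the allow-list.
ALLOWED_MEASUREMENTS = {
    hashlib.sha256(b"payroll-service-v3").hexdigest(),
}

# Stand-in for an HSM-held key that never leaves hardware in practice.
KEY_STORE = {"payroll-data-key": b"\x13" * 32}

def release_key(key_id: str, attested_measurement: str) -> bytes:
    """Conditional release: the policy check precedes any key material."""
    if attested_measurement not in ALLOWED_MEASUREMENTS:
        raise PermissionError("attestation does not satisfy release policy")
    return KEY_STORE[key_id]

good = hashlib.sha256(b"payroll-service-v3").hexdigest()
print(len(release_key("payroll-data-key", good)))  # 32
```

In managed offerings (e.g. Azure Key Vault with secure key release) the same gate is expressed as a policy document evaluated against the attestation token, but the control flow is identical: verify first, release second.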
Performance Impact and Limitations
The question we get most often: “How much does it cost in performance?” The answer in 2026:
- CPU workloads: 2–8% overhead due to the memory encryption engine (Intel TME, AMD SME). Practically negligible.
- Memory-intensive workloads: 5–15% overhead due to memory operation encryption. BIOS and hypervisor-level optimizations are reducing it.
- GPU workloads (AI/ML): 10–20% overhead with Confidential GPU. Still improving with new driver versions.
Main limitation: The ecosystem is still fragmented. Intel TDX, AMD SEV-SNP, and Arm CCA are not mutually compatible. Portability between cloud providers requires an abstraction layer (Enarx, Gramine).
Regulatory Context: NIS2 and DORA
The NIS2 directive (effective since October 2024) and the DORA regulation (January 2025) increase requirements for data protection during processing. Confidential Computing is not explicitly required, but significantly simplifies compliance:
- Demonstrable data protection against insider threats at the cloud provider
- Cryptographically verifiable processing integrity (attestation)
- Simpler risk assessment for cloud workloads with sensitive data
Confidential Computing Is the Future of Cloud
Confidential Computing in 2026 has finally reached a point where it’s practically deployable with acceptable performance impact. For organizations processing sensitive data in the cloud, the question is no longer “whether” but “when” to adopt this technology.
Our tip: start with Confidential VMs for your most sensitive workloads. Verify the attestation flow and key management end to end, then carry that experience over to Confidential Containers for broader deployment.
Need help with implementation?
Our experts can help with design, implementation, and operations. From architecture to production.
Contact us