Solomon Hykes, co-founder of Docker, once said: “If WASM+WASI existed in 2008, we wouldn’t have needed to create Docker.” In 2025, WebAssembly workloads are becoming a reality in Kubernetes clusters — and are changing what a modern container looks like.
Why Containers Aren’t Enough
Linux containers are a fantastic technology. Isolation via namespaces and cgroups, standardized image formats, a huge ecosystem. But they have inherent limitations that are becoming increasingly apparent.
Cold start: Starting a container takes hundreds of milliseconds to seconds. For serverless and event-driven workloads, that’s too much. AWS Lambda mitigates cold starts with Firecracker microVMs and SnapStart, but those are vendor-specific optimizations, not a general solution.
Size: A minimal container image is tens of MB. Google’s distroless images reduce the attack surface, but you’re still shipping a Linux userspace: libc, runtime libraries, CA certificates. A Wasm module with the same functionality is hundreds of KB.
Security: Containers share the kernel with the host. Capability-based security and seccomp profiles help, but the attack surface remains large. Every container escape exploit is a reminder that isolation isn’t complete.
What WebAssembly Brings
WebAssembly (Wasm) was originally designed for browsers as a compilation target for C/C++/Rust. But its properties — sandboxed execution, near-native performance, platform independence — make it ideal for server-side workloads.
WASI (WebAssembly System Interface) adds standardized access to system resources — filesystem, network, clocks, random — without direct kernel access. A Wasm module runs in its own sandbox with explicitly declared capabilities. It has no access to anything the host doesn’t explicitly allow.
The Numbers Speak
- Cold start: under 1 ms (vs. 100-500 ms for containers)
- Module size: 100 KB – 5 MB (vs. 20-500 MB for a container image)
- Memory overhead: minimal — no OS, no runtime overhead
- Security: capability-based sandbox, no shared kernel
SpinKube — Wasm in Kubernetes
The SpinKube project (built around containerd-shim-spin) integrates a WebAssembly runtime directly into Kubernetes. It works as a containerd shim: Kubernetes schedules Wasm workloads just like containers, but instead of runc starting a Linux process, the shim hands the workload to a Wasm runtime (Spin, built on Wasmtime).
Kubernetes manifest for a Wasm workload
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-wasm-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-wasm-app
  template:
    metadata:
      labels:
        app: my-wasm-app
    spec:
      runtimeClassName: wasmtime-spin
      containers:
        - name: app
          image: ghcr.io/my-org/my-app:latest
          command: ["/"]
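For runtimeClassName: wasmtime-spin to resolve, the cluster also needs a matching RuntimeClass that points at the Spin shim registered on the nodes. A minimal sketch, assuming the shim is registered in containerd under the handler name spin (SpinKube’s install steps ship such a manifest; the exact class and handler names depend on your setup):

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: wasmtime-spin
handler: spin   # must match the runtime name configured in containerd on each node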
From an operator’s perspective, nothing changes — kubectl, Helm, ArgoCD, monitoring — everything works. Wasm workloads are first-class citizens in the Kubernetes ecosystem. The difference is under the hood: faster start, smaller footprint, stronger isolation.
Fermyon, Cosmonic, and the Ecosystem
Fermyon’s Spin framework offers a serverless-like developer experience: you write a handler in Rust, Go, TypeScript, or Python, compile it to Wasm, and deploy. Fermyon Cloud runs these applications entirely on Wasm, without traditional containers.
Cosmonic, the company behind the CNCF wasmCloud project, focuses on distributed Wasm applications with an actor model. Components communicate via capability providers, an abstraction that lets you swap implementations (e.g., the database) without recompiling the application.
CNCF has adopted several Wasm projects: wasmCloud, Krustlet (archived, replaced by SpinKube), WasmEdge. The ecosystem is converging around WASI Preview 2 and the Component Model — standards that define how Wasm modules interact.
Where It Makes Sense Today
- Edge computing: IoT gateways with limited resources where a container is too heavy. A Wasm module runs on an ARM device with 256 MB RAM.
- Plugin systems: Secure extension of applications by third parties. Envoy proxy filters, Istio extensions, and database UDFs all run as sandboxed Wasm modules (see the Istio example after this list).
- Serverless functions: Sub-millisecond cold start eliminates the main pain point of serverless. Fermyon Spin and Fastly Compute@Edge prove this in production.
- Multi-tenant SaaS: Each tenant runs in an isolated Wasm sandbox. Thousands of tenants on a single node without data leakage concerns.
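As an illustration of the plugin case, here is roughly what an Istio Wasm extension looks like: a WasmPlugin resource that loads a filter from an OCI registry into the ingress gateway. This is a sketch; the plugin name and image URL are hypothetical, and the fields follow Istio’s extensions.istio.io/v1alpha1 API.

apiVersion: extensions.istio.io/v1alpha1
kind: WasmPlugin
metadata:
  name: request-auth-filter           # hypothetical plugin name
  namespace: istio-system
spec:
  selector:
    matchLabels:
      istio: ingressgateway           # attach the filter to the ingress gateway
  url: oci://ghcr.io/my-org/request-auth-filter:latest   # hypothetical OCI image containing the Wasm filter
  phase: AUTHN                        # run in the authentication phase of the proxy's filter chain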
Where It Doesn’t (Yet) Make Sense
Stateful workloads: Wasm is designed for stateless execution. Databases, message brokers, distributed caches — those stay in containers.
Existing applications: Porting an enterprise Java application to Wasm is unrealistic. Wasm is for new workloads or components, not for lift-and-shift.
Debugging and observability: Tooling is maturing but still lags behind the container ecosystem. DWARF debug info in Wasm works, but IDE integration is limited.
Containers + Wasm = the Future
It’s not “containers or Wasm.” It’s containers and Wasm side by side in the same cluster. Kubernetes is the orchestration layer for both types of workloads. Heavy stateful services run in containers, lightweight event-driven functions in Wasm. One cluster, unified management, different runtime characteristics.
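A minimal sketch of the mixed model (names and images are hypothetical): two Deployments in the same cluster, where the only thing routing the second one to the Wasm shim is its runtimeClassName.

# A heavy stateful service stays a regular Linux container (default runtime).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-cache                 # hypothetical stateful service
spec:
  replicas: 1
  selector:
    matchLabels: { app: orders-cache }
  template:
    metadata:
      labels: { app: orders-cache }
    spec:
      containers:
        - name: redis
          image: redis:7
---
# A lightweight event-driven function runs as Wasm; only runtimeClassName differs.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-event-handler         # hypothetical Wasm function
spec:
  replicas: 5
  selector:
    matchLabels: { app: orders-event-handler }
  template:
    metadata:
      labels: { app: orders-event-handler }
    spec:
      runtimeClassName: wasmtime-spin
      containers:
        - name: handler
          image: ghcr.io/my-org/orders-event-handler:latest   # hypothetical Spin application image
          command: ["/"]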
Wasm in K8s — Watch, Experiment
WebAssembly workloads in Kubernetes are not the future — they are the present. SpinKube and wasmCloud are running in production at early adopters. The ecosystem is maturing rapidly.
Recommendation: install SpinKube in your dev cluster, deploy your first Spin application, measure cold start and resource consumption. Data will convince you better than any blog post.
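If you install SpinKube with its spin-operator, that first deployment can be a single SpinApp resource. The following is a sketch modeled on the SpinKube quickstart; the apiVersion, executor name, and image are assumptions to check against the spin-operator version you actually install:

apiVersion: core.spinoperator.dev/v1alpha1   # verify against the CRDs shipped with your spin-operator release
kind: SpinApp
metadata:
  name: hello-spin
spec:
  image: ghcr.io/my-org/hello-spin:latest    # hypothetical image pushed with `spin registry push`
  replicas: 2
  executor: containerd-shim-spin             # schedules onto nodes where the Spin shim is available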