Traditional monitoring works like an X-ray — you see an image but have to guess what’s happening inside. eBPF is like an endoscope: you go directly into the operating system kernel and observe every packet, syscall, and memory allocation in real time. Without agents. Without code instrumentation. Without restarts.
What Is eBPF and Why You Should Know It¶
eBPF (extended Berkeley Packet Filter) is a technology built directly into the Linux kernel that allows running sandboxed programs in kernel space. It originally emerged for filtering network packets, but today it covers observability, security, and networking.
Key property: eBPF programs attach to hook points in the kernel — to syscalls, network events, scheduler events, file system operations. The kernel verifies them before execution (no infinite loops, no access to forbidden memory), so they are safe even in production.
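As a minimal sketch of the hook-point model, this bpftrace one-liner attaches to the `raw_syscalls:sys_enter` tracepoint and counts syscalls per process name (it assumes bpftrace is installed and is run as root):

```shell
# Count syscall invocations per process name; the in-kernel verifier
# checks the program before it is allowed to attach.
bpftrace -e 'tracepoint:raw_syscalls:sys_enter { @calls[comm] = count(); }'
```

Press Ctrl-C to stop; bpftrace prints the accumulated @calls map on exit.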
Why is it revolutionary? Traditional monitoring requires either agents (overhead, maintenance) or application instrumentation (code changes, vendor lock-in). eBPF needs neither — it observes the system from within the kernel, transparently to applications.
Three Pillars of eBPF Observability¶
1. Network Observability¶
eBPF sees every packet passing through the network stack. Unlike traditional tools (tcpdump, Wireshark), it does so efficiently — filtering and aggregating directly in the kernel, sending only relevant data to user space.
- Cilium Hubble: Full L3/L4/L7 visibility in Kubernetes — who communicates with whom, latency, error rate, DNS queries
- TCP retransmissions: Packet loss detection without pcap — identifying problematic nodes in the cluster
- Service map: Automatic dependency map between microservices, without code changes
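To illustrate the retransmission point, here is a sketch (assuming a 4.15+ kernel, which exposes the `tcp:tcp_retransmit_skb` tracepoint, and bpftrace run as root) that reports a per-second retransmit count without capturing a single packet:

```shell
# Count TCP retransmissions per second, aggregated entirely in the kernel,
# no pcap required.
bpftrace -e 'tracepoint:tcp:tcp_retransmit_skb { @retransmits = count(); }
             interval:s:1 { print(@retransmits); clear(@retransmits); }'
```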
2. Application Performance¶
eBPF can monitor application performance without APM agents. Using uprobe/uretprobe, it attaches to user space functions and measures latency, allocations, and I/O operations.
```shell
# Monitoring HTTP handler latency in a Go application
# using bpftrace — without code changes!
bpftrace -e 'uprobe:/usr/bin/myapp:net/http.(*ServeMux).ServeHTTP {
    @start[tid] = nsecs;
}
uretprobe:/usr/bin/myapp:net/http.(*ServeMux).ServeHTTP /@start[tid]/ {
    @latency_us = hist((nsecs - @start[tid]) / 1000);
    delete(@start[tid]);
}'
```
This approach typically adds under 2% overhead, far less than traditional APM agents (typically 5–15%).
3. Security Monitoring¶
eBPF is the foundation of the new generation of runtime security tools. Instead of firewall-level rules, it monitors process behavior in real time:
- Tetragon (Cilium): Kernel-level security observability — monitoring syscalls, file access, network connections at the process level
- Falco + eBPF driver: Runtime threat detection with an eBPF backend instead of a kernel module
- Anomaly detection: A process launched a shell in a container? Connected to an unusual port? eBPF catches it immediately
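As a sketch of the anomaly-detection idea above (assumes bpftrace run as root; the shell paths are just examples), catching shell executions the moment they happen:

```shell
# Print a line whenever /bin/sh or /bin/bash is exec'd; comm is the name
# of the process image issuing the execve call.
bpftrace -e 'tracepoint:syscalls:sys_enter_execve
    /str(args->filename) == "/bin/sh" || str(args->filename) == "/bin/bash"/
    { printf("shell exec: pid %d (%s)\n", pid, comm); }'
```

Production tools like Tetragon and Falco do the same thing with container and policy context attached; this shows only the underlying principle.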
eBPF in Kubernetes — Where It Makes the Most Sense¶
Kubernetes is the ideal environment for eBPF. A dynamic environment with thousands of containers, ephemeral pods, service mesh — traditional monitoring can’t keep up. eBPF solves several painful problems:
- Replacing kube-proxy: Cilium with eBPF replaces iptables — better performance, better visibility, scaling to thousands of services
- Service mesh without sidecars: Cilium Service Mesh implements mTLS and L7 policies directly in the kernel — no Envoy sidecar containers, lower overhead
- Network policies: Native enforcement in the kernel, not through iptables chains
- Pod-level metrics: CPU, memory, network traffic — aggregated directly in the kernel per cgroup
Ecosystem of Tools in 2025¶
The ecosystem around eBPF in 2025 has matured to production quality:
- Cilium 1.16+: CNI plugin for Kubernetes with full eBPF observability and service mesh
- Grafana Beyla: Auto-instrumentation of HTTP/gRPC services using eBPF — zero-code observability
- Pixie (CNCF): Kubernetes observability platform — auto-telemetry without agents
- bpftrace: High-level tracing language for ad-hoc debugging — “awk for kernel tracing”
- Tetragon: Security observability and runtime enforcement from the creators of Cilium
- Kepler: Kubernetes energy monitoring using eBPF — tracking energy consumption per pod
Practical Deployment — How to Get Started¶
eBPF requires Linux kernel 5.10+ (ideally 5.15+ or 6.x for all features). Most cloud providers (AKS, EKS, GKE) already offer nodes with a sufficiently new kernel.
- Start with Cilium: Replace the default CNI (Calico/Flannel) with Cilium — you’ll get network observability “for free”
- Enable Hubble: Dashboard for visualizing network traffic — service map, latency, DNS
- Add Grafana Beyla: Auto-instrumentation for RED metrics (Rate, Errors, Duration) without code changes
- Security with Tetragon: Runtime policies — detection of suspicious behavior in containers
- Ad-hoc debugging: Teach the team bpftrace for quick diagnosis of production issues
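Two one-liners from the standard bpftrace repertoire are a good place to start when teaching the team (both must be run as root):

```shell
# Histogram of VFS read request sizes across the whole system
bpftrace -e 'kprobe:vfs_read { @bytes = hist(arg2); }'

# Which files are being opened, and by which command?
bpftrace -e 'tracepoint:syscalls:sys_enter_openat { @[comm, str(args->filename)] = count(); }'
```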
What to Watch Out For¶
eBPF is not a silver bullet. A few things worth knowing upfront:
- Kernel version: Older distributions (RHEL 7, CentOS 7) don’t have sufficient support — plan an upgrade
- BTF (BPF Type Format): For CO-RE (Compile Once, Run Everywhere) you need a kernel with BTF — check before deployment
- Windows: eBPF for Windows exists but is significantly behind Linux — primarily a Linux technology
- Learning curve: Writing custom eBPF programs requires knowledge of C and kernel internals — for most teams, ready-made tools are sufficient
- Privileges: Loading eBPF programs requires the CAP_BPF capability (CAP_SYS_ADMIN on kernels older than 5.8), which has security implications in multi-tenant environments
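Two of the checks above can be run on any node before deployment; this snippet verifies the kernel version and the presence of BTF type information:

```shell
# Check the kernel version (most eBPF tooling wants 5.10+, ideally 5.15+)
uname -r

# Check for BTF type information, required by CO-RE tools
if [ -e /sys/kernel/btf/vmlinux ]; then
    echo "BTF available"
else
    echo "BTF missing: CO-RE tools will not work on this kernel"
fi
```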
Summary¶
eBPF is the most important infrastructure technology of the decade. In 2025, it moved from the experimental phase to mainstream production deployment. For Kubernetes clusters, Cilium is practically the standard. For observability, it offers an agentless approach with minimal overhead.
CORE SYSTEMS implements eBPF-based observability and security monitoring for Kubernetes and bare-metal infrastructure.
Need help with implementation?
Our experts can help with design, implementation, and operations. From architecture to production.
Contact us