The hottest new entrant in the container technology space is eBPF (extended Berkeley Packet Filter). eBPF has its origins in the Linux kernel and has the potential to provide significant new capabilities in three areas – observability, networking, and security. This post is an introduction to eBPF, where we discuss the key capabilities Solutions Architects and Technology Managers should look for. We will follow up with deep dives into individual capabilities in subsequent posts.
An Introduction to eBPF
The precursor to eBPF is the Berkeley Packet Filter (BPF). BPF was designed as an in-kernel execution engine, which was later extended into eBPF. A common analogy compares eBPF to JavaScript in the browser: just as JavaScript lets developers safely extend a web page's behavior without modifying the browser, eBPF lets developers safely extend the kernel in areas such as observability, networking, security enforcement, and service mesh. eBPF began as a kernel patch, and features were added over time that enable new functionality across kernel subsystems.
eBPF enables users to run custom programs within the Linux kernel. These programs can dynamically change and extend the way the kernel behaves, without changing the kernel's source code or loading new kernel modules. This enables the implementation of observability, security, and networking functionality.
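To make this concrete, here is a minimal sketch of a kernel-side eBPF program that attaches to the execve system call tracepoint and logs each program execution. This is an illustrative fragment, not a complete deployment: it assumes libbpf headers and a clang build targeting BPF (e.g., `clang -O2 -target bpf -c trace.c -o trace.o`), after which the object can be loaded with a tool such as bpftool.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Runs in the kernel every time a process calls execve(). */
SEC("tracepoint/syscalls/sys_enter_execve")
int trace_execve(void *ctx)
{
    char msg[] = "execve called\n";
    /* Writes to the kernel trace buffer, readable via
     * /sys/kernel/debug/tracing/trace_pipe. */
    bpf_trace_printk(msg, sizeof(msg));
    return 0;
}

char LICENSE[] SEC("license") = "GPL";
```

Before the kernel runs this program, the in-kernel verifier checks it for safety (bounded loops, valid memory access), which is what makes extending the kernel this way practical.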
As we have discussed many times in this blog, Kubernetes has become the de facto cloud-native platform, deployed extensively on Linux. eBPF targets Linux, but it has the hooks to enable extension into Windows-based operating systems as well. eBPF enables sandboxed programs to run in a VM-like environment within the kernel, allowing developers to add capabilities to the kernel safely. These use cases span a handful of key areas, discussed below.
The case for eBPF in Cloud Native Environments
There are four key areas for eBPF adoption in the container space –
- Security observability – eBPF provides visibility into system calls and process events for pods, down to the kernel level.
- Networking – eBPF supports networking at the kernel level, which improves performance and reduces latency for network operations. From the Cilium project – "The eBPF-based datapath features both IPv4 and IPv6 with the ability to support direct-routing, encapsulation/overlay topologies, as well as integration with cloud provider-specific networking layers."
- Service meshes/load balancing – Again, from the Cilium project – "Cilium can act as 100% kube-proxy replacement to provide all service load-balancing in a Kubernetes cluster. The implementation is highly scalable and supports direct server return (DSR) with session affinity. If possible, Cilium will perform the load balancing on the system call level and translate the address directly in the connect() system call instead of relying on network address translation throughout the entire duration of a network connection."
- Monitoring – eBPF enables monitoring of dataflow requests, responses, and metrics down to the kernel level, spanning time-series data across multi-cloud clusters. This is very exciting to practitioners in the monitoring space.
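The connect()-level load balancing described above can be sketched as a kernel-side eBPF program attached to the cgroup connect hook. This is a simplified illustration, not Cilium's actual implementation: the addresses are placeholder values, and a real load balancer would look backends up in a BPF map and spread connections across them.

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

/* Illustrative: rewrite connections aimed at a service VIP
 * (10.0.0.10:80) to go straight to a backend pod (10.0.1.5:8080),
 * so no NAT is needed for the life of the connection. */
SEC("cgroup/connect4")
int lb_connect4(struct bpf_sock_addr *ctx)
{
    if (ctx->user_ip4 == bpf_htonl(0x0A00000A) &&   /* 10.0.0.10 */
        ctx->user_port == bpf_htons(80)) {
        ctx->user_ip4 = bpf_htonl(0x0A000105);      /* 10.0.1.5 */
        ctx->user_port = bpf_htons(8080);
    }
    return 1;  /* allow the connection to proceed */
}

char LICENSE[] SEC("license") = "GPL";
```

Because the destination address is rewritten once, at connect() time, every subsequent packet on the connection flows directly to the backend – this is what makes the socket-level approach cheaper than per-packet network address translation.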
Conclusion
With adoption in projects such as Cilium and its incubation into the CNCF community, eBPF has begun making its presence felt. This is because the move to Kubernetes and microservices has introduced new challenges in deploying, monitoring, and securing applications – challenges that eBPF can help address. The next post will discuss eBPF architecture in depth.