Cilium
Cilium Documentation and Downloads can be found at the Cilium project website.
Requirements and Flags
- Host nodes need to have a mounted BPF filesystem (a minimal illustration follows this list)
- kube-controller-manager needs to have automatic node CIDR allocation enabled
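The BPF filesystem is conventionally mounted at /sys/fs/bpf (the equivalent of running `mount bpffs /sys/fs/bpf -t bpf`). Purely as an illustration of what this requirement amounts to, the sketch below performs the same mount via the mount(2) syscall; the path and source name are the usual conventions, not something specified on this page.

```c
/* Illustrative sketch only: mount the BPF filesystem at the conventional
 * path /sys/fs/bpf (requires root). Real deployments usually do this via
 * the mount command, an fstab entry, or a systemd mount unit. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    if (mount("bpffs", "/sys/fs/bpf", "bpf", 0, NULL) != 0) {
        perror("mount /sys/fs/bpf");
        return 1;
    }
    printf("BPF filesystem mounted at /sys/fs/bpf\n");
    return 0;
}
```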
Flag Options
Because the IP addresses used for the cluster prefix are typically allocated from RFC 1918 private address blocks and are not publicly routable, Cilium automatically masquerades the source IP address of all traffic leaving the cluster. This behavior can be disabled by running cilium-agent with the option --masquerade=false.
BPF Map Limitations[1]
All BPF maps are created with upper capacity limits. Insertions beyond a limit fail, which bounds the scalability of the datapath. The following table shows the default limits of the maps; a minimal sketch of such a capacity-limited map definition follows the table. Each limit can be bumped in the source code. Configuration options will be added on request if demand arises.
| Map Name | Scope | Default Limit | Scale Implications |
|---|---|---|---|
| Connection Tracking | node or endpoint | 1M TCP / 256k UDP | Max 1M concurrent TCP connections, max 256k expected UDP answers |
| Endpoints | node | 64k | Max 64k local endpoints + host IPs per node |
| IP cache | node | 512k | Max 256k endpoints (IPv4+IPv6), max 512k endpoints (IPv4 or IPv6) across all clusters |
| Load Balancer | node | 64k | Max 64k cumulative backends across all services across all clusters |
| Policy | endpoint | 16k | Max 16k allowed identity + port + protocol pairs for a specific endpoint |
| Proxy Map | node | 512k | Max 512k concurrent redirected TCP connections to proxy |
| Tunnel | node | 64k | Max 32k nodes (IPv4+IPv6) or 64k nodes (IPv4 or IPv6) across all clusters |
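Each entry in the table corresponds to an ordinary BPF map created with a fixed max_entries value. The following is a minimal, libbpf-style sketch of such a capacity-limited map, loosely modelled on the connection-tracking case; the names, key/value layout, and the 1M size are illustrative assumptions, not Cilium's actual definitions.

```c
/* Illustrative only: a BPF hash map declared with a hard upper capacity. */
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct ct_key {
    __u32 src_ip;
    __u32 dst_ip;
    __u16 src_port;
    __u16 dst_port;
    __u8  proto;
};

struct ct_entry {
    __u64 last_seen_ns;
    __u32 flags;
};

struct {
    __uint(type, BPF_MAP_TYPE_HASH);
    __uint(max_entries, 1 << 20);   /* hard upper limit, ~1M entries */
    __type(key, struct ct_key);
    __type(value, struct ct_entry);
} conntrack_map SEC(".maps");

char _license[] SEC("license") = "GPL";
```

Once such a map is full, updates that would insert a new key fail, which is what bounds the scalability of the datapath as described above.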
- ↑ Cilium documentation on BPF maps and their limitations.