kandi X-RAY | cilium Summary
eBPF-based Networking, Security, and Observability
Trending Discussions on cilium
I have a dataframe df which contains a single column GO. Each row in df contains either one term or multiple terms (separated by ;), and each term has a specific format: it starts with either P, C or F, followed by a : and then the actual term.
ANSWER (Answered 2022-Apr-16 at 07:52)
A tidyverse approach to achieve your desired result may look like this:
I am learning a loopback TCP acceleration technique based on eBPF sockmap / redirection.
I've found that in all the relevant articles and examples, it seems we only need to add entries to the sockmap table via the bpf_sock_hash_update helper, then look up the table and redirect via the bpf_msg_redirect_hash helper. For example: here, here, and here.
I didn't find any code that deletes entries from the sockmap table (e.g. by calling bpf_map_delete_elem), and I also haven't found any kernel code that automatically deletes entries for closed TCP connections, for example: here.
So I'm curious, why is there no need to delete sockmap entries for closed connections in these articles and code?
And do we need to detect TCP FIN events in our ebpf code and then explicitly delete the corresponding entry in the sockmap?
ANSWER (Answered 2022-Mar-17 at 04:15)
After some testing, I realized that there is no need to manually delete the entries in the sockmap table.
By observing the entries in the sockmap table with bpftool map dump id <MAP_ID> | grep "key:" | wc -l, you can see that the table size is always equal to twice the number of concurrent TCP connections on the loopback device.
So closed TCP connections are evidently removed from the sockmap table automatically.
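The insert side that the cited articles describe can be sketched as a sockops program. This is only a sketch: it assumes a BPF_MAP_TYPE_SOCKHASH named sock_ops_map, plus a sock_key struct and an extract_key4() helper that are hypothetical here, not any article's exact code:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

SEC("sockops")
int add_to_sockmap(struct bpf_sock_ops *skops)
{
    struct sock_key key = {};

    switch (skops->op) {
    case BPF_SOCK_OPS_ACTIVE_ESTABLISHED_CB:   /* connect() side */
    case BPF_SOCK_OPS_PASSIVE_ESTABLISHED_CB:  /* accept() side */
        extract_key4(skops, &key);             /* build the 4-tuple key */
        /* No matching delete on close is needed: the kernel unlinks a
         * socket from any sockmap/sockhash it is in when the socket is
         * destroyed. */
        bpf_sock_hash_update(skops, &sock_ops_map, &key, BPF_NOEXIST);
        break;
    }
    return 0;
}
```

Because both ends of a loopback connection are local, each connection contributes two sockets, which matches the 2x count observed with bpftool.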
I am trying to enable DNS for my pods with network policy. I am using https://kubernetes.io/docs/tasks/administer-cluster/dns-debugging-resolution/
When DNS works:...
ANSWER (Answered 2021-Oct-14 at 11:32)
The port is overwritten by the DNS service to 8053. The tcpdump capture is running inside the pod, so it does not see that the traffic is re-routed.
ANSWER (Answered 2022-Feb-09 at 08:31)
eBPF programs only unload when there are no more references to them (file descriptors, pins), but network links also hold their own references. So to unload the program, you first have to detach it from your network link.
You can do so by setting the program fd to -1:
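For example, with libbpf the detach is the same attach call with fd set to -1. A sketch, assuming the interface name is known; bpf_set_link_xdp_fd is the older libbpf spelling, and newer libbpf versions provide bpf_xdp_detach() instead:

```c
#include <bpf/libbpf.h>   /* bpf_set_link_xdp_fd() */
#include <net/if.h>       /* if_nametoindex() */

/* Detach whatever XDP program is attached to ifname. Once no link,
 * pinned path, or open fd still references the program, the kernel
 * unloads it automatically. */
int detach_xdp(const char *ifname)
{
    int ifindex = if_nametoindex(ifname);

    if (ifindex == 0)
        return -1;
    return bpf_set_link_xdp_fd(ifindex, -1, 0);
}
```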
I'm trying to compile kaniko on a raspberry pi.
I don't program in golang, but I was able to compile kaniko successfully a few weeks ago on the same raspberry pi, and even wrote myself a guide of the steps to follow, but now, following the same steps, something is broken.
The build needs go, but a more recent version of go than the one found in the raspberry pi repos, so I download and install go from scratch. go itself needs go to compile, so I first install it (an older version) from the repos, and then remove it after it's done compiling a more recent version of itself:
ANSWER (Answered 2022-Feb-04 at 19:56)
Based on the comments, my suggestion is to add $HOME/go/bin to the PATH and use the default GOPATH.
Go mod depends on the bin directory inside the GOPATH; it installs new packages there. The go binary itself can reside somewhere else. If you follow the install instructions at https://go.dev/doc/install, go itself will be in /usr/local/go, but the GOPATH is still $HOME/go.
I would also recommend not involving apt in this at all; mixing apt packages with a manual installation looks like a recipe for conflicts.
I am currently trying to move my calico based clusters to the new Dataplane V2, which is basically a managed Cilium offering. For local testing, I am running k3d with open source cilium installed, and created a set of NetworkPolicies (k8s native ones, not CiliumPolicies), which lock down the desired namespaces.
My current issue is that when porting the same policies to a GKE cluster (with Dataplane V2 enabled), those same policies don't work.
As an example let's take a look into the connection between some app and a database:...
ANSWER (Answered 2022-Jan-04 at 14:17)
Update: I was able to solve the mystery and it was ArgoCD all along. Cilium is creating an Endpoint and Identity for each object in the namespace, and Argo was deleting them after deploying the applications.
For anyone who stumbles on this, the solution is to add this exclusion to ArgoCD:
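The exclusion snippet itself is not quoted here. A sketch of the commonly documented form, using Argo CD's resource.exclusions setting in the argocd-cm ConfigMap and assuming the identities should be ignored cluster-wide:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-cm
  namespace: argocd
data:
  resource.exclusions: |
    - apiGroups:
        - cilium.io
      kinds:
        - CiliumIdentity
      clusters:
        - "*"
```

With this in place, Argo CD neither tracks nor prunes the CiliumIdentity objects that Cilium creates alongside the deployed applications.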
How can egress from a Kubernetes pod be limited to only specific FQDN/DNS with Azure CNI Network Policies?
This is something that can be achieved with:
ANSWER (Answered 2021-Oct-20 at 04:53)
Apply K8s network policies
As shown in the following, the BPF verifier log is truncated at the end. How can I get the full log?...
ANSWER (Answered 2021-Aug-17 at 10:23)
You need to pass a larger buffer (and indicate its length accordingly) to the verifier when you load your program.
The kernel receives a pointer to a union bpf_attr, which for loading programs starts like this:
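For reference, the BPF_PROG_LOAD member of that union begins like this (abridged from the kernel's include/uapi/linux/bpf.h):

```c
union bpf_attr {
    /* ... */
    struct {                  /* anonymous struct used by BPF_PROG_LOAD */
        __u32 prog_type;
        __u32 insn_cnt;
        __aligned_u64 insns;
        __aligned_u64 license;
        __u32 log_level;      /* verbosity level of the verifier */
        __u32 log_size;       /* size of the user-supplied buffer */
        __aligned_u64 log_buf; /* pointer to the user-supplied buffer */
        /* ... */
    };
    /* ... */
};
```

If log_size is smaller than the verifier's output, the log is cut off; allocating a larger log_buf (several megabytes is common for complex programs) and setting log_size to match yields the full log.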
bpf_xdp_adjust_meta(ctx, -delta); is returning error code -13 (permission denied) when delta > 32.
But the BPF and XDP Reference Guide states that there are 256 bytes of headroom for metadata.
So did I misunderstand something, or how can I use 256 bytes for metadata?
ANSWER (Answered 2021-Aug-09 at 08:16)
The maximum room for metadata is only 32 bytes, so what you observe is expected.
You can check this by reading the relevant kernel code, or the log of the commit that introduced the feature.
The documentation you cited refers to the headroom for encapsulation headers that you can modify with bpf_xdp_adjust_head(), not to the space available for metadata. Admittedly it's not clear from the text (but PRs are welcome!).
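For illustration, a minimal XDP sketch that stays within the 32-byte limit; struct meta_info is a made-up example, not a kernel type:

```c
#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* The kernel caps the metadata area at 32 bytes, so this struct must
 * fit within that (and the delta must be a multiple of 4). */
struct meta_info {
    __u32 mark;
};

SEC("xdp")
int xdp_meta(struct xdp_md *ctx)
{
    struct meta_info *meta;
    void *data;

    /* A delta beyond -32 fails here with -EACCES (-13). */
    if (bpf_xdp_adjust_meta(ctx, -(int)sizeof(*meta)))
        return XDP_ABORTED;

    data = (void *)(long)ctx->data;
    meta = (void *)(long)ctx->data_meta;
    if ((void *)(meta + 1) > data)   /* bounds check for the verifier */
        return XDP_ABORTED;

    meta->mark = 42;
    return XDP_PASS;
}
```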
I'm trying to create an internal ingress for inter-cluster communication with gke. The service that I'm trying to expose is headless and points to a kafka-broker on the cluster.
However when I try to load up the ingress, it says it cannot find the service?...
ANSWER (Answered 2021-Jun-11 at 11:12)
Setting up ingress for internal load balancing requires you to configure a proxy-only subnet on the same VPC used by your GKE cluster. This subnet will be used for the load balancer's proxies. You'll also need to create a firewall rule to allow that traffic.
No vulnerabilities reported