cadvisor | Analyzes resource usage and performance characteristics of running containers | Continuous Deployment library
kandi X-RAY | cadvisor Summary
cAdvisor (Container Advisor) provides container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, histograms of complete historical resource usage and network statistics. This data is exported by container and machine-wide. cAdvisor has native support for Docker containers and should support just about any other container type out of the box. We strive for support across the board so feel free to open an issue if that is not the case. cAdvisor's container abstraction is based on lmctfy's so containers are inherently nested hierarchically.
Community Discussions
Trending Discussions on cadvisor
QUESTION
For the prometheus deployment's ClusterRole I have
...ANSWER
Answered 2021-May-18 at 13:51
Make sure that the /var/run/secrets/kubernetes.io/serviceaccount/token file contains the correct token. To do so, you can enter the Prometheus pod with:
kubectl exec -it <prometheus-pod> -n <namespace> -- bash
and cat the token file. Then exit the pod and execute:
echo $(kubectl get secret <serviceaccount-secret> -n <namespace> -o jsonpath='{.data.token}') | base64 --decode
If the tokens match, you can try querying the Kubernetes API server with Postman or Insomnia to see whether the rules you put in your ClusterRole are correct. I suggest you query both the /proxy/metrics/cadvisor and /proxy/metrics URLs.
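For reference, a minimal ClusterRole along these lines (a sketch under the usual kube-prometheus assumptions, not the asker's actual manifest) grants access to the node metrics endpoints that those proxy URLs expose:

# Hypothetical ClusterRole for scraping node /metrics and /metrics/cadvisor via the API server proxy;
# adjust the rules to match what the Prometheus deployment actually needs.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
  - apiGroups: [""]
    resources: ["nodes", "nodes/metrics", "nodes/proxy", "services", "endpoints", "pods"]
    verbs: ["get", "list", "watch"]
  - nonResourceURLs: ["/metrics"]
    verbs: ["get"]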
QUESTION
I'm having issues with Prometheus alerting rules. I have various cAdvisor-specific alerts set up, for example:
...ANSWER
Answered 2021-Apr-26 at 21:32
This is due to the sum function that you use; it gathers all the time series present and adds them, grouping by (instance, name). If you run the same query in Prometheus, you'll see that sum keeps only the grouping labels.
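As an illustration (a hypothetical rule, not the asker's exact one), a cAdvisor-based alert built on that kind of aggregation might look like the following; every label other than instance and name is dropped from the result:

# Hypothetical Prometheus rule: the expr mirrors the sum ... by (instance, name) aggregation
# discussed above, so only those two labels survive into the alert.
groups:
  - name: cadvisor-example
    rules:
      - alert: ContainerHighMemory
        expr: sum(container_memory_working_set_bytes{name!=""}) by (instance, name) > 1e9
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Container {{ $labels.name }} on {{ $labels.instance }} uses more than ~1 GB"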
QUESTION
I have deployed Istio on Kubernetes, and I installed Prometheus from the Istio addons. My goal is to monitor only some pods of one application (such as all pods of the bookinfo application). The job definition for monitoring pods is as below:
...ANSWER
Answered 2021-Apr-26 at 11:19
The following will match all target pods that carry the label some_label with any value.
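A sketch of how such a rule can be expressed in the scrape config (the job name and namespace here are assumptions; only the relabeling on some_label reflects the answer):

# Hypothetical scrape job: keep only pods that have the label some_label, whatever its value.
- job_name: bookinfo-pods
  kubernetes_sd_configs:
    - role: pod
      namespaces:
        names: ["default"]
  relabel_configs:
    - source_labels: [__meta_kubernetes_pod_label_some_label]
      regex: .+
      action: keep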
QUESTION
I'm trying to find out and understand how the OOM-killer works on containers.
To figure it out, I've read lots of articles and found out that the OOM-killer kills a container based on its oom_score, and oom_score is determined by oom_score_adj and the memory usage of that process.
There are two metrics, container_memory_working_set_bytes and container_memory_rss, from cAdvisor for monitoring memory usage of a container.
It seems that RSS memory (container_memory_rss) has an impact on oom_score, so I can understand that with the container_memory_rss metric: if that metric reaches the memory limit, the OOM-killer will kill the process.
- https://github.com/torvalds/linux/blob/v3.10/fs/proc/base.c#L439
- https://github.com/torvalds/linux/blob/v3.10/mm/oom_kill.c#L141
- https://github.com/torvalds/linux/blob/v3.10/include/linux/mm.h#L1136
But articles like the ones below say:
- https://faun.pub/how-much-is-too-much-the-linux-oomkiller-and-used-memory-d32186f29c9d
- https://blog.freshtracks.io/a-deep-dive-into-kubernetes-metrics-part-3-container-resource-metrics-361c5ee46e66
The better metric is container_memory_working_set_bytes as this is what the OOM killer is watching for.
I cannot understand the fact that the OOM-killer watches the container's working set memory. I think I don't understand the meaning of working set memory for a container, which is 'total usage - inactive file'.
Where can I find the reference? Or could you explain the relationship between working set memory and OOM-kill on the container?
...ANSWER
Answered 2021-Mar-30 at 06:33
As you already know, container_memory_working_set_bytes is:
the amount of working set memory and it includes recently accessed memory, dirty memory, and kernel memory. Therefore, the working set is less than or equal to total usage.
container_memory_working_set_bytes is used for OOM decisions because it excludes cached data (the Linux page cache) that can be evicted under memory pressure.
So, if container_memory_working_set_bytes grows up to the limit, it will lead to an OOM kill.
When the Linux kernel checks available memory, it calls vm_enough_memory() to find out how many pages are potentially available.
Then, when the machine is low on memory, old page frames including the page cache are reclaimed, but the kernel may still find that it was unable to free enough pages to satisfy the request. At that point it calls out_of_memory() to kill a process, and to determine the candidate process to be killed it uses oom_score.
So when the working set bytes reach the limit, it means the kernel cannot find available pages even after reclaiming old pages including the cache, so it triggers the OOM-killer to kill the process.
You can find more details in the Linux kernel documentation.
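To make use of that in practice, a rule along these lines (a sketch; the 90% threshold and label matchers are assumptions, and containers without a memory limit report container_spec_memory_limit_bytes as 0, so they are filtered out) alerts before the working set hits the limit:

# Hypothetical alert: fire when a container's working set exceeds 90% of its memory limit,
# i.e. when it is getting close to being OOM-killed.
groups:
  - name: container-memory
    rules:
      - alert: ContainerNearMemoryLimit
        expr: |
          container_memory_working_set_bytes{name!=""}
            / (container_spec_memory_limit_bytes{name!=""} > 0)
          > 0.9
        for: 5m
        annotations:
          summary: "Container {{ $labels.name }} is using over 90% of its memory limit"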
QUESTION
I'm trying to create a JSON file that has the following structure:
...ANSWER
Answered 2021-Mar-30 at 04:01
As Burak commented, shouldn't your curl command be ...
QUESTION
I want to monitor Docker containers running on multiple servers. Let's say I have servers A and B with containers running inside them, and now I add one server C; I want to monitor all Docker containers inside servers A and B only from server C. I have configured Docker to expose metrics on all servers, following the Docker docs, not using cAdvisor. The target status shows 'ok' for all the servers, but the problem is that, since the expression is the same for all the Docker containers, Prometheus is not able to differentiate between the servers. Can anyone share a sample Prometheus rule file with an expression such as 'the number of stopped containers should not be less than x'? This is my current rule file:
...ANSWER
Answered 2021-Mar-25 at 11:00
I have added instance="ip" to differentiate between the servers, i.e.:
expr: engine_daemon_container_states_containers{instance="serverA",state="running"} < 10
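A full rule file built around that expression could look like the sketch below (the alert names, thresholds, and the second per-server rule are assumptions added for illustration):

# Hypothetical rules file: one alert per monitored server, distinguished by the instance label.
groups:
  - name: docker-containers
    rules:
      - alert: TooFewRunningContainersServerA
        expr: engine_daemon_container_states_containers{instance="serverA",state="running"} < 10
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Fewer than 10 running containers on serverA"
      - alert: TooFewRunningContainersServerB
        expr: engine_daemon_container_states_containers{instance="serverB",state="running"} < 10
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "Fewer than 10 running containers on serverB"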
QUESTION
I would like to have Prometheus and Grafana running on my developer machine using Docker images / Docker for Windows.
I have my system under development, an ASP.NET Core app, running on localhost:5001, and metrics are showing just fine at https://localhost:5001/metrics.
docker-compose.yml and prometheus.yml are listed below.
- If I include network_mode: host in docker-compose.yml, I can't access Prometheus on my physical machine via localhost:9090.
- If I exclude network_mode and instead use ports:, I can access Prometheus on my physical machine via localhost:9090, but checking http://localhost:9090/targets, it shows https://localhost:5001/metrics as being down.
What am I doing wrong? Any comments welcome!
docker-compose.yml: ...ANSWER
Answered 2021-Mar-22 at 11:51
Do not use host network mode on Windows; it is only supported on Linux. What you need is to change the target address.
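The idea, sketched below under the assumption that Docker Desktop's host.docker.internal name resolves to the Windows host and that the app stays on port 5001, is to point the scrape target at the host rather than at localhost (which, inside the container, is the container itself):

# prometheus.yml sketch: reach the ASP.NET Core app on the Windows host from inside the container.
scrape_configs:
  - job_name: aspnet-app
    scheme: https
    tls_config:
      insecure_skip_verify: true   # the localhost dev certificate is usually self-signed
    static_configs:
      - targets: ["host.docker.internal:5001"]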
QUESTION
When starting cAdvisor, I am getting Factory "docker" was unable to handle container "/system.slice/kdump.service". I am trying to understand what these messages are for, and ... how can I resolve them?
Any pointers will be appreciated.
My docker-compose.yml:
...ANSWER
Answered 2021-Jan-28 at 07:12
kdump.service is a system service, not a Docker container. You can read more on it here. What you see in the logs is debug information, telling you that cAdvisor has no handler for the kernel dump service. This is not an error, and it is only visible because you've increased verbosity ("-v=4" in your command). You can either decrease verbosity or simply ignore these messages.
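For example (the image tag, port mapping, and mounts below are assumptions, not taken from the asker's compose file), the cAdvisor service could lower the glog verbosity or drop the flag entirely:

# Hypothetical docker-compose service: verbosity lowered from -v=4 so the
# "unable to handle container" debug lines no longer appear.
services:
  cadvisor:
    image: gcr.io/cadvisor/cadvisor:v0.47.2
    command:
      - "-v=1"        # or omit the flag for the default verbosity
    ports:
      - "8080:8080"
    volumes:
      - /:/rootfs:ro
      - /var/run:/var/run:ro
      - /sys:/sys:ro
      - /var/lib/docker/:/var/lib/docker:ro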
QUESTION
I have two deployments running v1 and v2 of the same service in Istio. I have set up a custom metric 'istio-total-requests' to be collected through the Prometheus adapter.
I have set up an HPA to scale the v2 deployment and can see the target value increasing when I send requests, but the HPA is not scaling the number of pods/replicas.
I am running Kubernetes v1.19 on minikube v1.13.1 and cannot understand why it's not scaling.
...ANSWER
Answered 2021-Jan-07 at 09:25
According to your screenshot, the HPA works as expected because your metric value is lower than the threshold. If the value does not go over your threshold, the HPA will not trigger a scale-up; instead, it may trigger a scale-down in your case.
The metric you are using right now is istio_requests_per_second, which is calculated from the total requests per second. The first screenshot shows the average value is 200m, which is 0.2. Your threshold is 10, so the HPA definitely would not scale up in this case.
The selector gives you the ability to select your target label under the metric. For example, if you want to scale the instances against the GET method only, you can define a selector matching the GET method label.
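For reference, an HPA of that shape might look like the sketch below (the deployment name, replica bounds, threshold, and the request_method label are assumptions; the Pods metric and selector follow the discussion above):

# Hypothetical autoscaling/v2beta2 HPA (Kubernetes 1.19): scale the v2 deployment on the
# istio_requests_per_second pods metric, restricted to GET requests via a metric selector.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myservice-v2
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myservice-v2
  minReplicas: 1
  maxReplicas: 5
  metrics:
    - type: Pods
      pods:
        metric:
          name: istio_requests_per_second
          selector:
            matchLabels:
              request_method: GET   # assumed label name exposed by the metrics adapter
        target:
          type: AverageValue
          averageValue: "10"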
QUESTION
I have cloudflared my site which is hosted on GoDaddy.
I also set up an email for this site:
info@domain.com
It sends email fine.
No incoming email is received.
On Cloudflare it says: Add an MX record for your root domain so that mail can reach @trackpython.com addresses.
Update: I have added the correct MX record on CLOUDFLARE - but still no luck.
What else do I need to do to resolve the problem of no incoming email?
The GoDaddy CAdvisor says the "email routing" is wrong...but offers no further help.
Thanks in advance.
...ANSWER
Answered 2020-Dec-31 at 12:55
I hope this helps someone. There were two things that I needed to do to get this working:
On GoDaddy - email routing: I had to change this to "LOCAL MAIL EXCHANGER".
On Cloudflare: I had to set up two records for this to work:
#1 - Type: MX, Name: domainname.com, Content: mail.domainname.com, TTL: 1 hr, Proxy status: DNS only
#2 - Type: A, Name: mail, Content: xx.xxx.xxx.xx (IP address of the server), TTL: Auto, Proxy status: Proxied
Worked!
Prior to this, as mentioned in the question, I had set up the DNS in GoDaddy's WHM (set the name servers) and then created an A record pointing to the server in the DNS Management tab for the domain.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported