
metrics-server | efficient source of container resource metrics | Monitoring library

by kubernetes-sigs | Go | Version: metrics-server-helm-chart-3.8.2 | License: Apache-2.0



kandi X-RAY | metrics-server Summary

metrics-server is a Go library typically used in Performance Management, Monitoring, Docker, and Prometheus applications. It has no reported bugs or vulnerabilities, a permissive license, and medium community support. You can download it from GitHub.
Metrics Server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines. Metrics Server collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler. Metrics API can also be accessed by kubectl top, making it easier to debug autoscaling pipelines. Metrics Server is not meant for non-autoscaling purposes. For example, don't use it to forward metrics to monitoring solutions, or as a source of monitoring solution metrics. In such cases please collect metrics from Kubelet /metrics/resource endpoint directly.
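As a quick illustration, the Metrics API served by Metrics Server can be queried through kubectl (a sketch; these commands assume a cluster with Metrics Server already installed):

```shell
# Node-level CPU/memory usage via the Metrics API
kubectl top nodes

# Pod-level usage, broken down per container
kubectl top pods --all-namespaces --containers

# The raw Metrics API group that kubectl top talks to
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes
```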

Support

  • metrics-server has a medium active ecosystem.
  • It has 3589 star(s) with 1202 fork(s). There are 79 watchers for this library.
  • There were 5 major release(s) in the last 12 months.
  • There are 21 open issues and 490 have been closed. On average issues are closed in 48 days. There are 6 open pull requests and 0 closed requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of metrics-server is metrics-server-helm-chart-3.8.2.

Quality

  • metrics-server has 0 bugs and 0 code smells.

Security

  • metrics-server has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • metrics-server code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • metrics-server is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • metrics-server releases are available to install and integrate.
  • Installation instructions, examples and code snippets are available.
  • It has 7918 lines of code, 237 functions and 50 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.

metrics-server Key Features

  • A single deployment that works on most clusters (see Requirements)
  • Fast autoscaling, collecting metrics every 15 seconds
  • Resource efficiency, using 1 milli-core of CPU and 2 MB of memory per node in a cluster
  • Scalable support for up to 5,000-node clusters
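The resource metrics above are what the Horizontal Pod Autoscaler consumes. A minimal HPA that scales on CPU utilization might look like this (the deployment name my-app and the thresholds are illustrative, not taken from this project):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app        # the workload to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # scale out above 80% average CPU
```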

Installation

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
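Once the manifest is applied, the installation can be verified roughly as follows (a sketch; exact output varies by cluster):

```shell
# Wait for the metrics-server deployment to become ready
kubectl -n kube-system rollout status deployment/metrics-server

# The APIService should report Available=True once metrics flow
kubectl get apiservice v1beta1.metrics.k8s.io

# Metrics typically appear within a minute or so
kubectl top nodes
```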

High Availability

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/high-availability.yaml

Configuration

docker run --rm k8s.gcr.io/metrics-server/metrics-server:v0.6.0 --help
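The --help output above lists all supported flags. Two that come up frequently are shown here as container args in the metrics-server Deployment (the values are examples, not production recommendations):

```yaml
# Fragment of the metrics-server Deployment spec
containers:
- name: metrics-server
  image: k8s.gcr.io/metrics-server/metrics-server:v0.6.0
  args:
  - --metric-resolution=15s   # how often metrics are scraped from Kubelets
  - --kubelet-insecure-tls    # skip Kubelet cert verification (dev/test clusters only)
```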

Microk8s dashboard using nginx-ingress via http not working (Error: `no matches for kind "Ingress" in version "extensions/v1beta1"`)

error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/configuration-snippet: |
      rewrite ^(/dashboard)$ $1/ redirect;
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    kubernetes.io/ingress.class: public
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - http:
      paths:
      - path: /dashboard(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443

Deleted kube-proxy

$ kubeadm init phase addon kube-proxy  --kubeconfig ~/.kube/config  --apiserver-advertise-address 192.168.22.101
[addons] Applied essential addon: kube-proxy

Kubernetes metrics-server not working with Linkerd

:; kubectl top pod -n linkerd --containers
POD                                       NAME             CPU(cores)   MEMORY(bytes)   
linkerd-destination-5cfbd7468-7l22t       destination      2m           41Mi            
linkerd-destination-5cfbd7468-7l22t       linkerd-proxy    1m           13Mi            
linkerd-destination-5cfbd7468-7l22t       policy           1m           81Mi            
linkerd-destination-5cfbd7468-7l22t       sp-validator     1m           34Mi            
linkerd-identity-fc9bb697-s6dxw           identity         1m           33Mi            
linkerd-identity-fc9bb697-s6dxw           linkerd-proxy    1m           12Mi            
linkerd-proxy-injector-668455b959-rlvkj   linkerd-proxy    1m           13Mi            
linkerd-proxy-injector-668455b959-rlvkj   proxy-injector   1m           40Mi  
:; kubectl rollout restart -n linkerd deployment linkerd-destination 
deployment.apps/linkerd-destination restarted
:; while ! kubectl top pod -n linkerd --containers linkerd-destination-6d974dd4c7-vw7nw ; do sleep 10 ; done
Error from server (NotFound): podmetrics.metrics.k8s.io "linkerd/linkerd-destination-6d974dd4c7-vw7nw" not found
Error from server (NotFound): podmetrics.metrics.k8s.io "linkerd/linkerd-destination-6d974dd4c7-vw7nw" not found
Error from server (NotFound): podmetrics.metrics.k8s.io "linkerd/linkerd-destination-6d974dd4c7-vw7nw" not found
Error from server (NotFound): podmetrics.metrics.k8s.io "linkerd/linkerd-destination-6d974dd4c7-vw7nw" not found
POD                                    NAME            CPU(cores)   MEMORY(bytes)   
linkerd-destination-6d974dd4c7-vw7nw   destination     1m           25Mi            
linkerd-destination-6d974dd4c7-vw7nw   linkerd-proxy   1m           13Mi            
linkerd-destination-6d974dd4c7-vw7nw   policy          1m           18Mi            
linkerd-destination-6d974dd4c7-vw7nw   sp-validator    1m           19Mi
:; kubectl version --short
Client Version: v1.23.3
Server Version: v1.21.7+k3s1

Add Kubernetes scrape target to Prometheus instance that is NOT in Kubernetes

- job_name: kubernetes
  kubernetes_sd_configs:
  - role: node
    api_server: https://kubernetes-cluster-api.com
    tls_config:
      insecure_skip_verify: true
      bearer_token: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  bearer_token: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  scheme: https
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - separator: ;
    regex: __meta_kubernetes_node_label_(.+)
    replacement: $1
    action: labelmap
- job_name: 'kubelet-cadvisor'
  scheme: https

  kubernetes_sd_configs:
  - role: node
    api_server: https://api-server.example.com

    # TLS and auth settings to perform service discovery
    authorization:
      credentials_file: /kube/token  # the file with your service account token
    tls_config:
      ca_file: /kube/CA.crt  # the file with the CA certificate

  # The same as above but for actual scrape request.
  # We're going to send scrape requests back to the API-server
  # so the credentials are the same.
  bearer_token_file: /kube/token
  tls_config:
    ca_file: /kube/CA.crt

  relabel_configs:
  # This is just to drop this long __meta_kubernetes_node_label_ prefix
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)

  # By default Prometheus goes to /metrics endpoint.
  # This relabeling changes it to /api/v1/nodes/[kubernetes_io_hostname]/proxy/metrics/cadvisor
  - source_labels: [kubernetes_io_hostname]
    replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
    target_label: __metrics_path__

  # This relabeling defines that Prometheus should connect to the
  # API-server instead of the actual instance. Together with the relabeling
  # from above this will make the scrape request proxied to the node kubelet.
  - replacement: api-server.example.com
    target_label: __address__
❯ kubectl config view --raw
apiVersion: v1
clusters:
- cluster:                      # you need this ⤋ long value 
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJ...
    server: https://api-server.example.com
  name: default
...
echo LS0tLS1CRUdJTiBDRVJUSUZJ... | base64 -d > CA.crt

Accessing a private GKE cluster via Cloud VPN

resource "google_compute_network_peering_routes_config" "peer_kube02" {
  peering = google_container_cluster.gke_kube02.private_cluster_config[0].peering_name
  project = "infrastructure"
  network = "net-10-13-0-0-16"

  export_custom_routes = true
  import_custom_routes = false
}

Failed to install metrics-server on minikube

minikube addons enable metrics-server
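After enabling the addon, one way to confirm it is running (a sketch; the label selector matches the upstream manifests and may differ across minikube versions):

```shell
# The addon deploys metrics-server into kube-system
kubectl -n kube-system get pods -l k8s-app=metrics-server

# Metrics should become available shortly after the pod is Ready
kubectl top nodes
```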

Connection refused from pod to pod via service clusterIP

[Unit]
Description=Enable VPN for System
After=network.target
After=k3s.service

[Service]
Type=simple
ExecStart=/etc/openvpn/start-nordvpn-server.sh

[Install]
WantedBy=multi-user.target

Kubernetes monitoring metrics server doesn't start

      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: node-role.kubernetes.io/master
                operator: In
                values:
                  - ""
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Equal
          effect: NoSchedule
          value: ""
        - key: node.kubernetes.io/disk-pressure
          operator: Equal
          effect: NoSchedule
          value: ""

How can a k8s namespace admin use top?

apiVersion: v1
kind: ServiceAccount
metadata:
  name: test-account
  namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: monitoring
rules:
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: monitoring
subjects:
- kind: ServiceAccount
  name: test-account
  namespace: monitoring
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
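With the Role and RoleBinding in place, the grant can be checked by impersonating the service account (the account and namespace names match the manifest above):

```shell
# Verify the RBAC grant without actually running top
kubectl auth can-i list pods.metrics.k8s.io -n monitoring \
  --as=system:serviceaccount:monitoring:test-account

# Run top as that service account
kubectl top pod -n monitoring \
  --as=system:serviceaccount:monitoring:test-account
```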

k3s - Metrics server doesn't work for worker nodes

$ kubectl logs metrics-server-58b44df574-2n9dn -n kube-system
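The logs usually point at the cause. One failure mode often seen on k3s worker nodes (an assumption here, not confirmed by the question) is Kubelet certificate verification errors such as `x509: certificate signed by unknown authority`; a common but insecure workaround for test clusters is adding the `--kubelet-insecure-tls` flag to the metrics-server container args:

```yaml
# Fragment of the metrics-server Deployment in kube-system
containers:
- name: metrics-server
  args:
  - --kubelet-insecure-tls   # disables Kubelet cert verification; test clusters only
```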

Community Discussions

Trending Discussions on metrics-server
  • Getting Error: Cannot find module 'js-yaml'
  • Microk8s dashboard using nginx-ingress via http not working (Error: `no matches for kind "Ingress" in version "extensions/v1beta1"`)
  • Deleted kube-proxy
  • Kubernetes metrics-server not working with Linkerd
  • Add Kubernetes scrape target to Prometheus instance that is NOT in Kubernetes
  • Accessing a private GKE cluster via Cloud VPN
  • Getting Service Unavailable for service metrices command in EKS
  • Failed to install metrics-server on minikube
  • Connection refused from pod to pod via service clusterIP
  • How to remove kubernetes pods related files from file system?

QUESTION

Getting Error: Cannot find module 'js-yaml'

Asked 2022-Apr-11 at 07:49

I have installed js-yaml with this command

npm i @types/js-yaml

And my package.json looks like this

"dependencies": {
  "@types/js-yaml": "^4.0.5",
  "aws-cdk-lib": "2.20.0",
  "constructs": "^10.0.0",
  "source-map-support": "^0.5.16",
  "ts-sync-request": "^1.4.1"
}

And my code doesn't show any error in vscode

import * as yml from 'js-yaml';
...
const metricsServerManifestUrl = 'https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml';
const manifest = yml.loadAll(new SyncRequestClient().get(metricsServerManifestUrl));
cluster.addManifest('metrics-server', manifest);

But I'm getting this error when I try to run application

Error: Cannot find module 'js-yaml'

How can I fix this?

ANSWER

Answered 2022-Apr-11 at 07:49

The @types/js-yaml package contains only the type definitions used by the TypeScript compiler to check your code; it does not contain the actual implementation required at runtime.

You should also install the runtime package: npm install js-yaml --save.
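In other words, keep the type definitions as a compile-time aid and add the runtime package alongside them:

```shell
# Runtime implementation, required when the code actually executes
npm install js-yaml --save

# Type definitions, only needed by the TypeScript compiler
npm install @types/js-yaml --save-dev
```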

Source https://stackoverflow.com/questions/71823045

Community Discussions, Code Snippets contain sources that include Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install metrics-server

Metrics Server can be installed either directly from the YAML manifest or via the official Helm chart. To install the latest Metrics Server release from the components.yaml manifest, run the kubectl apply command shown under Installation above. Installation instructions for previous releases can be found in Metrics Server releases.

Support

Learn how to engage with the Kubernetes community on the community page.

  • © 2022 Open Weaver Inc.