
grafana | composable observability and data visualization platform | Dashboard library

by grafana | TypeScript | Version: v8.5.0 | License: AGPL-3.0


kandi X-RAY | grafana Summary

grafana is a TypeScript library typically used in Analytics, Dashboard, Prometheus, and Grafana applications. grafana has no bugs, it has a Strong Copyleft License, and it has medium support. However, grafana has 7 vulnerabilities. You can download it from GitHub.
The open-source platform for monitoring and observability.

Support

  • grafana has a moderately active ecosystem.
  • It has 48,159 stars and 9,541 forks. There are 1,265 watchers for this library.
  • There were 10 major releases in the last 6 months.
  • There are 2,058 open issues and 21,851 closed ones; on average, issues are closed in 41 days. There are 233 open pull requests and 0 closed requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of grafana is v8.5.0.

Quality

  • grafana has 0 bugs and 0 code smells.

Security

  • grafana has 7 vulnerability issues reported (0 critical, 2 high, 5 medium, 0 low).
  • grafana code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • grafana is licensed under the AGPL-3.0 License. This license is Strong Copyleft.
  • Strong Copyleft licenses enforce sharing, and you can use them when creating open source projects.

Reuse

  • grafana releases are available to install and integrate.
  • Installation instructions are available. Examples and code snippets are not available.
  • It has 236,850 lines of code, 7,462 functions, and 5,741 files.
  • It has medium code complexity. Code complexity directly impacts the maintainability of the code.

grafana Key Features

Visualize: Fast and flexible client side graphs with a multitude of options. Panel plugins offer many different ways to visualize metrics and logs.

Dynamic Dashboards: Create dynamic & reusable dashboards with template variables that appear as dropdowns at the top of the dashboard.

Explore Metrics: Explore your data through ad-hoc queries and dynamic drilldown. Split view and compare different time ranges, queries and data sources side by side.

Explore Logs: Experience the magic of switching from metrics to logs with preserved label filters. Quickly search through all your logs, or stream them live.

Alerting: Visually define alert rules for your most important metrics. Grafana will continuously evaluate them and send notifications to systems like Slack, PagerDuty, VictorOps, and OpsGenie.

Mixed Data Sources: Mix different data sources in the same graph! You can specify a data source on a per-query basis. This works even for custom data sources.

Remove a part of a log in Loki

promtail:
  enabled: true
  pipelineStages:
  - docker: {}
  - match:
      selector: '{namespace="ingress"}'
      stages:
      - replace:
          expression: "(stdout F)"
          replace: ""
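The replace stage above substitutes whatever the capture group in expression matches with an empty string. A minimal Python sketch of that substitution, using a hypothetical CRI-style ingress log line (the line content is illustrative, not from the source):

```python
import re

def apply_replace_stage(line: str, expression: str, replacement: str = "") -> str:
    """Mimic a promtail 'replace' pipeline stage: every match of
    `expression` in the log line is swapped for `replacement`."""
    return re.sub(expression, replacement, line)

# Hypothetical CRI-formatted line; "stdout F" is the part being stripped.
line = "2022-01-01T00:00:00Z stdout F GET /healthz 200"
print(apply_replace_stage(line, r"(stdout F)"))
```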

Enable use of images from the local library on Kubernetes

containers:
  - name: test-container
    image: testImage:latest
    imagePullPolicy: Never
-----------------------
# Start minikube and set docker env
minikube start
eval $(minikube docker-env)

# Build image
docker build -t foo:1.0 .

# Run in minikube
kubectl run hello-foo --image=foo:1.0 --image-pull-policy=Never
-----------------------
    image: wm/hello-openfaas2:0.1
    imagePullPolicy: Always

Understanding the CPU Busy Prometheus query

count by (cpu) (node_cpu_seconds_total{instance="foo:9100"})

# result:
{cpu="0"} 8
{cpu="1"} 8
(
  NUM_CPU
  -
  avg(
    sum by(mode) (
      rate(node_cpu_seconds_total{mode="idle",instance="foo:9100"}[1m])
    )
  )
  * 100
)
/ NUM_CPU
    sum by(mode) (
      rate(node_cpu_seconds_total{mode="idle",instance="foo:9100"}[1m])
    )
# with by (mode)
{mode="idle"} 0.99

# without
{} 0.99
(NUM_CPU - IDLE_TIME_TOTAL * 100) / NUM_CPU
100 * (1 - avg(rate(node_cpu_seconds_total{mode="idle", instance="foo:9100"}[1m])))
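The two expressions above are algebraically the same: since the average idle fraction equals the total idle rate divided by the CPU count, ((NUM_CPU - IDLE_TIME_TOTAL) * 100) / NUM_CPU reduces to 100 * (1 - avg idle). This can be checked with hypothetical numbers (0.99 idle per CPU on 8 CPUs, mirroring the example results above):

```python
# Hypothetical per-CPU idle rates (seconds of idle per second), as produced by
# rate(node_cpu_seconds_total{mode="idle"}[1m]) on an 8-core host.
idle_rates = [0.99] * 8
num_cpu = len(idle_rates)

# sum by (mode) collapses the per-CPU series into one total idle rate.
idle_total = sum(idle_rates)

# Dashboard-style form: ((NUM_CPU - IDLE_TIME_TOTAL) * 100) / NUM_CPU
busy_dashboard = (num_cpu - idle_total) * 100 / num_cpu

# Simpler equivalent: 100 * (1 - average idle fraction)
busy_simple = 100 * (1 - idle_total / num_cpu)

print(round(busy_dashboard, 6), round(busy_simple, 6))
```

Both forms give a CPU-busy percentage of about 1% for these numbers.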

Thanos-Query/Query-Frontend does not show any metrics

$ helm pull prometheus-community/kube-prometheus-stack --untar
# Enable thanosService
prometheus:
  thanosService:
    enabled: true # by default it's set to false

# Add spec for thanos sidecar
prometheus:
  prometheusSpec:
    thanos:
      image: "quay.io/thanos/thanos:v0.24.0"
      version: "v0.24.0"
## This section is experimental, it may change significantly without deprecation notice in any release.
## This is experimental and may change significantly without backward compatibility in any release.
## ref: https://github.com/prometheus-operator/prometheus-operator/blob/main/Documentation/api.md#thanosspec
$ helm install prometheus . -n prometheus --create-namespace # installed in prometheus namespace
$ kubectl get pods -n prometheus | grep prometheus-0
prometheus-prometheus-kube-prometheus-prometheus-0       3/3     Running   0          67s
$ helm pull bitnami/thanos --untar
query:
  dnsDiscovery:
    enabled: true
    sidecarsService: "prometheus-kube-prometheus-thanos-discovery" # service which was created before
    sidecarsNamespace: "prometheus" # namespace where prometheus is deployed
$ helm install thanos . -n thanos --create-namespace
$ kubectl logs thanos-query-xxxxxxxxx-yyyyy -n thanos
level=info ts=2022-02-24T15:32:41.418475238Z caller=endpointset.go:349 component=endpointset msg="adding new sidecar with [storeAPI rulesAPI exemplarsAPI targetsAPI MetricMetadataAPI]" address=10.44.1.213:10901 extLset="{prometheus=\"prometheus/prometheus-kube-prometheus-prometheus\", prometheus_replica=\"prometheus-prometheus-kube-prometheus-prometheus-0\"}"

Add Kubernetes scrape target to Prometheus instance that is NOT in Kubernetes

- job_name: kubernetes
  kubernetes_sd_configs:
  - role: node
    api_server: https://kubernetes-cluster-api.com
    tls_config:
      insecure_skip_verify: true
      bearer_token: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  bearer_token: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
  scheme: https
  tls_config:
    insecure_skip_verify: true
  relabel_configs:
  - separator: ;
    regex: __meta_kubernetes_node_label_(.+)
    replacement: $1
    action: labelmap
-----------------------
- job_name: 'kubelet-cadvisor'
  scheme: https

  kubernetes_sd_configs:
  - role: node
    api_server: https://api-server.example.com

    # TLS and auth settings to perform service discovery
    authorization:
      credentials_file: /kube/token  # the file with your service account token
    tls_config:
      ca_file: /kube/CA.crt  # the file with the CA certificate

  # The same as above but for actual scrape request.
  # We're going to send scrape requests back to the API-server
  # so the credentials are the same.
  bearer_token_file: /kube/token
  tls_config:
    ca_file: /kube/CA.crt

  relabel_configs:
  # This is just to drop this long __meta_kubernetes_node_label_ prefix
  - action: labelmap
    regex: __meta_kubernetes_node_label_(.+)

  # By default Prometheus goes to /metrics endpoint.
  # This relabeling changes it to /api/v1/nodes/[kubernetes_io_hostname]/proxy/metrics/cadvisor
  - source_labels: [kubernetes_io_hostname]
    replacement: /api/v1/nodes/$1/proxy/metrics/cadvisor
    target_label: __metrics_path__

  # This relabeling defines that Prometheus should connect to the
  # API-server instead of the actual instance. Together with the relabeling
  # from above this will make the scrape request proxied to the node kubelet.
  - replacement: api-server.example.com
    target_label: __address__
❯ kubectl config view --raw
apiVersion: v1
clusters:
- cluster:                      # you need this ⤋ long value 
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJ...
    server: https://api-server.example.com
  name: default
...
echo LS0tLS1CRUdJTiBDRVJUSUZJ... | base64 -d > CA.crt
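The three relabel_configs steps in the scrape job above can be sketched in Python. This is a simplified approximation of Prometheus relabeling, not its full semantics; the label names and the api-server.example.com address come from the config above:

```python
import re

def relabel(labels: dict) -> dict:
    """Approximate the three relabel_configs steps from the scrape job:
    labelmap, metrics-path rewrite, and address rewrite, applied in order."""
    out = dict(labels)

    # 1) labelmap: copy __meta_kubernetes_node_label_<x> to plain <x>.
    for name, value in labels.items():
        m = re.fullmatch(r"__meta_kubernetes_node_label_(.+)", name)
        if m:
            out[m.group(1)] = value

    # 2) Point the scrape path at the API-server's kubelet proxy endpoint
    #    for this node (uses the label produced by step 1).
    host = out.get("kubernetes_io_hostname", "")
    out["__metrics_path__"] = f"/api/v1/nodes/{host}/proxy/metrics/cadvisor"

    # 3) Send the scrape request to the API-server instead of the node.
    out["__address__"] = "api-server.example.com"
    return out

labels = {"__meta_kubernetes_node_label_kubernetes_io_hostname": "node-1"}
print(relabel(labels)["__metrics_path__"])
```

Because the steps run in order, the labelmap in step 1 makes kubernetes_io_hostname available before step 2 templates it into the path.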

Trigger Beam ParDo at window closing only

// We first specify to never emit any panes
 .triggering(Never.ever())
 
 // We then specify to fire always when closing the window. This will emit a
 // single final pane at the end of allowedLateness
 .withAllowedLateness(allowedLateness, Window.ClosingBehavior.FIRE_ALWAYS)
 .discardingFiredPanes()

Where is Istio filtering trace headers like x-b3-*?


Field Name      Request/Response Type    Description
x-request-id    request                  The x-request-id header is used by Envoy to uniquely identify a request, as well as to perform stable access logging and tracing.
x-b3-traceid    request                  The x-b3-traceid HTTP header is used by the Zipkin tracer in Envoy. The TraceId is 64 bits long and indicates the overall ID of the trace. Every span in a trace shares this ID.
x-b3-spanid     request                  The x-b3-spanid HTTP header is used by the Zipkin tracer in Envoy. The SpanId is 64 bits long and indicates the position of the current operation in the trace tree.
x-b3-sampled    request                  The x-b3-sampled HTTP header is used by the Zipkin tracer in Envoy. When the Sampled flag is either not specified or set to 1, the span will be reported to the tracing system.
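Per the table, the B3 trace and span IDs are 64-bit values transmitted as lowercase hex, and the sampling flag is a separate header. A minimal, hypothetical Python sketch of minting such headers (not Envoy's implementation):

```python
import re
import secrets

def new_b3_headers(sampled: bool = True) -> dict:
    """Mint Zipkin B3 headers as described in the table: 64-bit
    lowercase-hex trace and span IDs, plus the sampling flag."""
    return {
        "x-b3-traceid": secrets.token_hex(8),  # 64 bits -> 16 hex chars
        "x-b3-spanid": secrets.token_hex(8),
        "x-b3-sampled": "1" if sampled else "0",
    }

headers = new_b3_headers()
assert re.fullmatch(r"[0-9a-f]{16}", headers["x-b3-traceid"])
```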

Alerts in K8s for Pod failing

  - alert: KubernetesPodNotHealthy
    expr: min_over_time(sum by (namespace, pod) (kube_pod_status_phase{phase=~"Pending|Unknown|Failed"})[15m:1m]) > 0
    for: 0m
    labels:
      severity: critical
    annotations:
      summary: Kubernetes Pod not healthy (instance {{ $labels.instance }})
      description: "Pod has been in a non-ready state for longer than 15 minutes.\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
  - alert: KubernetesPodCrashLooping
    expr: increase(kube_pod_container_status_restarts_total[1m]) > 3
    for: 2m
    labels:
      severity: warning
    annotations:
      summary: Kubernetes pod crash looping (instance {{ $labels.instance }})
      description: "Pod {{ $labels.pod }} is crash looping\n  VALUE = {{ $value }}\n  LABELS = {{ $labels }}"
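
The `KubernetesPodNotHealthy` expression uses a PromQL subquery: `...[15m:1m]` evaluates the inner sum at 1-minute steps over the last 15 minutes, and `min_over_time` takes the minimum of those samples, so the alert only fires if the pod was in a bad phase at every step. A rough illustration of that semantics (plain Python, illustrative sample values, not Prometheus code):

```python
def min_over_time(samples):
    """PromQL min_over_time over a subquery: minimum of the sampled values."""
    return min(samples)

# One sample per 1m step over a 15m window: the number of pods in a
# Pending/Unknown/Failed phase at each evaluation step.
flapping = [1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1, 0, 1]  # recovers between steps
stuck    = [1] * 15                                        # unhealthy at every step

# Alert condition: min_over_time(...[15m:1m]) > 0
print(min_over_time(flapping) > 0)  # False -> no alert, despite repeated failures
print(min_over_time(stuck) > 0)     # True  -> alert fires
```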

Loki display log message and extra fields separately

{appname=~".+"} |= "HornetQ"
  | json
  | line_format "{{ .message }}"
{appname=~".+"} |= "HornetQ"
  | pattern `<_entry>` 
  | json
  | line_format "{{ .message }}\n{{ range $k, $v := (fromJson ._entry)}}{{if ne $k \"message\"}}{{$k}}: {{$v}} {{ end }}{{ end }}"
{appname=~".+"}
  | pattern `<_entry>` 
  | json
  | line_format "\033[1;37m{{ .message }}\033[0m\n{{ range $k, $v := (fromJson ._entry)}}{{if ne $k \"message\"}}\033[1;30m{{$k}}: \033[0m\033[2;37m{{$v}}\033[0m {{ end }}{{ end }}"
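
The third query renders the raw message in bold white and the remaining JSON fields dimmed, using ANSI escape codes inside `line_format`. The Go-template logic (print `.message` first, then every other key/value pair) can be mimicked in plain Python to see what each log line becomes (a sketch; the field names in the sample entry are illustrative):

```python
import json

def format_line(raw_entry: str) -> str:
    """Mimic the LogQL line_format: message first, then the other fields as k: v."""
    entry = json.loads(raw_entry)
    message = entry.get("message", "")
    extras = " ".join(f"{k}: {v}" for k, v in entry.items() if k != "message")
    return f"{message}\n{extras}" if extras else message

line = '{"message": "HornetQ started", "level": "INFO", "logger": "org.hornetq"}'
print(format_line(line))
# HornetQ started
# level: INFO logger: org.hornetq
```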

Changing Prometheus job label in scraper for cAdvisor breaks Grafana dashboards

- job_name: 'kubernetes-cadvisor'
  honor_labels: true
  honor_timestamps: true
  scrape_interval: 30s
  scrape_timeout: 10s
  metrics_path: /metrics/cadvisor
  scheme: https
  authorization:
    type: Bearer
    credentials_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
    insecure_skip_verify: true
  follow_redirects: true
  relabel_configs:
  - source_labels: [job]
    separator: ;
    regex: (.*)
    target_label: __tmp_prometheus_job_name
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_label_app_kubernetes_io_name]
    separator: ;
    regex: kubelet
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_service_label_k8s_app]
    separator: ;
    regex: kubelet
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_port_name]
    separator: ;
    regex: https-metrics
    replacement: $1
    action: keep
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Node;(.*)
    target_label: node
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_endpoint_address_target_kind, __meta_kubernetes_endpoint_address_target_name]
    separator: ;
    regex: Pod;(.*)
    target_label: pod
    replacement: ${1}
    action: replace
  - source_labels: [__meta_kubernetes_namespace]
    separator: ;
    regex: (.*)
    target_label: namespace
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_service_name]
    separator: ;
    regex: (.*)
    target_label: service
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_name]
    separator: ;
    regex: (.*)
    target_label: pod
    replacement: $1
    action: replace
  - source_labels: [__meta_kubernetes_pod_container_name]
    separator: ;
    regex: (.*)
    target_label: container
    replacement: $1
    action: replace
  - separator: ;
    regex: (.*)
    target_label: endpoint
    replacement: https-metrics
    action: replace
  - source_labels: [__metrics_path__]
    separator: ;
    regex: (.*)
    target_label: metrics_path
    replacement: $1
    action: replace
  - source_labels: [__address__]
    separator: ;
    regex: (.*)
    modulus: 1
    target_label: __tmp_hash
    replacement: $1
    action: hashmod
  - source_labels: [__tmp_hash]
    separator: ;
    regex: "0"
    replacement: $1
    action: keep
  kubernetes_sd_configs:
  - role: endpoints
    kubeconfig_file: ""
    follow_redirects: true
    namespaces:
      names:
      - kube-system
$ kubectl get prometheusrules prom-1-kube-prometheus-sta-k8s.rules -o yaml
...
  - name: k8s.rules
    rules:
    - expr: |-
        sum by (cluster, namespace, pod, container) (
          irate(container_cpu_usage_seconds_total{job="kubernetes-cadvisor", metrics_path="/metrics/cadvisor", image!=""}[5m])
        ) * on (cluster, namespace, pod) group_left(node) topk by (cluster, namespace, pod) (
          1, max by(cluster, namespace, pod, node) (kube_pod_info{node!=""})
        )
      record: node_namespace_pod_container:container_cpu_usage_seconds_total:sum_irate
...
Here we have a few more rules in this group to modify in the same way...
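
Each `relabel_configs` entry joins the `source_labels` values with `separator`, matches the result against the (anchored) `regex`, and then either keeps or drops the target, or writes the expanded `replacement` into `target_label`. A minimal sketch of the `replace` and `keep` actions, assuming only simple `$1`/`${1}` capture references (plain Python; the shortened label names in the example are illustrative):

```python
import re

def relabel(labels, rule):
    """Apply one Prometheus-style relabel rule (only the 'replace' and
    'keep' actions are modeled). Returns None when a 'keep' rule drops
    the target."""
    sep = rule.get("separator", ";")
    value = sep.join(labels.get(name, "") for name in rule.get("source_labels", []))
    m = re.fullmatch(rule.get("regex", "(.*)"), value)  # Prometheus anchors regexes
    if rule.get("action", "replace") == "keep":
        return labels if m else None
    if m:
        out = dict(labels)
        # Translate Prometheus's $1 / ${1} capture references to Python's \1 / \g<1>.
        repl = (rule.get("replacement", "$1")
                .replace("${", "\\g<").replace("}", ">").replace("$", "\\"))
        out[rule["target_label"]] = m.expand(repl)
        return out
    return labels

# The Node;(.*) rule from the scrape config above, with shortened label names:
node_rule = {"source_labels": ["kind", "name"], "regex": "Node;(.*)",
             "target_label": "node", "replacement": "${1}", "action": "replace"}
print(relabel({"kind": "Node", "name": "worker-1"}, node_rule))
# {'kind': 'Node', 'name': 'worker-1', 'node': 'worker-1'}
```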

Community Discussions

Trending Discussions on grafana
  • Remove a part of a log in Loki
  • How can you integrate grafana with Google Cloud SQL
  • Enable use of images from the local library on Kubernetes
  • Understanding the CPU Busy Prometheus query
  • Thanos-Query/Query-Frontend does not show any metrics
  • Add Kubernetes scrape target to Prometheus instance that is NOT in Kubernetes
  • Successfully queries the Azure Monitor service, but "workspace not found" when using Azure Marketplace Grafana
  • Grafana - Is it possible to use variables in Loki-based dashboard query?
  • PostgreSQL Default Result Limit
  • Trigger Beam ParDo at window closing only

QUESTION

Remove a part of a log in Loki

Asked 2022-Mar-21 at 10:18

I have installed Grafana, Loki, Promtail and Prometheus with the grafana/loki-stack.

I also have Nginx set up with the Nginx helm chart.

Promtail is ingesting logs into Loki fine, but I want to customise the way my logs look. Specifically, I want to remove a part of the log because it creates errors when trying to parse it with either logfmt or json (Error: LogfmtParserErr and Error: JsonParserErr, respectively).

The logs look like this:

2022-02-21T13:41:53.155640208Z stdout F timestamp=2022-02-21T13:41:53+00:00 http_request_method=POST http_response_status_code=200 http_response_time=0.001 http_version=HTTP/2.0 http_request_body_bytes=0 http_request_bytes=63

and I want to remove the part where it says stdout F so the log will look like this:

2022-02-21T13:41:53.155640208Z timestamp=2022-02-21T13:41:53+00:00 http_request_method=POST http_response_status_code=200 http_response_time=0.001 http_version=HTTP/2.0 http_request_body_bytes=0 http_request_bytes=63

I have figured out that on the ingestion side it could be something with Promtail, but is it also possible to make a LogQL query in Loki to just replace that string? And how would one set up the Promtail configuration for the desired behaviour?

ANSWER

Answered 2022-Feb-21 at 17:57

Promtail should be configured to replace the string with the replace stage.

Here is a sample config that removes the stdout F part of the log for all logs coming from the namespace ingress.

promtail:
  enabled: true
  pipelineStages:
  - docker: {}
  - match:
      selector: '{namespace="ingress"}'
      stages:
      - replace:
          expression: "(stdout F)"
          replace: ""

Specifically this example works for the grafana/loki-stack chart.
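
The `replace` stage matches a regex against each log line and substitutes the capture-group match with the `replace` string. Roughly equivalent in plain Python (note that the pattern here includes the trailing space so the result matches the desired output exactly; the config's `(stdout F)` on its own would leave a double space behind):

```python
import re

LOG = ("2022-02-21T13:41:53.155640208Z stdout F "
       "timestamp=2022-02-21T13:41:53+00:00 http_request_method=POST")

# What Promtail's replace stage does: substitute the matched text with "".
cleaned = re.sub(r"stdout F ", "", LOG)
print(cleaned)
# 2022-02-21T13:41:53.155640208Z timestamp=2022-02-21T13:41:53+00:00 http_request_method=POST
```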

Source https://stackoverflow.com/questions/71210935

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install grafana

Unsure if Grafana is for you? Watch Grafana in action on play.grafana.org!
Get Grafana
Installation guides

Support

The Grafana documentation is available at grafana.com/docs.

  • © 2022 Open Weaver Inc.