VictoriaMetrics | effective monitoring solution and time series database | Monitoring library

by VictoriaMetrics | Go | Version: v1.76.1 | License: Apache-2.0


kandi X-RAY | VictoriaMetrics Summary

VictoriaMetrics is a Go library typically used in Performance Management and Monitoring applications, often alongside Prometheus and Grafana. It has no reported bugs or vulnerabilities, carries a permissive license, and has medium support. You can download it from GitHub.
VictoriaMetrics is a fast, cost-effective and scalable monitoring solution and time series database. It is available as binary releases, Docker images, Snap packages and source code. Just download VictoriaMetrics and follow these instructions. Then read the Prometheus setup and Grafana setup docs.

Support

  • VictoriaMetrics has a medium active ecosystem.
  • It has 6124 star(s) with 569 fork(s). There are 110 watchers for this library.
  • There were 8 major release(s) in the last 12 months.
  • There are 454 open issues and 1002 have been closed. On average, issues are closed in 56 days. There are 12 open pull requests and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of VictoriaMetrics is v1.76.1.

Quality

  • VictoriaMetrics has 0 bugs and 0 code smells.

Security

  • VictoriaMetrics has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • VictoriaMetrics code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • VictoriaMetrics is licensed under the Apache-2.0 License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • VictoriaMetrics releases are available to install and integrate.
  • Installation instructions, examples and code snippets are available.
  • It has 110498 lines of code, 3962 functions and 632 files.
  • It has high code complexity. Code complexity directly impacts maintainability of the code.

VictoriaMetrics Key Features

It can be used as long-term storage for Prometheus. See these docs for details.

It can be used as a drop-in replacement for Prometheus in Grafana, because it supports the Prometheus querying API.

It can be used as a drop-in replacement for Graphite in Grafana, because it supports the Graphite API.

It features easy setup and operation: VictoriaMetrics consists of a single small executable without external dependencies. All the configuration is done via explicit command-line flags with reasonable defaults. All the data is stored in a single directory pointed to by the -storageDataPath command-line flag. Easy and fast backups from instant snapshots to S3 or GCS can be done with the vmbackup / vmrestore tools. See this article for more details.
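
For illustration only, a minimal single-node launch might look like the following; the binary path, data directory and retention period are assumptions, and both flags are documented in the flag list below:

/path/to/victoria-metrics-prod -storageDataPath=/var/lib/victoria-metrics-data -retentionPeriod=12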

It implements a PromQL-based query language, MetricsQL, which provides improved functionality on top of PromQL.
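
As one illustrative example of that extra functionality (a query of our own, not taken from the docs), MetricsQL allows omitting the lookbehind window in rollup functions, which plain PromQL rejects:

rate(node_network_receive_bytes_total)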

It provides global query view. Multiple Prometheus instances or any other data sources may ingest data into VictoriaMetrics. Later this data may be queried via a single query.

It provides high performance and good vertical and horizontal scalability for both data ingestion and data querying. It outperforms InfluxDB and TimescaleDB by up to 20x.

It uses 10x less RAM than InfluxDB and up to 7x less RAM than Prometheus, Thanos or Cortex when dealing with millions of unique time series (aka high cardinality).

It is optimized for time series with high churn rate.

It provides high data compression: up to 70x more data points may be crammed into limited storage compared to TimescaleDB, and up to 7x less storage space is required compared to Prometheus, Thanos or Cortex.

It is optimized for storage with high-latency IO and low IOPS (HDD and network storage in AWS, Google Cloud, Microsoft Azure, etc). See disk IO graphs from these benchmarks.

A single-node VictoriaMetrics may substitute moderately sized clusters built with competing solutions such as Thanos, M3DB, Cortex, InfluxDB or TimescaleDB. See vertical scalability benchmarks, comparing Thanos to VictoriaMetrics cluster and Remote Write Storage Wars talk from PromCon 2019.

It protects the storage from data corruption on unclean shutdown (e.g. OOM, hardware reset or kill -9) thanks to the storage architecture.

It supports metrics scraping, ingestion and backfilling via the following protocols: metrics scraping from Prometheus exporters; Prometheus remote write API; Prometheus exposition format; InfluxDB line protocol over HTTP, TCP and UDP; Graphite plaintext protocol with tags; OpenTSDB put messages; HTTP OpenTSDB /api/put requests; JSON line format; arbitrary CSV data; and native binary format.
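
For instance, InfluxDB line protocol data can be sent over HTTP with a plain curl call against the /write endpoint mentioned in the flag list below; the measurement, tag and field names here are made up for illustration:

curl -d 'measurement,tag1=value1 field1=123' -X POST 'http://localhost:8428/write'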

It supports metrics' relabeling. See these docs for details.

It can deal with high cardinality and high churn rate issues via the series limiter, as sketched below.
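
A minimal sketch of enabling the series limiter via the -storage.maxHourlySeries and -storage.maxDailySeries flags documented below; the limit values are arbitrary examples:

/path/to/victoria-metrics-prod -storage.maxHourlySeries=100000 -storage.maxDailySeries=1000000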

It works well with large amounts of time series data from APM, Kubernetes, IoT sensors, connected cars, industrial telemetry, financial data and various enterprise workloads.

It has an open source cluster version.

Configuration with snap package

echo 'FLAGS="-selfScrapeInterval=10s -search.logSlowQueryDuration=20s"' > $SNAP_DATA/var/snap/victoriametrics/current/extra_flags
snap restart victoriametrics

Prometheus setup

remote_write:
  - url: http://<victoriametrics-addr>:8428/api/v1/write

Grafana setup

http://<victoriametrics-addr>:8428

How to send data from DataDog agent

echo '
{
  "series": [
    {
      "host": "test.example.com",
      "interval": 20,
      "metric": "system.load.1",
      "points": [[
        0,
        0.5
      ]],
      "tags": [
        "environment:test"
      ],
      "type": "rate"
    }
  ]
}
' | curl -X POST --data-binary @- http://localhost:8428/datadog/api/v1/series

How to send data from InfluxDB-compatible agents such as Telegraf

[[outputs.influxdb]]
  urls = ["http://<victoriametrics-addr>:8428"]

How to send data from Graphite-compatible agents such as StatsD

/path/to/victoria-metrics-prod -graphiteListenAddr=:2003

Sending OpenTSDB data via telnet put protocol

/path/to/victoria-metrics-prod -opentsdbListenAddr=:4242

Sending OpenTSDB data via HTTP

/path/to/victoria-metrics-prod -opentsdbHTTPListenAddr=:4242

Building docker images

ROOT_IMAGE=scratch make package-victoria-metrics

How to work with snapshots

{"status":"ok","snapshot":"<snapshot-name>"}

How to export data in JSON line format

{"metric":{"__name__":"up","job":"node_exporter","instance":"localhost:9100"},"values":[0,0,0],"timestamps":[1549891472010,1549891487724,1549891503438]}
{"metric":{"__name__":"up","job":"prometheus","instance":"localhost:9090"},"values":[1,1,1],"timestamps":[1549891461511,1549891476511,1549891491511]}

How to export data in native format

# count unique timeseries in database
wget -O- -q 'http://your_victoriametrics_instance:8428/api/v1/series/count' | jq '.data[0]'

# relaunch VictoriaMetrics with -search.maxUniqueTimeseries set to a value greater than the count from the previous command
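
The native export itself uses the /api/v1/export/native endpoint, as also shown in the import section below; a minimal example with an illustrative match expression:

curl http://localhost:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin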

How to import data in JSON line format

# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export -d 'match={__name__!=""}' > exported_data.jsonl

# Import the data to <destination-victoriametrics>:
curl -X POST http://destination-victoriametrics:8428/api/v1/import -T exported_data.jsonl

How to import data in native format

# Export the data from <source-victoriametrics>:
curl http://source-victoriametrics:8428/api/v1/export/native -d 'match={__name__!=""}' > exported_data.bin

# Import the data to <destination-victoriametrics>:
curl -X POST http://destination-victoriametrics:8428/api/v1/import/native -T exported_data.bin

How to import CSV data

<column_pos>:<type>:<context>
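
For example, a CSV line such as GOOG,1.23,4.56,NYSE could be imported with the following request, where column 2 becomes the ask metric, column 3 the bid metric and column 1 the ticker label; the data values are illustrative:

curl -d "GOOG,1.23,4.56,NYSE" 'http://localhost:8428/api/v1/import/csv?format=2:metric:ask,3:metric:bid,1:label:ticker'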

How to import data in Prometheus exposition format

curl -d 'foo{bar="baz"} 123' -X POST 'http://localhost:8428/api/v1/import/prometheus'

Relabeling

# Add {cluster="dev"} label.
- target_label: cluster
  replacement: dev

# Drop the metric (or scrape target) with `{__meta_kubernetes_pod_container_init="true"}` label.
- action: drop
  source_labels: [__meta_kubernetes_pod_container_init]
  regex: true

High availability

/path/to/vmagent -remoteWrite.url=http://<victoriametrics-addr-1>:8428/api/v1/write -remoteWrite.url=http://<victoriametrics-addr-2>:8428/api/v1/write

Tuning

mkfs.ext4 ... -O 64bit,huge_file,extent -T huge

Profiling

curl http://0.0.0.0:8428/debug/pprof/heap > mem.pprof
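
A CPU profile can be collected the same way through the standard Go pprof endpoint; profiling runs for roughly 30 seconds by default:

curl http://0.0.0.0:8428/debug/pprof/profile > cpu.pprof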

List of command-line flags

  -bigMergeConcurrency int
    	The maximum number of CPU cores to use for big merges. Default value is used if set to 0
  -configAuthKey string
    	Authorization key for accessing /config page. It must be passed via authKey query arg
  -csvTrimTimestamp duration
    	Trim timestamps when importing csv data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 1s) may be used for reducing disk space usage for timestamp data (default 1ms)
  -datadog.maxInsertRequestSize size
    	The maximum size in bytes of a single DataDog POST request to /api/v1/series
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 67108864)
  -dedup.minScrapeInterval duration
    	Leave only the first sample in every time series per each discrete interval equal to -dedup.minScrapeInterval > 0. See https://docs.victoriametrics.com/#deduplication and https://docs.victoriametrics.com/#downsampling
  -deleteAuthKey string
    	authKey for metrics' deletion via /api/v1/admin/tsdb/delete_series and /tags/delSeries
  -denyQueriesOutsideRetention
    	Whether to deny queries outside of the configured -retentionPeriod. When set, then /api/v1/query_range would return '503 Service Unavailable' error for queries with 'from' value outside -retentionPeriod. This may be useful when multiple data sources with distinct retentions are hidden behind query-tee
  -downsampling.period array
    	Comma-separated downsampling periods in the format 'offset:period'. For example, '30d:10m' instructs to leave a single sample per 10 minutes for samples older than 30 days. See https://docs.victoriametrics.com/#downsampling for details
    	Supports an array of values separated by comma or specified via multiple flags.
  -dryRun
    	Whether to check only -promscrape.config and then exit. Unknown config entries aren't allowed in -promscrape.config by default. This can be changed with -promscrape.config.strictParse=false command-line flag
  -enableTCP6
    	Whether to enable IPv6 for listening and dialing. By default only IPv4 TCP and UDP is used
  -envflag.enable
    	Whether to enable reading flags from environment variables additionally to command line. Command line flag values have priority over values from environment vars. Flags are read only from command line if this flag isn't set. See https://docs.victoriametrics.com/#environment-variables for more details
  -envflag.prefix string
    	Prefix for environment variables if -envflag.enable is set
  -eula
    	By specifying this flag, you confirm that you have an enterprise license and accept the EULA https://victoriametrics.com/assets/VM_EULA.pdf
  -finalMergeDelay duration
    	The delay before starting final merge for per-month partition after no new data is ingested into it. Final merge may require additional disk IO and CPU resources. Final merge may increase query speed and reduce disk space usage in some cases. Zero value disables final merge
  -forceFlushAuthKey string
    	authKey, which must be passed in query string to /internal/force_flush pages
  -forceMergeAuthKey string
    	authKey, which must be passed in query string to /internal/force_merge pages
  -fs.disableMmap
    	Whether to use pread() instead of mmap() for reading data files. By default mmap() is used for 64-bit arches and pread() is used for 32-bit arches, since they cannot read data files bigger than 2^32 bytes in memory. mmap() is usually faster for reading small data chunks than pread()
  -graphiteListenAddr string
    	TCP and UDP address to listen for Graphite plaintext data. Usually :2003 must be set. Doesn't work if empty
  -graphiteTrimTimestamp duration
    	Trim timestamps for Graphite data to this duration. Minimum practical duration is 1s. Higher duration (i.e. 1m) may be used for reducing disk space usage for timestamp data (default 1s)
  -http.connTimeout duration
    	Incoming http connections are closed after the configured timeout. This may help to spread the incoming load among a cluster of services behind a load balancer. Please note that the real timeout may be bigger by up to 10% as a protection against the thundering herd problem (default 2m0s)
  -http.disableResponseCompression
    	Disable compression of HTTP responses to save CPU resources. By default compression is enabled to save network bandwidth
  -http.idleConnTimeout duration
    	Timeout for incoming idle http connections (default 1m0s)
  -http.maxGracefulShutdownDuration duration
    	The maximum duration for a graceful shutdown of the HTTP server. A highly loaded server may require increased value for a graceful shutdown (default 7s)
  -http.pathPrefix string
    	An optional prefix to add to all the paths handled by http server. For example, if '-http.pathPrefix=/foo/bar' is set, then all the http requests will be handled on '/foo/bar/*' paths. This may be useful for proxied requests. See https://www.robustperception.io/using-external-urls-and-proxies-with-prometheus
  -http.shutdownDelay duration
    	Optional delay before http server shutdown. During this delay, the server returns non-OK responses from /health page, so load balancers can route new requests to other servers
  -httpAuth.password string
    	Password for HTTP Basic Auth. The authentication is disabled if -httpAuth.username is empty
  -httpAuth.username string
    	Username for HTTP Basic Auth. The authentication is disabled if empty. See also -httpAuth.password
  -httpListenAddr string
    	TCP address to listen for http connections (default ":8428")
  -import.maxLineLen size
    	The maximum length in bytes of a single line accepted by /api/v1/import; the line length can be limited with 'max_rows_per_line' query arg passed to /api/v1/export
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 104857600)
  -influx.databaseNames array
    	Comma-separated list of database names to return from /query and /influx/query API. This can be needed for accepting data from Telegraf plugins such as https://github.com/fangli/fluent-plugin-influxdb
    	Supports an array of values separated by comma or specified via multiple flags.
  -influx.maxLineSize size
    	The maximum size in bytes for a single InfluxDB line during parsing
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 262144)
  -influxDBLabel string
    	Default label for the DB name sent over '?db={db_name}' query parameter (default "db")
  -influxListenAddr string
    	TCP and UDP address to listen for InfluxDB line protocol data. Usually :8189 must be set. Doesn't work if empty. This flag isn't needed when ingesting data over HTTP - just send it to http://<victoriametrics>:8428/write
  -influxMeasurementFieldSeparator string
    	Separator for '{measurement}{separator}{field_name}' metric name when inserted via InfluxDB line protocol (default "_")
  -influxSkipMeasurement
    	Uses '{field_name}' as a metric name while ignoring '{measurement}' and '-influxMeasurementFieldSeparator'
  -influxSkipSingleField
    	Uses '{measurement}' instead of '{measurement}{separator}{field_name}' for metric name if InfluxDB line contains only a single field
  -influxTrimTimestamp duration
    	Trim timestamps for InfluxDB line protocol data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 1s) may be used for reducing disk space usage for timestamp data (default 1ms)
  -insert.maxQueueDuration duration
    	The maximum duration for waiting in the queue for insert requests due to -maxConcurrentInserts (default 1m0s)
  -logNewSeries
    	Whether to log new series. This option is for debug purposes only. It can lead to performance issues when big number of new series are ingested into VictoriaMetrics
  -loggerDisableTimestamps
    	Whether to disable writing timestamps in logs
  -loggerErrorsPerSecondLimit int
    	Per-second limit on the number of ERROR messages. If more than the given number of errors are emitted per second, the remaining errors are suppressed. Zero values disable the rate limit
  -loggerFormat string
    	Format for logs. Possible values: default, json (default "default")
  -loggerLevel string
    	Minimum level of errors to log. Possible values: INFO, WARN, ERROR, FATAL, PANIC (default "INFO")
  -loggerOutput string
    	Output for the logs. Supported values: stderr, stdout (default "stderr")
  -loggerTimezone string
    	Timezone to use for timestamps in logs. Timezone must be a valid IANA Time Zone. For example: America/New_York, Europe/Berlin, Etc/GMT+3 or Local (default "UTC")
  -loggerWarnsPerSecondLimit int
    	Per-second limit on the number of WARN messages. If more than the given number of warns are emitted per second, then the remaining warns are suppressed. Zero values disable the rate limit
  -maxConcurrentInserts int
    	The maximum number of concurrent inserts. Default value should work for most cases, since it minimizes the overhead for concurrent inserts. This option is tightly coupled with -insert.maxQueueDuration (default 16)
  -maxInsertRequestSize size
    	The maximum size in bytes of a single Prometheus remote_write API request
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 33554432)
  -maxLabelValueLen int
    	The maximum length of label values in the accepted time series. Longer label values are truncated. In this case the vm_too_long_label_values_total metric at /metrics page is incremented (default 16384)
  -maxLabelsPerTimeseries int
    	The maximum number of labels accepted per time series. Superfluous labels are dropped. In this case the vm_metrics_with_dropped_labels_total metric at /metrics page is incremented (default 30)
  -memory.allowedBytes size
    	Allowed size of system memory VictoriaMetrics caches may occupy. This option overrides -memory.allowedPercent if set to a non-zero value. Too low a value may increase the cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache resulting in higher disk IO usage
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
  -memory.allowedPercent float
    	Allowed percent of system memory VictoriaMetrics caches may occupy. See also -memory.allowedBytes. Too low a value may increase cache miss rate usually resulting in higher CPU and disk IO usage. Too high a value may evict too much data from OS page cache which will result in higher disk IO usage (default 60)
  -metricsAuthKey string
    	Auth key for /metrics. It must be passed via authKey query arg. It overrides httpAuth.* settings
  -opentsdbHTTPListenAddr string
    	TCP address to listen for OpenTSDB HTTP put requests. Usually :4242 must be set. Doesn't work if empty
  -opentsdbListenAddr string
    	TCP and UDP address to listen for OpenTSDB metrics. Telnet put messages and HTTP /api/put messages are simultaneously served on TCP port. Usually :4242 must be set. Doesn't work if empty
  -opentsdbTrimTimestamp duration
    	Trim timestamps for OpenTSDB 'telnet put' data to this duration. Minimum practical duration is 1s. Higher duration (i.e. 1m) may be used for reducing disk space usage for timestamp data (default 1s)
  -opentsdbhttp.maxInsertRequestSize size
    	The maximum size of OpenTSDB HTTP put request
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 33554432)
  -opentsdbhttpTrimTimestamp duration
    	Trim timestamps for OpenTSDB HTTP data to this duration. Minimum practical duration is 1ms. Higher duration (i.e. 1s) may be used for reducing disk space usage for timestamp data (default 1ms)
  -pprofAuthKey string
    	Auth key for /debug/pprof. It must be passed via authKey query arg. It overrides httpAuth.* settings
  -precisionBits int
    	The number of precision bits to store per each value. Lower precision bits improve data compression at the cost of precision loss (default 64)
  -promscrape.cluster.memberNum int
    	The number of the member in the cluster of scrapers. It must be a unique value in the range 0 ... promscrape.cluster.membersCount-1 across scrapers in the cluster
  -promscrape.cluster.membersCount int
    	The number of members in a cluster of scrapers. Each member must have a unique -promscrape.cluster.memberNum in the range 0 ... promscrape.cluster.membersCount-1 . Each member then scrapes roughly 1/N of all the targets. By default cluster scraping is disabled, i.e. a single scraper scrapes all the targets
  -promscrape.cluster.replicationFactor int
    	The number of members in the cluster, which scrape the same targets. If the replication factor is greater than 2, then the deduplication must be enabled at remote storage side. See https://docs.victoriametrics.com/#deduplication (default 1)
  -promscrape.config string
    	Optional path to Prometheus config file with 'scrape_configs' section containing targets to scrape. The path can point to local file and to http url. See https://docs.victoriametrics.com/#how-to-scrape-prometheus-exporters-such-as-node-exporter for details
  -promscrape.config.dryRun
    	Checks -promscrape.config file for errors and unsupported fields and then exits. Returns non-zero exit code on parsing errors and emits these errors to stderr. See also -promscrape.config.strictParse command-line flag. Pass -loggerLevel=ERROR if you don't need to see info messages in the output.
  -promscrape.config.strictParse
    	Whether to deny unsupported fields in -promscrape.config . Set to false in order to silently skip unsupported fields (default true)
  -promscrape.configCheckInterval duration
    	Interval for checking for changes in '-promscrape.config' file. By default the checking is disabled. Send SIGHUP signal in order to force config check for changes
  -promscrape.consul.waitTime duration
    	Wait time used by Consul service discovery. Default value is used if not set
  -promscrape.consulSDCheckInterval duration
    	Interval for checking for changes in Consul. This works only if consul_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#consul_sd_config for details (default 30s)
  -promscrape.digitaloceanSDCheckInterval duration
    	Interval for checking for changes in digital ocean. This works only if digitalocean_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#digitalocean_sd_config for details (default 1m0s)
  -promscrape.disableCompression
    	Whether to disable sending 'Accept-Encoding: gzip' request headers to all the scrape targets. This may reduce CPU usage on scrape targets at the cost of higher network bandwidth utilization. It is possible to set 'disable_compression: true' individually per each 'scrape_config' section in '-promscrape.config' for fine grained control
  -promscrape.disableKeepAlive
    	Whether to disable HTTP keep-alive connections when scraping all the targets. This may be useful when targets have no support for HTTP keep-alive connections. It is possible to set 'disable_keepalive: true' individually per each 'scrape_config' section in '-promscrape.config' for fine grained control. Note that disabling HTTP keep-alive may increase load on both vmagent and scrape targets
  -promscrape.discovery.concurrency int
    	The maximum number of concurrent requests to Prometheus autodiscovery API (Consul, Kubernetes, etc.) (default 100)
  -promscrape.discovery.concurrentWaitTime duration
    	The maximum duration for waiting to perform API requests if more than -promscrape.discovery.concurrency requests are simultaneously performed (default 1m0s)
  -promscrape.dnsSDCheckInterval duration
    	Interval for checking for changes in dns. This works only if dns_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dns_sd_config for details (default 30s)
  -promscrape.dockerSDCheckInterval duration
    	Interval for checking for changes in docker. This works only if docker_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#docker_sd_config for details (default 30s)
  -promscrape.dockerswarmSDCheckInterval duration
    	Interval for checking for changes in dockerswarm. This works only if dockerswarm_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#dockerswarm_sd_config for details (default 30s)
  -promscrape.dropOriginalLabels
    	Whether to drop original labels for scrape targets at /targets and /api/v1/targets pages. This may be needed for reducing memory usage when original labels for a big number of scrape targets occupy big amounts of memory. Note that this reduces debuggability for improper per-target relabeling configs
  -promscrape.ec2SDCheckInterval duration
    	Interval for checking for changes in ec2. This works only if ec2_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#ec2_sd_config for details (default 1m0s)
  -promscrape.eurekaSDCheckInterval duration
    	Interval for checking for changes in eureka. This works only if eureka_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#eureka_sd_config for details (default 30s)
  -promscrape.fileSDCheckInterval duration
    	Interval for checking for changes in 'file_sd_config'. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#file_sd_config for details (default 5m0s)
  -promscrape.gceSDCheckInterval duration
    	Interval for checking for changes in gce. This works only if gce_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#gce_sd_config for details (default 1m0s)
  -promscrape.httpSDCheckInterval duration
    	Interval for checking for changes in http endpoint service discovery. This works only if http_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#http_sd_config for details (default 1m0s)
  -promscrape.kubernetes.apiServerTimeout duration
    	How frequently to reload the full state from Kubernetes API server (default 30m0s)
  -promscrape.kubernetesSDCheckInterval duration
    	Interval for checking for changes in Kubernetes API server. This works only if kubernetes_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#kubernetes_sd_config for details (default 30s)
  -promscrape.maxDroppedTargets int
    	The maximum number of droppedTargets to show at /api/v1/targets page. Increase this value if your setup drops more scrape targets during relabeling and you need to investigate labels for all the dropped targets. Note that the increased number of tracked dropped targets may result in increased memory usage (default 1000)
  -promscrape.maxResponseHeadersSize size
    	The maximum size of http response headers from Prometheus scrape targets
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 4096)
  -promscrape.maxScrapeSize size
    	The maximum size of scrape response in bytes to process from Prometheus targets. Bigger responses are rejected
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 16777216)
  -promscrape.minResponseSizeForStreamParse size
    	The minimum target response size for automatic switching to stream parsing mode, which can reduce memory usage. See https://docs.victoriametrics.com/vmagent.html#stream-parsing-mode
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 1000000)
  -promscrape.noStaleMarkers
    	Whether to disable sending Prometheus stale markers for metrics when scrape target disappears. This option may reduce memory usage if stale markers aren't needed for your setup. This option also disables populating the scrape_series_added metric. See https://prometheus.io/docs/concepts/jobs_instances/#automatically-generated-labels-and-time-series
  -promscrape.openstackSDCheckInterval duration
    	Interval for checking for changes in openstack API server. This works only if openstack_sd_configs is configured in '-promscrape.config' file. See https://prometheus.io/docs/prometheus/latest/configuration/configuration/#openstack_sd_config for details (default 30s)
  -promscrape.seriesLimitPerTarget int
    	Optional limit on the number of unique time series a single scrape target can expose. See https://docs.victoriametrics.com/vmagent.html#cardinality-limiter for more info
  -promscrape.streamParse
    	Whether to enable stream parsing for metrics obtained from scrape targets. This may be useful for reducing memory usage when millions of metrics are exposed per each scrape target. It is possible to set 'stream_parse: true' individually per each 'scrape_config' section in '-promscrape.config' for fine grained control
  -promscrape.suppressDuplicateScrapeTargetErrors
    	Whether to suppress 'duplicate scrape target' errors; see https://docs.victoriametrics.com/vmagent.html#troubleshooting for details
  -promscrape.suppressScrapeErrors
    	Whether to suppress scrape errors logging. The last error for each target is always available at '/targets' page even if scrape errors logging is suppressed
  -relabelConfig string
    	Optional path to a file with relabeling rules, which are applied to all the ingested metrics. The path can point either to local file or to http url. See https://docs.victoriametrics.com/#relabeling for details. The config is reloaded on SIGHUP signal
  -relabelDebug
    	Whether to log metrics before and after relabeling with -relabelConfig. If the -relabelDebug is enabled, then the metrics aren't sent to storage. This is useful for debugging the relabeling configs
  -retentionPeriod value
    	Data with timestamps outside the retentionPeriod is automatically deleted
    	The following optional suffixes are supported: h (hour), d (day), w (week), y (year). If suffix isn't set, then the duration is counted in months (default 1)
  -search.cacheTimestampOffset duration
    	The maximum duration since the current time for response data, which is always queried from the original raw data, without using the response cache. Increase this value if you see gaps in responses due to time synchronization issues between VictoriaMetrics and data sources. See also -search.disableAutoCacheReset (default 5m0s)
  -search.disableAutoCacheReset
    	Whether to disable automatic response cache reset if a sample with timestamp outside -search.cacheTimestampOffset is inserted into VictoriaMetrics
  -search.disableCache
    	Whether to disable response caching. This may be useful during data backfilling
  -search.graphiteMaxPointsPerSeries int
    	The maximum number of points per series Graphite render API can return (default 1000000)
  -search.graphiteStorageStep duration
    	The interval between datapoints stored in the database. It is used at Graphite Render API handler for normalizing the interval between datapoints in case it isn't normalized. It can be overridden by sending 'storage_step' query arg to /render API or by sending the desired interval via 'Storage-Step' http header during querying /render API (default 10s)
  -search.latencyOffset duration
    	The time when data points become visible in query results after the collection. Too small value can result in incomplete last points for query results (default 30s)
  -search.logSlowQueryDuration duration
    	Log queries with execution time exceeding this value. Zero disables slow query logging (default 5s)
  -search.maxConcurrentRequests int
    	The maximum number of concurrent search requests. It shouldn't be high, since a single request can saturate all the CPU cores. See also -search.maxQueueDuration (default 8)
  -search.maxExportDuration duration
    	The maximum duration for /api/v1/export call (default 720h0m0s)
  -search.maxLookback duration
    	Synonym to -search.lookback-delta from Prometheus. The value is dynamically detected from interval between time series datapoints if not set. It can be overridden on per-query basis via max_lookback arg. See also '-search.maxStalenessInterval' flag, which has the same meaning due to historical reasons
  -search.maxPointsPerTimeseries int
    	The maximum points per a single timeseries returned from /api/v1/query_range. This option doesn't limit the number of scanned raw samples in the database. The main purpose of this option is to limit the number of per-series points returned to graphing UI such as Grafana. There is no sense in setting this limit to values bigger than the horizontal resolution of the graph (default 30000)
  -search.maxQueryDuration duration
    	The maximum duration for query execution (default 30s)
  -search.maxQueryLen size
    	The maximum search query length in bytes
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 16384)
  -search.maxQueueDuration duration
    	The maximum time the request waits for execution when -search.maxConcurrentRequests limit is reached; see also -search.maxQueryDuration (default 10s)
  -search.maxSamplesPerQuery int
    	The maximum number of raw samples a single query can process across all time series. This protects from heavy queries, which select unexpectedly high number of raw samples. See also -search.maxSamplesPerSeries (default 1000000000)
  -search.maxSamplesPerSeries int
    	The maximum number of raw samples a single query can scan per each time series. This option allows limiting memory usage (default 30000000)
  -search.maxStalenessInterval duration
    	The maximum interval for staleness calculations. By default it is automatically calculated from the median interval between samples. This flag could be useful for tuning Prometheus data model closer to Influx-style data model. See https://prometheus.io/docs/prometheus/latest/querying/basics/#staleness for details. See also '-search.maxLookback' flag, which has the same meaning due to historical reasons
  -search.maxStatusRequestDuration duration
    	The maximum duration for /api/v1/status/* requests (default 5m0s)
  -search.maxStepForPointsAdjustment duration
    	The maximum step when /api/v1/query_range handler adjusts points with timestamps closer than -search.latencyOffset to the current time. The adjustment is needed because such points may contain incomplete data (default 1m0s)
  -search.maxTagKeys int
    	The maximum number of tag keys returned from /api/v1/labels (default 100000)
  -search.maxTagValueSuffixesPerSearch int
    	The maximum number of tag value suffixes returned from /metrics/find (default 100000)
  -search.maxTagValues int
    	The maximum number of tag values returned from /api/v1/label/<label_name>/values (default 100000)
  -search.maxUniqueTimeseries int
    	The maximum number of unique time series each search can scan. This option allows limiting memory usage (default 300000)
  -search.minStalenessInterval duration
    	The minimum interval for staleness calculations. This flag could be useful for removing gaps on graphs generated from time series with irregular intervals between samples. See also '-search.maxStalenessInterval'
  -search.noStaleMarkers
    	Set this flag to true if the database doesn't contain Prometheus stale markers, so there is no need in spending additional CPU time on its handling. Staleness markers may exist only in data obtained from Prometheus scrape targets
  -search.queryStats.lastQueriesCount int
    	Query stats for /api/v1/status/top_queries is tracked on this number of last queries. Zero value disables query stats tracking (default 20000)
  -search.queryStats.minQueryDuration duration
    	The minimum duration for queries to track in query stats at /api/v1/status/top_queries. Queries with lower duration are ignored in query stats (default 1ms)
  -search.resetCacheAuthKey string
    	Optional authKey for resetting rollup cache via /internal/resetRollupResultCache call
  -search.treatDotsAsIsInRegexps
    	Whether to treat dots as is in regexp label filters used in queries. For example, foo{bar=~"a.b.c"} will be automatically converted to foo{bar=~"a\\.b\\.c"}, i.e. all the dots in regexp filters will be automatically escaped in order to match only dot char instead of matching any char. Dots in ".+", ".*" and ".{n}" regexps aren't escaped. This option is DEPRECATED in favor of {__graphite__="a.*.c"} syntax for selecting metrics matching the given Graphite metrics filter
  -selfScrapeInstance string
    	Value for 'instance' label, which is added to self-scraped metrics (default "self")
  -selfScrapeInterval duration
    	Interval for self-scraping own metrics at /metrics page
  -selfScrapeJob string
    	Value for 'job' label, which is added to self-scraped metrics (default "victoria-metrics")
  -smallMergeConcurrency int
    	The maximum number of CPU cores to use for small merges. Default value is used if set to 0
  -snapshotAuthKey string
    	authKey, which must be passed in query string to /snapshot* pages
  -sortLabels
    	Whether to sort labels for incoming samples before writing them to storage. This may be needed for reducing memory usage at storage when the order of labels in incoming samples is random. For example, if m{k1="v1",k2="v2"} may be sent as m{k2="v2",k1="v1"}. Enabled sorting for labels can slow down ingestion performance a bit
  -storage.cacheSizeIndexDBDataBlocks size
    	Overrides max size for indexdb/dataBlocks cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
  -storage.cacheSizeIndexDBIndexBlocks size
    	Overrides max size for indexdb/indexBlocks cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
  -storage.cacheSizeStorageTSID size
    	Overrides max size for storage/tsid cache. See https://docs.victoriametrics.com/Single-server-VictoriaMetrics.html#cache-tuning
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 0)
  -storage.maxDailySeries int
    	The maximum number of unique series that can be added to the storage during the last 24 hours. Excess series are logged and dropped. This can be useful for limiting series churn rate. See also -storage.maxHourlySeries
  -storage.maxHourlySeries int
    	The maximum number of unique series that can be added to the storage during the last hour. Excess series are logged and dropped. This can be useful for limiting series cardinality. See also -storage.maxDailySeries
  -storage.minFreeDiskSpaceBytes size
    	The minimum free disk space at -storageDataPath after which the storage stops accepting new data
    	Supports the following optional suffixes for size values: KB, MB, GB, KiB, MiB, GiB (default 10000000)
  -storageDataPath string
    	Path to storage data (default "victoria-metrics-data")
  -tls
    	Whether to enable TLS (aka HTTPS) for incoming requests. -tlsCertFile and -tlsKeyFile must be set if -tls is set
  -tlsCertFile string
    	Path to file with TLS certificate. Used only if -tls is set. Prefer ECDSA certs instead of RSA certs as RSA certs are slower. The provided certificate file is automatically re-read every second, so it can be dynamically updated
  -tlsKeyFile string
    	Path to file with TLS key. Used only if -tls is set. The provided key file is automatically re-read every second, so it can be dynamically updated
  -version
    	Show VictoriaMetrics version

Amazon EKS (NFS) to Kubernetes pod. Can't mount volume

kind: Deployment
apiVersion: apps/v1
metadata:
  name: victoriametrics
...
  volumes:
  - name: victoriametrics-data
    persistentVolumeClaim:
      claimName: <value of local.name_persistent_volume_claim>

Linear interpolation in PromQL or MetricsQL

metric default predict_linear(metric[1h], 0)
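
Here default is the MetricsQL binary operator that fills gaps in metric with the corresponding values from predict_linear(metric[1h], 0), so missing points are replaced with a linear extrapolation computed over the preceding hour.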

Community Discussions

Trending Discussions on VictoriaMetrics
  • How do I pass a command line argument to a service container?
  • AWS EKS EFS mounted volume. In spite 21Gi in claimed volume the pod has 8E (full possible size of EFS)
  • Amazon EKS (NFS) to Kubernetes pod. Can't mount volume
  • MetricQL function inside the PromQL
  • Does VictoriaMetrics have some way to store string value instead float64?
  • Docker: Can't scrape SonarQube with VictoriaMetrics "vmagent", connection refused
  • Linear interpolation in PromQL or MetricsQL

QUESTION

How do I pass a command line argument to a service container?

Asked 2022-Apr-04 at 19:08

I am trying to set up a bitbucket pipeline that uses a database service provided by a docker container. However, in order to get the database service started correctly, I need to pass an argument to be received by the database container's ENTRYPOINT. I see from the pipeline service doc that it's possible to send variables to the service's docker container, but the option I need to set isn't settable by an environment variable, only by a command line argument.

When I run the database's docker image locally using docker run, I am able to set the option just by adding it to the end of the docker run command, and it gets correctly applied to the container's ENTRYPOINT, so it seems like this should be straightforward, I just can't figure out where to put the argument in bitbucket-pipelines.yml.

Below is my bitbucket-pipelines.yml. Everything about it works great except that I need a way to pass a command line argument to the victoria-metrics container at the end of the file.

image: node:14.16.1
pipelines:
  default:
    - step:
        caches:
          - node
        script:
          - npm install
          - npm test
        services:
          - mongo
          - victoriaMetrics

definitions:
  services:
    mongo:
      image: mongo:3.6
    victoriaMetrics:
      image: victoriametrics/victoria-metrics:v1.75.1

ANSWER

Answered 2022-Apr-04 at 19:08

According to Mark C from Atlassian, there is presently no way to pass command line arguments to service containers. However, he has created a feature request for this capability, which you are welcome to vote for if interested.

In the meantime, the suggested workarounds are:

  1. You can start the service container by running a Docker command within Pipelines as long as the command is not restricted. You can check this link for more information about Docker restricted commands on Pipelines.
  2. You can create your own Docker image (using a Dockerfile) and upload it to Docker Hub, then use that image as a service container on Pipelines.

Source https://stackoverflow.com/questions/71695070

Community Discussions and Code Snippets contain sources from the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install VictoriaMetrics

Add the following lines to the Prometheus config file (usually located at /etc/prometheus/prometheus.yml) in order to send data to VictoriaMetrics:
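
These are the same lines shown in the Prometheus setup section above:

remote_write:
  - url: http://<victoriametrics-addr>:8428/api/v1/write
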
Create a Prometheus datasource in Grafana with the URL http://<victoriametrics-addr>:8428, substituting <victoriametrics-addr> with the hostname or IP address of VictoriaMetrics. Then build graphs and dashboards for the created datasource using PromQL or MetricsQL.
It is safe to upgrade VictoriaMetrics to new versions unless the release notes say otherwise. It is also safe to skip multiple versions during an upgrade and to downgrade to older versions, again unless the release notes say otherwise. Regular upgrades to the latest version are recommended, since new releases may contain important bug fixes, performance optimizations or new features. To upgrade:
Send a SIGINT signal to the VictoriaMetrics process in order to stop it gracefully.
Wait until the process stops. This can take a few seconds.
Start the upgraded VictoriaMetrics.
We recommend using either binary releases or docker images instead of building VictoriaMetrics from sources. Building from sources is reasonable when developing features specific to your needs or when testing bug fixes.
To build from sources, install Go (the minimum supported version is Go 1.17) and run make victoria-metrics from the root folder of the repository. It builds the victoria-metrics binary and puts it into the bin folder.
To build the production binary, install docker and run make victoria-metrics-prod from the root folder of the repository. It builds the victoria-metrics-prod binary and puts it into the bin folder.
ARM builds may run on Raspberry Pi or on energy-efficient ARM servers. Install Go (the minimum supported version is Go 1.17) and run make victoria-metrics-arm or make victoria-metrics-arm64 from the root folder of the repository. It builds the victoria-metrics-arm or victoria-metrics-arm64 binary respectively and puts it into the bin folder. For production ARM builds, install docker and run make victoria-metrics-arm-prod or make victoria-metrics-arm64-prod instead.
Pure Go mode builds only Go code without cgo dependencies. Install Go (the minimum supported version is Go 1.17) and run make victoria-metrics-pure from the root folder of the repository. It builds the victoria-metrics-pure binary and puts it into the bin folder.
Read these instructions on how to set up VictoriaMetrics as a service in your OS. There is also a snap package for Ubuntu.

Support

It is recommended to use the default command-line flag values (i.e. don't set them explicitly) until the need to tweak them arises. It is also recommended to inspect logs during troubleshooting, since they may contain useful information.
