rrdtool | SVN-import of trunk from svn.oetiker.ch | Version Control System library

by octo | C | Version: Current | License: Non-SPDX

kandi X-RAY | rrdtool Summary

rrdtool is a C library typically used in DevOps and Version Control System applications. rrdtool has no bugs and no reported vulnerabilities, but it has low support. However, rrdtool has a Non-SPDX License. You can download it from GitHub.

It is pretty easy to gather status information from all sorts of things, ranging from the temperature in your office to the number of octets which have passed through the FDDI interface of your router. But it is not so trivial to store this data in an efficient and systematic manner. This is where RRDtool kicks in. It lets you log and analyze the data you gather from all kinds of data sources (DS). The data analysis part of RRDtool is based on the ability to quickly generate graphical representations of the data values collected over a definable time period.

            Support

              rrdtool has a low active ecosystem.
              It has 18 stars and 14 forks. There are 5 watchers for this library.
              It had no major release in the last 6 months.
              rrdtool has no issues reported. There is 1 open pull request and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of rrdtool is current.

            Quality

              rrdtool has 0 bugs and 0 code smells.

            Security

              rrdtool has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              rrdtool code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              rrdtool has a Non-SPDX License.
              A Non-SPDX license can be an open-source license that is not SPDX-compliant, or a non-open-source license; review it closely before use.

            Reuse

              rrdtool releases are not available. You will need to build from source code and install.
              It has 421 lines of code, 0 functions and 9 files.
              It has low code complexity. Code complexity directly impacts maintainability of the code.


            rrdtool Key Features

            No Key Features are available at this moment for rrdtool.

            rrdtool Examples and Code Snippets

            No Code Snippets are available at this moment for rrdtool.

            Community Discussions

            QUESTION

            rrd_fetch does not contain values from last update. why?
            Asked 2022-Mar-07 at 23:19

            I am storing information from temperature sensors in a rrd. I want to retrieve the values from a given timespan in php.

            For data from the last 2 hours, I use this:

            ...

            ANSWER

            Answered 2022-Mar-07 at 23:19

            This is because lastupdate() and fetch() do different things.

            When you add data to an RRD, there is a multi-stage process going on in the background. First, the data are converted to Rates; then Normalised; then Consolidated.

            1. Data submitted update()

            2. Data are converted to a Rate. If your DS is of type GAUGE then this is a no-op, but other types will be converted based on the time of the previous update.

            3. Now we get the value returned by lastupdate(), which is this DP (datapoint)

            4. Data are Normalised. This means that we assume a linear progression from the last known datapoint, and calculate the average over the last step Interval (300s in your example above). This gives a new PDP (primary datapoint) on a time window boundary (i.e. with the PDP time being a multiple of the Interval).

            5. The PDPs are now Consolidated. When sufficient PDPs have been collected, a new CDP (Consolidated Datapoint) is written to the RRA, depending on the CF and XFF. If your RRA has 1cdp=1pdp (as in your case) then the CDP should be the same as the PDP.

            6. Now we get the CDP in the RRA, which is what fetch() returns.

            From this, you can see that the value you store is not the same as the lastupdate() (due to Normalisation and Rate); and this is not the same as the fetch() (due to Consolidation).
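            The Normalisation step can be sketched as a toy model in plain Python (illustrative only, not RRDTool's actual code; the `normalise` helper and its inputs are invented for this example):

```python
def normalise(segments, step):
    """Time-weighted average of rate segments across one step Interval.

    segments: list of (duration_seconds, rate) tuples covering the
    Interval; the result is the PDP written on the window boundary.
    """
    assert sum(d for d, _ in segments) == step
    return sum(d * r for d, r in segments) / step

# A rate of 2.0 held for the first 100s, then 5.0 for the remaining
# 200s of a 300s step: the resulting PDP is neither submitted value.
pdp = normalise([(100, 2.0), (200, 5.0)], step=300)
print(pdp)  # 4.0
```

            This is why the value fetch() returns can differ from every value you ever submitted.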

            In addition, you won't see the value appearing in the lastupdate() until the time window has completed; and you won't see it appearing in fetch() until the RRA CDP is completed.

            Since your fetch() is not asking for more data than in your 1cdp=1pdp RRA, you'll avoid changes due to consolidation, but you're likely hitting Normalisation.

            If you want the shown values to be the same, then you need to ensure all 3 operations are no-ops.

            1. Use type gauge to stop Rate conversion

            2. Ensure all updates are done exactly on a time boundary to avoid Normalisation. Don't use 'N' (now) in the update, fix it to a timestamp that is a multiple of the Interval (300s).

            3. Have a 1cdp=1pdp RRA of sufficient size to avoid Consolodation.
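            For point 2, a boundary-aligned timestamp can be computed with integer division (a sketch; the 300s step matches the example above, and the rrd filename in the comment is hypothetical):

```python
import time

STEP = 300  # the RRD step Interval in seconds

# Round "now" down to the nearest Interval boundary, so the update
# timestamp is an exact multiple of STEP and Normalisation is a no-op.
ts = (int(time.time()) // STEP) * STEP

# e.g. then shell out to: rrdtool update temps.rrd <ts>:21.5
print(ts % STEP)  # 0
```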

            For more detail on this, see Alex's famous page here: http://rrdtool.vandenbogaerdt.nl/process.php

            Source https://stackoverflow.com/questions/71374365

            QUESTION

            How to install uwsgi on windows?
            Asked 2022-Feb-22 at 09:41

            I'm trying to install uwsgi for a django project inside a virtual environment; I'm using windows 10.

            I did pip install uwsgi and I got Command "python setup.py egg_info".

            So to resolve the error I followed this SO answer

            As per the answer I installed cygwin and gcc compiler for windows following this.

            Also changed the os.uname() to platform.uname()

            And now when I run `python setup.py install`, I get this error:

            ...

            ANSWER

            Answered 2022-Feb-16 at 14:29

            Step 1: Download this stable release of uWSGI

            Step 2: Extract the tar file inside the site-packages folder of the virtual environment.

            For example the extracted path to uwsgi should be:

            Source https://stackoverflow.com/questions/71092850

            QUESTION

            Install uWSGI on m1 Monterey fails with python 3.10.0
            Asked 2021-Dec-26 at 20:49

            I installed python via pyenv, and then created virtual environment with command python -m venv .venv

            which python

            Returns: /Users/my_name/Development/my_project/.venv/bin/python

            Then pip install uWSGI==2.0.20 fails with following error:

            ...

            ANSWER

            Answered 2021-Dec-26 at 20:49

            Found solution on github:

            https://github.com/unbit/uwsgi/issues/2361

            LDFLAGS=-L/opt/homebrew/Cellar/gettext/0.21/lib pip install --no-cache-dir "uWSGI==2.0.20"

            Source https://stackoverflow.com/questions/70479867

            QUESTION

            Python and uWSGI: Unhandled object from iterator
            Asked 2021-Dec-13 at 10:39

            Please help me understand using uWSGI. Here my Python file:

            ...

            ANSWER

            Answered 2021-Sep-04 at 17:17

            According to the docs, Python 3 requires the WSGI application to return bytes.
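            A minimal sketch of what that looks like (the app itself is invented; it just follows the standard WSGI calling convention):

```python
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    # Python 3: the returned iterable must yield bytes, not str,
    # otherwise uWSGI raises "Unhandled object from iterator".
    return [b'Hello World']  # note the b'' prefix
```

            Returning ['Hello World'] (a str) is what triggers the error.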

            Source https://stackoverflow.com/questions/69057420

            QUESTION

            How to store aggregated data in kdb+
            Asked 2021-Jul-08 at 09:31

            I've faced with an architecture issue: what strategy should I choose to store aggregated data.

            I know that in some time-series DBs, like RRDtool, it is OK to have several DB layers to store 1H, 1W, 1M, 1Y aggregated data.

            Is it normal practice to use the same strategy for kdb+: to have several HDBs with date/month/year/int (for week and other) partitions, with a rule on the Gateway for how to choose an appropriate source?

            As an alternative I have in mind storing all data in a single HDB, in tables like tablenameagg. But that looks less clean to me than several HDBs.

            What points should I take into account for a decision?

            ...

            ANSWER

            Answered 2021-Jul-08 at 09:31

            It's hard to give a general answer as requirements are different for everyone but I can say in my experience that the normal practice is to have a single date-partitioned HDB as this can accommodate the widest range of historical datasets. In terms of increasing granularity of aggregation:

            1. Full tick data - works best as date-partitioned with `p# on sym
            2. Minutely aggregated data - still works well as date-partitioned with `p# on either sym or minute, `g# on either minute or sym
            3. Hourly aggregated data - could be either date-partitioned or splayed depending on volume. Again you can have some combination of attributes on the sym and/or the aggregated time unit (in this case hour)
            4. Weekly aggregated data - given how much this would compress the data you're now likely looking at a splayed table in this date-partitioned database. Use attributes as above.
            5. Monthly/Yearly aggregated data - certainly could be splayed and possibly even flat given how small these tables would be. Attributes almost unnecessary in the flat case.

            Maintaining many different HDBs with different partition styles would seem like overkill to me. But again it all depends on the situation and the volumes of data involved and in the expected usage pattern of the data.

            Source https://stackoverflow.com/questions/68287876

            QUESTION

            rrdtool graph ignoring --step?
            Asked 2021-Feb-09 at 19:23

            I have RRD Files with multiple months of PDP Data (5min Interval).

            For general-purpose graphs it's fine when rrdtool automatically decides which RRA to use for displaying the graph.

            But some of my graphs contain 95th-percentile data in the legend, which I need to be calculated from "exact" 5-min-interval data, because calculating a percentile from aggregated data points can, by its nature, lead to dramatically incorrect values.

            • I can fetch data from the RRD file with a step of 300 and I'll get the right data to calculate the percentile on my own
            • PROBLEM: when graphing with a step of 300, the displayed percentile value varies depending on the width of the graph, even if the time range is the same and 300s data is available for the whole time range
            • if the width of a 1-month graph is 800px, the shown percentile (and also max values, e.g.) is wrong
            • if the width of a 1-month graph is 8000px, the values are correct (matching the self-calculated values from fetched data)

            graph:

            ...

            ANSWER

            Answered 2021-Feb-09 at 19:23

            This is due to data consolidation being performed prior to the VDEF calculation.

            Although your rrdtool graph arguments specify a step of 300s, this is less than the width of one pixel of the graph, and so the data series are further averaged before you get to the VDEF. All the CDEF and VDEF functions will always work with a time series of one cdp per pixel. From the RRDTool manual:

            Note: a step smaller than one pixel will silently be ignored.

            This means that, while you can decrease the resolution of the data, you cannot increase it. Sadly, to get an accurate 95th Percentile, you need higher-resolution data.

            So, if you omit the --step 300 in a narrow graph, what will happen is:

            • You ask for a 1-month time window
            • RRDTool calculates 1 pixel is about 1 hour
            • DS retrieves an Average time series from the 1-hour RRA, one cdp per pixel (i.e. hour)
            • VDEF then consolidates this to a 95th percentile
            • The 95th percentile calculation is inaccurate

            With the --step 300 it is a slightly different process, but the same result:

            • You ask for a 1-month time window, with step 300
            • RRDTool calculates 1 pixel is about 1 hour
            • RRDTool DS retrieves a month's worth of data from the 300s RRA
            • RRDTool further consolidates this data down to 1 cdp per pixel (i.e. per hour) using Average
            • VDEF then consolidates this to a 95th percentile
            • The 95th percentile calculation is inaccurate

            So, you can see the final outcome is the same; it's just where the 300s -> 1h consolidation happens, either in the RRA or at graph time.

            When using a wide graph, the time per pixel becomes smaller, and RRDTool then no longer needs to perform its additional consolidation of the data, resulting in a more accurate calculation:

            • You ask for a 1-month time window
            • RRDTool calculates 1 pixel is about 5 minutes
            • RRDTool DS retrieves a month's worth of data from the 300s RRA
            • No further consolidation is required
            • VDEF then consolidates this to a 95th percentile
            • The 95th percentile calculation is accurate!

            When you retrieve the raw data using rrdtool fetch, this extra consolidation doesn't happen, so you get:

            • You ask for a 1-month time window with step 300
            • RRDTool DS retrieves a month's worth of data from the 300s RRA
            • These data are output
            • Your spreadsheet then calculates a 95th percentile
            • The 95th percentile calculation is correct (well, as close as you can be with a 5min interval)

            Your next question will likely be, how do I stop this from happening? The unfortunate answer is that you cannot. RRDTool does not have a Percentile type CF, and so the correct calculations cannot be performed in the RRA (this would be the only real solution).

            The Routers2 frontend for MRTG calculates 95th percentiles for its graphs, and the way it does it is to perform a high-resolution fetch to get the raw data, calculate the value internally, and pass it in an HRULE when making the graph. In other words, it doesn't use a VDEF at all, due to this problem you are experiencing.
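            The effect is easy to reproduce outside RRDTool. The sketch below uses synthetic data with invented numbers (not the asker's actual series) and a simple nearest-rank percentile:

```python
def p95(values):
    # 95th percentile by nearest rank: the value 95% of the way
    # through the sorted series.
    return sorted(values)[int(len(values) * 0.95)]

# One day of 5-minute samples: a spike of 100 in the last sample of
# every hour, zero otherwise.
raw = ([0.0] * 11 + [100.0]) * 24            # 288 samples
# What a narrow graph works with: one averaged point per hour.
hourly = [sum(raw[i:i + 12]) / 12 for i in range(0, len(raw), 12)]

print(p95(raw))     # 100.0 -- spikes survive at full resolution
print(p95(hourly))  # ~8.33 -- spikes averaged away before the percentile
```

            Averaging first, as the graph-time consolidation does, flattens the spikes and drastically understates the percentile.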

            Source https://stackoverflow.com/questions/66121615

            QUESTION

            RRD Tool Graph data is different between weekly and monthly
            Asked 2021-Jan-19 at 00:26

            When I generate a graph for weekly, the value is 4 but when I generate for monthly, the value is about 2.1

            I tried fetching the data and the value is 4. The data is the same when fetching with the timestamp for a week/month.

            Weekly_Graph

            Monthly_Graph

            My rrd info:

            ...

            ANSWER

            Answered 2021-Jan-19 at 00:26

            This is because of implicit averaging in your graph.

            First, let's leave aside the question of why your RRAs 0, 1 and 2 are all of type AVERAGE with 1cdp=1pdp. Your RRAs 0 and 1 are totally superfluous. You may want to review how you create your RRD file.

            When you do a rrdtool graph, you're going to be using the 1cdp=1pdp RRA, which is in effect the raw data. However, once the data width gets larger than 1 cdp per pixel, RRDTool will implicitly consolidate the data before displaying.

            If you're showing a weekly graph, then you'll end up with 1 pixel per cdp (well, actually 672 cdp for 600 pixels, but it's close enough), and so you'll be displaying the stored value in the RRA (in this case, 4.0). However, when you do it for a month, you'll have 4 times as many samples but the same number of pixels. Thus, your displayed pixels will in fact be the average of 4 data points (well, 4.48 cdp to be precise). This means that, if the data are irregular, it will smooth out the data and you'll get lower peaks, which is what you're seeing (a value of 2.1 in your example).

            When you do a rrdtool fetch, you're pulling the data straight from the RRA without consolidation (unless you explicitly request it). So, in this case, you see the 4.0 again.
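            A toy illustration of that implicit averaging (the numbers are invented, not the asker's actual data, but they reproduce the shape of the problem):

```python
# Stored 1cdp=1pdp values: spiky, with a peak of 4.0.
raw = [4.0, 4.0, 0.0, 0.5] * 6

# A monthly graph has roughly 4 stored points per pixel, so RRDTool
# averages each group of 4 before drawing.
pixels = [sum(raw[i:i + 4]) / 4 for i in range(0, len(raw), 4)]

print(max(raw))     # 4.0   -- what fetch, and the weekly graph, show
print(max(pixels))  # 2.125 -- what the monthly graph shows
```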

            What to do about this depends on what you want to see. Since you have a MAX RRA at the higher resolution, you have a few options -

            • Graph two lines for your monthly graph: an AVERAGE, and a MAX. You may also want to add a MIN RRA and show that line, depending on how spiky the data are.
            • Use your AVERAGE RRA, but explicitly tell RRDTool (in the DS definition) to use MAX as the secondary consolidation method. This is not so good, as it gives a misleadingly high value, but it is more what you're expecting to see.
            • Keep it this way, and add some text to explain that the data have been averaged over the graph pixel time interval.
            • Make the graph 4x wider so that the number of pixels again matches the number of cdp and additional consolidation is not required.

            Source https://stackoverflow.com/questions/65735477

            QUESTION

            Can not display values adequately on the y-axis using rrdtool graph
            Asked 2020-Aug-24 at 05:58

            I use rrdtool as a database for weather data. Everything works fine. Only the output of the average air pressure (measured in hPa) causes problems when output as a graph. The air pressure usually ranges between 960 hPa and 1050 hPa. With the option '--alt-autoscale', the fluctuations in the air pressure are displayed, but not the values on the y-axis. If I enter 1050 as '--upper-limit' and 950 as '--lower-limit', values between 0.8 k and 1.2 k hPa appear on the y-axis, but the line with the average values is parallel to the x-axis (see image). It is also not possible to display values like '1000' on the y-axis instead of SI units like '1.0 k'. Example of the code used for displaying the pressure values:

            ...

            ANSWER

            Answered 2020-Aug-24 at 05:58

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install rrdtool

            You can download it from GitHub.

            Support

            Contributed feature and bug patches are most welcome. But please send complete patches. A complete patch patches the CODE as well as the CHANGES, CONTRIBUTORS and the POD files. Use GNU diff --unified --recursive olddir newdir to build your patches.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/octo/rrdtool.git

          • CLI

            gh repo clone octo/rrdtool

          • sshUrl

            git@github.com:octo/rrdtool.git


            Consider Popular Version Control System Libraries

            • husky by typicode
            • git-lfs by git-lfs
            • go-git by src-d
            • FastGithub by dotnetcore
            • git-imerge by mhagger

            Try Top Libraries by octo

            • liboping by octo (C)
            • statsd-tg by octo (C)
            • retry by octo (Go)
            • librouteros by octo (C)
            • mongoose by octo (C)