hypertable | A flexible database focused on performance | Performance Testing library

 by   vicaya C++ Version: Current License: GPL-2.0

kandi X-RAY | hypertable Summary

hypertable is a C++ library typically used in Testing, Performance Testing, PostgreSQL applications. hypertable has no bugs, it has no vulnerabilities, it has a Strong Copyleft License and it has low support. You can download it from GitHub.

You can either download an appropriate binary package for your platform or build from source. Binary packages can be obtained from [here]. See [this wiki page] for getting started with hypertable binary packages.

            kandi-support Support

              hypertable has a low active ecosystem.
              It has 108 star(s) with 127 fork(s). There are 9 watchers for this library.
              It had no major release in the last 6 months.
hypertable has no issues reported. There is 1 open pull request and 0 closed requests.
              It has a neutral sentiment in the developer community.
              The latest version of hypertable is current.

            kandi-Quality Quality

              hypertable has 0 bugs and 0 code smells.

            kandi-Security Security

              hypertable has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              hypertable code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              hypertable is licensed under the GPL-2.0 License. This license is Strong Copyleft.
              Strong Copyleft licenses enforce sharing, and you can use them when creating open source projects.

            kandi-Reuse Reuse

              hypertable releases are not available. You will need to build from source code and install.
              Installation instructions, examples and code snippets are available.
              It has 35266 lines of code, 3061 functions and 123 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi's functional review helps you automatically verify the functionalities of the libraries and avoid rework.
Currently covering the most popular Java, JavaScript and Python libraries.

            hypertable Key Features

            No Key Features are available at this moment for hypertable.

            hypertable Examples and Code Snippets

            No Code Snippets are available at this moment for hypertable.

            Community Discussions

            QUESTION

            Finding out the size of a continuous aggregate
            Asked 2022-Mar-15 at 06:54

I have a hypertable with a couple million rows. I'm able to select its size just fine using the following:

            SELECT pg_size_pretty( pg_total_relation_size('towns') );

            I also have a continuous aggregate for that hypertable:

            ...

            ANSWER

            Answered 2021-Aug-04 at 14:29

            The following SQL can help :)
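The answer's own query is elided above; as a hedged sketch of the same idea (assuming TimescaleDB 2.x information views and a hypothetical continuous aggregate named towns_summary), a continuous aggregate is backed by a materialization hypertable, and its size can be measured through that:

-- Look up the materialization hypertable behind the continuous aggregate
-- and measure it (view and function names from TimescaleDB 2.x).
SELECT pg_size_pretty(
         hypertable_size(
           format('%I.%I',
                  materialization_hypertable_schema,
                  materialization_hypertable_name)::regclass))
FROM timescaledb_information.continuous_aggregates
WHERE view_name = 'towns_summary';  -- hypothetical continuous aggregate name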

            Source https://stackoverflow.com/questions/68652129

            QUESTION

            How to get pg_stat_user_tables n_tup_ins for timescale's compressed table?
            Asked 2022-Mar-11 at 13:32

We have a Prometheus Postgres Exporter set up and expect we can get stats on rows inserted into tables

            ...

            ANSWER

            Answered 2022-Mar-11 at 13:32

I'm not sure I understood what you mean by "all affected tables", but to get all hypertables in a single query, you can cast the hypertable name with ::regclass. Example from a playground database with a few random hypertables:
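The answer's example itself is not reproduced here; a hedged sketch of the idea (assuming the TimescaleDB 2.x chunks view) is to sum the chunk-level insert statistics per hypertable:

-- n_tup_ins is tracked per chunk, so aggregate it across each
-- hypertable's chunks (hypothetical playground data).
SELECT c.hypertable_name,
       sum(s.n_tup_ins) AS rows_inserted
FROM timescaledb_information.chunks c
JOIN pg_stat_user_tables s
  ON s.schemaname = c.chunk_schema
 AND s.relname    = c.chunk_name
GROUP BY c.hypertable_name;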

            Source https://stackoverflow.com/questions/71410334

            QUESTION

            TimescaleDB - get retention policy and chunk_time_interval for a table
            Asked 2022-Mar-07 at 12:15

            Given a hypertable

            ...

            ANSWER

            Answered 2022-Mar-07 at 12:15

            You can get this information about retention policies through the jobs view:
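The answer's query is elided above; a hedged sketch of both lookups (assuming TimescaleDB 2.x information views and a hypothetical hypertable named conditions):

-- Retention policy: the drop-after interval lives in the job's config.
SELECT hypertable_name, config
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_retention'
  AND hypertable_name = 'conditions';

-- chunk_time_interval: exposed by the dimensions view.
SELECT hypertable_name, column_name, time_interval
FROM timescaledb_information.dimensions
WHERE hypertable_name = 'conditions';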

            Source https://stackoverflow.com/questions/71354458

            QUESTION

            Have an ordinary table on a PostgreSQL TimescaleDB (timeseries) database
            Asked 2022-Jan-03 at 15:10

            For a project I need two types of tables.

            1. hypertable (which is a special type of table in PostgreSQL (in PostgreSQL TimescaleDB)) for some timeseries records
            2. my ordinary tables which are not timeseries

Can I create a PostgreSQL TimescaleDB and store my ordinary tables on it? Are all the tables a hypertable (time series) on a PostgreSQL TimescaleDB? If not, does it have some overhead if I store my ordinary tables in PostgreSQL TimescaleDB?

If I can, is there any benefit if I store my ordinary tables on a separate ordinary PostgreSQL database?

            ...

            ANSWER

            Answered 2022-Jan-03 at 15:10

            Can I create a PostgreSQL TimescaleDB and store my ordinary tables on it?

            Absolutely... TimescaleDB is delivered as an extension to PostgreSQL and one of the biggest benefits is that you can use regular PostgreSQL tables alongside the specialist time-series tables. That includes using regular tables in SQL queries with hypertables. Standard SQL works, plus there are some additional functions that Timescale created using PostgreSQL's extensibility features.
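As a hedged sketch of that point (hypothetical table names; assumes the timescaledb extension is already installed), a regular table and a hypertable can live in, and be joined within, the same database:

CREATE TABLE users (            -- ordinary PostgreSQL table
    id   serial PRIMARY KEY,
    name text
);

CREATE TABLE measurements (     -- time-series data
    time    timestamptz NOT NULL,
    user_id integer,
    value   double precision
);
SELECT create_hypertable('measurements', 'time');  -- only this table becomes a hypertable

-- Regular tables and hypertables join like any other tables:
SELECT u.name, avg(m.value)
FROM measurements m
JOIN users u ON u.id = m.user_id
GROUP BY u.name;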

            Are all the tables a hypertable (time series) on a PostgreSQL TimescaleDB?

No, you have to explicitly create a table as a hypertable for it to implement TimescaleDB features. It would be worth checking out the how-to guides in the Timescale docs for full (and up-to-date) details.

If not, does it have some overhead if I store my ordinary tables in PostgreSQL TimescaleDB?

I don't think there's a storage overhead. You might see some performance gains, e.g. for data ingest and query performance. This article may help clarify that: https://docs.timescale.com/timescaledb/latest/overview/how-does-it-compare/timescaledb-vs-postgres/

Overall you can think of TimescaleDB as providing additional functionality to 'vanilla' PostgreSQL, and so unless there's a reason around application design to separate non-time-series data into a separate database, you aren't obliged to do that.

            One other point, shared by a very experienced member of our Slack community [thank you Chris]:

            To have time-series data and “normal” data (normalized) in one or separate databases for us came down to something like “can we asynchronously replicate the time-series information”? In our case we use two different pg systems, one replicating asynchronously (for TimescaleDB) and one with synchronous replication (for all other data).

            Transparency: I work for Timescale

            Source https://stackoverflow.com/questions/70563264

            QUESTION

            Key is not present in table, but it is | Postgresql, timescaledb
            Asked 2021-Dec-17 at 12:45

I have the following database.

            ...

            ANSWER

            Answered 2021-Dec-17 at 12:45

            Foreign key constraints referencing a hypertable are not supported.

I've tried unique constraints, but that didn't work out either.

So the only way I found to solve this problem was to delete the foreign key in SpnValues.
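For illustration, a hedged sketch of that workaround (the constraint name below is hypothetical; look it up in pg_constraint first):

-- Find the foreign key constraint(s) on the referencing table.
SELECT conname
FROM pg_constraint
WHERE conrelid = '"SpnValues"'::regclass
  AND contype = 'f';

-- Then drop it (hypothetical constraint name):
ALTER TABLE "SpnValues" DROP CONSTRAINT "SpnValues_spn_fkey";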

            Source https://stackoverflow.com/questions/70363501

            QUESTION

            Big O notation of Postgresql Max with timescaledb index
            Asked 2021-Dec-15 at 10:27

I am writing some scripts that need to determine the last timestamp of a timeseries datastream that can be interrupted.

I am currently working out the most efficient way to do this; the simplest would be to look for the largest timestamp using MAX. As the tables in question are timescaledb hypertables they are indexed, so in theory it should be a case of following the index to find the largest value, which should be a very efficient operation. However, I am not sure if this is actually true and was wondering if anyone knew how max scales when it can work down an index; I know it's an O(n) function normally.

            ...

            ANSWER

            Answered 2021-Dec-15 at 10:27

            If there is an index on the column, max can use the index and will become O(1):
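A hedged sketch of how to confirm this (hypothetical table and index names): Postgres rewrites max() over an indexed column into a backwards index scan that reads a single entry.

CREATE INDEX IF NOT EXISTS readings_ts_idx ON readings (ts);

EXPLAIN SELECT max(ts) FROM readings;
-- Expect a plan along the lines of:
--   Result
--     InitPlan 1 (returns $0)
--       -> Limit
--            -> Index Only Scan Backward using readings_ts_idx on readings

On a hypertable the same rewrite applies per chunk, so the cost grows roughly with the number of chunks rather than the number of rows.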

            Source https://stackoverflow.com/questions/70361963

            QUESTION

            Compressed chunks: performance and data size
            Asked 2021-Nov-23 at 22:24

I'm new to TimescaleDB and started by exploring the documentation. It's pretty clear, but it looks like I've missed something important.

            I've created a table:

            ...

            ANSWER

            Answered 2021-Nov-23 at 22:24

Typically the "time" column used to create the hypertable is not used as a segment-by column when setting up compression. The segment_by column is a column that has some commonality across the data set. For example, if we have a table of device readings (device_id, event_timestamp, event_id, reading), the segment-by column could be device_id (say you have a few thousand devices and the device_readings table has data on the order of millions/billions of rows). Note that the data in the segment-by column is never stored in compressed form; only the non-segment-by columns get compressed.
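A hedged sketch of that setup for the hypothetical device_readings table (TimescaleDB 2.x compression API):

ALTER TABLE device_readings SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',           -- stored uncompressed, groups rows
    timescaledb.compress_orderby   = 'event_timestamp DESC' -- ordering within each segment
);

-- Automatically compress chunks older than seven days:
SELECT add_compression_policy('device_readings', INTERVAL '7 days');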

            Source https://stackoverflow.com/questions/70084656

            QUESTION

            Sql Alchemy Insert Statement failing to insert, but no error
            Asked 2021-Nov-17 at 03:49

I am attempting to execute a raw SQL insert statement in SQLAlchemy. SQLAlchemy throws no errors when the constructed insert statement is executed, but the rows do not appear in the database.

As far as I can tell, it isn't a syntax error (see no. 2), it isn't an engine error as the ORM can execute an equivalent write properly (see no. 1), and it's finding the table it's supposed to write to (see no. 3). I think it's a problem with a transaction not being committed and have attempted to address this (see no. 4), but this hasn't solved the issue. Is it possible to create a nested transaction, and what would start the 'first' one, so to speak?

Thank you for any answers.

            Some background:

1. I know that the ORM facilitates this and have used this feature and it works, but it is too slow for our application. We decided to try using raw SQL for this particular write function because of how often it's called, and the ORM for everything else. An equivalent method using the ORM works perfectly, and the same engine is used for both, so it can't be an engine problem, right?

2. I've issued an example of the SQL that the raw-SQL method constructs to the database directly, and that ran fine, so I don't think it's a syntax error.

3. It's communicating with the database properly and can find the table, as any syntax errors with table and column names throw a programmatic error, so it's not just throwing stuff into the 'void' so to speak.

4. My first thought after reading around was that it was a transaction error, that a transaction was being created and not closed, and so I constructed the execute statement to ensure a transaction was properly created and committed.

              ...

            ANSWER

            Answered 2021-Oct-27 at 14:46

            Just so this question is answered in case anyone else ends up here:

The issue was a failure to call commit as a method, as @snakecharmerb pointed out. Gord Thompson also provided an alternative using 'begin', which commits automatically, rather than connection, which is a 'commit as you go' style of transaction.

            Source https://stackoverflow.com/questions/69646771

            QUESTION

            Backfilling keyed old data to compressed hypertable
            Asked 2021-Nov-09 at 18:59

I have a hypothetical hypertable candle of exchanges and their trading pair data:

            ...

            ANSWER

            Answered 2021-Nov-09 at 18:44

            What is the best way to handle this situation?

I'd consider adopting smaller chunks, so that compressing and decompressing become faster operations.

            Can I bulk load data to compressed hypertables safely?

Timescale is working on implementing this ability.

Can the compression condition consider only filtered exchange_id columns, so that I could compress the exchange_ids I know to be safe by hand?

            Maybe try to add_partition?
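For the decompress-then-backfill route, a hedged sketch (assuming TimescaleDB 2.x and the candle hypertable from the question; a running compression policy may need to be paused first):

-- Decompress the chunks the backfill will touch.
SELECT decompress_chunk(c, if_compressed => true)
FROM show_chunks('candle', older_than => INTERVAL '30 days') AS c;

-- ... bulk-load the historical rows here ...

-- Recompress once the backfill is done.
SELECT compress_chunk(c, if_not_compressed => true)
FROM show_chunks('candle', older_than => INTERVAL '30 days') AS c;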

            Source https://stackoverflow.com/questions/69902173

            QUESTION

            Explode (duplicate) records based on timestamp range (PostgreSQL)
            Asked 2021-Oct-18 at 12:51

I'm trying to convert a timeseries recordset into something a bit more suitable for data analysis. Consider the following contiguous recordset (From, To, Value):

            ...

            ANSWER

            Answered 2021-Oct-18 at 12:51

As I expected, it was a combination of generate_series and window functions. But I didn't expect to have to create my own locf function; I thought LEAD/LAG had options to remember the last known / non-null values.

            The following code takes a few known records, and unions them against a generated series of timestamps.

            I needed to use DISTINCT ON to purge the generated records that already had known equivalents.

            Then I can finally use LEAD for the "next date" and locf_any for the value carried forward.
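A hedged reconstruction of that approach (the table and column names are hypothetical, and the carry-forward here uses a plain window-function trick rather than the answer's custom locf_any aggregate):

WITH merged AS (
    -- Union the known records with a generated 15-minute series,
    -- preferring the known row when timestamps collide.
    SELECT DISTINCT ON (ts) ts, value
    FROM (
        SELECT "from" AS ts, value, 0 AS pri FROM readings
        UNION ALL
        SELECT g, NULL::numeric, 1
        FROM generate_series(timestamptz '2021-01-01',
                             timestamptz '2021-01-02',
                             interval '15 minutes') AS g
    ) u
    ORDER BY ts, pri
), grouped AS (
    -- count(value) ignores NULLs, so each known row opens a new group.
    SELECT ts, value,
           count(value) OVER (ORDER BY ts) AS grp
    FROM merged
)
SELECT ts,
       lead(ts) OVER (ORDER BY ts) AS next_ts,                        -- the "next date"
       first_value(value) OVER (PARTITION BY grp ORDER BY ts) AS value -- carried forward
FROM grouped
ORDER BY ts;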

            Source https://stackoverflow.com/questions/69548377

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install hypertable

Download the source: You can either download a release source tarball from the [download page](http://hypertable.org/download.html) and unpack it in your source directory, say ~/src:

cd ~/src
tar zxvf <path_to>/hypertable-<version>-src.tar.gz

or clone from our git repository:

cd ~/src
git clone git://scm.hypertable.org/pub/repos/hypertable.git

From now on, we assume that your hypertable source tree is ~/src/hypertable.

Install the development environment: Run the following script to set up the dev environment:

~/src/hypertable/bin/src-utils/htbuild --install dev_env

If it did not work for your platform, check out the [HowToBuild](http://code.google.com/p/hypertable/wiki/HowToBuild) wiki for tips on building on various platforms. Patches for htbuild to support your platform are welcome :)

Configure the build: Assuming you want your build tree to be ~/build/hypertable:

mkdir -p ~/build/hypertable
cd ~/build/hypertable
cmake ~/src/hypertable

By default, hypertable gets installed in /opt/hypertable. To install into your own install directory, say $prefix, you can use:

cmake -DCMAKE_INSTALL_PREFIX=$prefix ~/src/hypertable

By default the build is configured for debug. To make a release build for production/performance testing/benchmarking:

cmake -DCMAKE_BUILD_TYPE=Release ~/src/hypertable

Note, you can also use ccmake ~/src/hypertable to change build parameters interactively. To build shared libraries, e.g., for scripting language extensions:

cmake -DBUILD_SHARED_LIBS=ON ~/src/hypertable

Since PHP has no builtin package system, its Thrift installation needs to be specified manually for ThriftBroker support:

cmake -DPHPTHRIFT_ROOT=~/thrift/lib/php/src ~/src/hypertable

Build the Hypertable binaries:

make (or make -j<number_of_cpus_or_cores_plus_1> for a faster compile)
make install

Note, if it is a shared library install, you might need to do:

echo $prefix/$version/lib | sudo tee /etc/ld.so.conf.d/hypertable
sudo /sbin/ldconfig

Or, you can use the usual LD_LIBRARY_PATH (most Unix-like OSes) and DYLD_LIBRARY_PATH (Mac OS X) to specify non-standard shared library directories.
Install the following tools:

[doxygen](http://www.stack.nl/~dimitri/doxygen/)
[graphviz](http://www.graphviz.org/)

Note: if you ran `htbuild --install dev_env`, these should already be installed.
If you have doxygen installed on your system, then CMake should detect this and add a "doc" target to the makefile. Building the source code documentation tree is just a matter of running the following commands:

cd ~/build/hypertable
make doc

Then open:

~/build/hypertable/doc/html/index.html for the source code documentation,
~/build/hypertable/gen-html/index.html for the Thrift API reference, and
~/build/hypertable/hqldoc/index.html for the HQL reference.

            Support


CLONE

• HTTPS: https://github.com/vicaya/hypertable.git
• CLI: gh repo clone vicaya/hypertable
• sshUrl: git@github.com:vicaya/hypertable.git
