hypertable | A flexible database focused on performance | Performance Testing library
kandi X-RAY | hypertable Summary
You can either download an appropriate binary package for your platform or build from source. Binary packages can be obtained from [here]. See [this wiki page] for getting started with hypertable binary packages.
Community Discussions
Trending Discussions on hypertable
QUESTION
I have a hypertable with a couple million rows. I'm able to select its size just fine using the following:
SELECT pg_size_pretty( pg_total_relation_size('towns') );
I also have a continuous aggregate for that hypertable:
...ANSWER
Answered 2021-Aug-04 at 14:29
The following SQL can help :)
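The SQL from the answer is truncated above. One hedged way to size a continuous aggregate is via its materialization hypertable; this sketch assumes the TimescaleDB 2.x informational views (adjust names for your version):

```sql
-- size of a continuous aggregate = size of its materialization hypertable
-- (view/column names per TimescaleDB 2.x; adjust for your version)
SELECT view_name,
       pg_size_pretty(
         hypertable_size(
           format('%I.%I', materialization_hypertable_schema,
                           materialization_hypertable_name)::regclass)) AS size
FROM timescaledb_information.continuous_aggregates;
```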
QUESTION
We have a Prometheus Postgres Exporter set up and expect we can get stats of rows inserted into table
...ANSWER
Answered 2022-Mar-11 at 13:32
I'm not sure I understood what you mean by "all affected tables", but to get all hypertables in a single query, you can cast the hypertable name with ::regclass. Example from some playground database with a few random hypertables:
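The example output is truncated above. A sketch of the idea for per-hypertable insert counts, joining the TimescaleDB 2.x chunks view to PostgreSQL's statistics view (since each chunk is a regular table tracked in pg_stat_user_tables):

```sql
-- rows inserted per hypertable: sum pg_stat_user_tables over each
-- hypertable's chunks (chunk list from the TimescaleDB 2.x chunks view)
SELECT c.hypertable_name,
       sum(s.n_tup_ins) AS rows_inserted
FROM timescaledb_information.chunks c
JOIN pg_stat_user_tables s
  ON s.schemaname = c.chunk_schema AND s.relname = c.chunk_name
GROUP BY c.hypertable_name;
```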
QUESTION
Given a hypertable
...ANSWER
Answered 2022-Mar-07 at 12:15
You can get this information about retention policies through the jobs view:
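The jobs-view query is truncated above; a sketch using TimescaleDB 2.x names (the retention interval lives in the job's JSON config):

```sql
-- list retention policies via the jobs view (TimescaleDB 2.x)
SELECT hypertable_name,
       config ->> 'drop_after' AS drop_after,
       schedule_interval
FROM timescaledb_information.jobs
WHERE proc_name = 'policy_retention';
```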
QUESTION
For a project I need two types of tables.
- hypertable (which is a special type of table in PostgreSQL (in PostgreSQL TimescaleDB)) for some timeseries records
- my ordinary tables which are not timeseries
Can I create a PostgreSQL TimescaleDB and store my ordinary tables on it? Are all tables hypertables (time series) on a PostgreSQL TimescaleDB? If not, is there any overhead if I store my ordinary tables in PostgreSQL TimescaleDB?
If I can, is there any benefit to storing my ordinary tables in a separate ordinary PostgreSQL database?
...ANSWER
Answered 2022-Jan-03 at 15:10
Can I create a PostgreSQL TimescaleDB and store my ordinary tables on it?
Absolutely... TimescaleDB is delivered as an extension to PostgreSQL and one of the biggest benefits is that you can use regular PostgreSQL tables alongside the specialist time-series tables. That includes using regular tables in SQL queries with hypertables. Standard SQL works, plus there are some additional functions that Timescale created using PostgreSQL's extensibility features.
Are all the tables a hypertable (time series) on a PostgreSQL TimescaleDB?
No, you have to explicitly create a table as a hypertable for it to implement TimescaleDB features. It would be worth checking out the how-to guides in the Timescale docs for full (and up to date) details.
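To make the distinction concrete, a minimal sketch (table and column names are invented for illustration):

```sql
-- an ordinary PostgreSQL table: nothing TimescaleDB-specific happens
CREATE TABLE devices (
    device_id   integer PRIMARY KEY,
    device_name text
);

-- a time-series table only becomes a hypertable after an explicit call
CREATE TABLE conditions (
    time        timestamptz NOT NULL,
    device_id   integer,
    temperature double precision
);
SELECT create_hypertable('conditions', 'time');

-- both kinds of table can be joined in ordinary SQL
SELECT d.device_name, avg(c.temperature)
FROM conditions c JOIN devices d USING (device_id)
GROUP BY d.device_name;
```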
If not, is there overhead if I store my ordinary tables in PostgreSQL TimescaleDB?
I don't think there's a storage overhead, and you might see some performance gains, e.g. for data ingest and query performance. This article may help clarify that: https://docs.timescale.com/timescaledb/latest/overview/how-does-it-compare/timescaledb-vs-postgres/
Overall you can think of TimescaleDB as providing additional functionality to 'vanilla' PostgreSQL and so unless there's a reason around application design to separate non-time-series data to a separate database then you aren't obliged to do that.
One other point, shared by a very experienced member of our Slack community [thank you Chris]:
To have time-series data and “normal” data (normalized) in one or separate databases for us came down to something like “can we asynchronously replicate the time-series information”? In our case we use two different pg systems, one replicating asynchronously (for TimescaleDB) and one with synchronous replication (for all other data).
Transparency: I work for Timescale
QUESTION
I have next db.
...ANSWER
Answered 2021-Dec-17 at 12:45
Foreign key constraints referencing a hypertable are not supported.
I've tried unique constraints, but that didn't work out either.
So the only way I found to solve this problem was to delete the foreign key in SpnValues.
QUESTION
I am writing some scripts that need to determine the last timestamp of a timeseries datastream that can be interrupted.
I am currently working out the most efficient way to do this. The simplest would be to look for the largest timestamp using MAX. As the tables in question are timescaledb hypertables, they are indexed, so in theory it should be a case of following the index to find the largest value, which should be a very efficient operation. However, I am not sure if this is actually true and was wondering if anyone knew how max scales when working down an index; I know it's an O(n) function normally.
...ANSWER
Answered 2021-Dec-15 at 10:27
If there is an index on the column, max can use the index and will become O(1):
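This can be checked with EXPLAIN; a sketch against a hypothetical conditions table with an index on its time column:

```sql
-- with an index on the time column, max(time) is answered from one
-- end of the index rather than a full scan
EXPLAIN SELECT max(time) FROM conditions;
-- the plan should show a Limit over an Index Only Scan Backward,
-- i.e. a single index entry is read: effectively O(1)
```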
QUESTION
I'm new to TimescaleDB and started by exploring the documentation. It's pretty clear, but it looks like I've missed something important.
I've created a table:
...ANSWER
Answered 2021-Nov-23 at 22:24
Typically the "time" column used to create the hypertable is not used as a segment-by column when setting up compression. The segment_by column is a column that has some commonality across the data set. For example, if we have a table of device readings (device_id, event_timestamp, event_id, reading), the segment-by column could be device_id (say you have a few thousand devices and the device_readings table has data on the order of millions/billions of rows). Note that the data in the segment-by column is never stored in compressed form; only the non-segment-by columns get compressed.
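As a sketch, enabling compression with device_id as the segment-by column for the hypothetical device_readings table from the answer (TimescaleDB 2.x API; the interval is arbitrary):

```sql
-- enable compression, segmenting by device_id and ordering by time
ALTER TABLE device_readings SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id',
    timescaledb.compress_orderby   = 'event_timestamp DESC'
);
-- compress chunks older than a week automatically
SELECT add_compression_policy('device_readings', INTERVAL '7 days');
```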
QUESTION
I am attempting to execute a raw SQL insert statement in SQLAlchemy. SQLAlchemy throws no errors when the constructed insert statement is executed, but the lines do not appear in the database.
As far as I can tell, it isn't a syntax error (see no. 2), it isn't an engine error as the ORM can execute an equivalent write properly (see no. 1), and it's finding the table it's supposed to write to (see no. 3). I think it's a problem with a transaction not being committed and have attempted to address this (see no. 4), but this hasn't solved the issue. Is it possible to create a nested transaction, and what would start the 'first' so to speak?
Thank you for any answers.
Some background:
1. I know that the ORM facilitates this and have used this feature and it works, but it is too slow for our application. We decided to try using raw SQL for this particular write function due to how often it's called, and the ORM for everything else. An equivalent method using the ORM works perfectly, and the same engine is used for both, so it can't be an engine problem, right?
2. I've issued an example of the SQL that the raw-SQL method constructs to the database directly, and that runs fine, so I don't think it's a syntax error.
3. It's communicating with the database properly and can find the table, as any syntax errors with table and column names throw a programmatic error, so it's not just throwing stuff into the 'void' so to speak.
4. My first thought after reading around was that it was a transaction error, that a transaction was being created and not closed, and so I constructed the execute statement to ensure a transaction was properly created and committed.
...
ANSWER
Answered 2021-Oct-27 at 14:46
Just so this question is answered in case anyone else ends up here: the issue was a failure to call commit as a method, as @snakecharmerb pointed out. Gord Thompson also provided an alternative using begin, which commits automatically, rather than connect, which uses a 'commit as you go' style transaction.
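For reference, the commit-as-you-go distinction the answer describes can be sketched with an in-memory SQLite database (a stand-in for the real backend; assumes SQLAlchemy 1.4+ in its 2.0-style mode):

```python
from sqlalchemy import create_engine, text

# in-memory SQLite as a stand-in for the real database;
# future=True selects the 2.0-style API on SQLAlchemy 1.4
engine = create_engine("sqlite://", future=True)

# engine.begin(): the transaction commits automatically on success
with engine.begin() as conn:
    conn.execute(text("CREATE TABLE readings (id INTEGER PRIMARY KEY, value REAL)"))
    conn.execute(text("INSERT INTO readings (value) VALUES (:v)"), {"v": 1.5})

# engine.connect(): "commit as you go" -- an explicit commit() is required.
# Writing `conn.commit` (no parentheses) would silently discard the insert.
with engine.connect() as conn:
    conn.execute(text("INSERT INTO readings (value) VALUES (:v)"), {"v": 2.5})
    conn.commit()

with engine.connect() as conn:
    count = conn.execute(text("SELECT count(*) FROM readings")).scalar()
print(count)  # 2
```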
QUESTION
I have a hypothetical hypertable candle of exchanges and their trading pair data:
ANSWER
Answered 2021-Nov-09 at 18:44
What is the best way to handle this situation?
I'd consider adopting smaller chunks, so compressing and decompressing become faster operations.
Can I bulk load data to compressed hypertables safely?
Timescale is working on implementing this ability.
Can the compression condition consider only filtered exchange_id columns, so that I could compress the exchange_id values I know to be safe by hand?
Maybe try add_partition?
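Both suggestions can be sketched against the candle hypertable (TimescaleDB 2.x functions; the interval values are arbitrary):

```sql
-- shrink the chunk interval for future chunks of the candle hypertable
-- (existing chunks keep their old interval)
SELECT set_chunk_time_interval('candle', INTERVAL '1 day');

-- compress only the chunks you know are safe, by hand
SELECT compress_chunk(c)
FROM show_chunks('candle', older_than => INTERVAL '7 days') c;
```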
QUESTION
I'm trying to convert a timeseries recordset into something a bit more suitable for data analysis. Consider the following contiguous recordset (From, To, Value):
...ANSWER
Answered 2021-Oct-18 at 12:51
As I expected, it was a combination of generate_series and window functions. But I didn't expect to have to create my own locf function; I thought LEAD/LAG had options to remember the last known non-null value.
The following code takes a few known records and unions them against a generated series of timestamps.
I needed to use DISTINCT ON to purge the generated records that already had known equivalents.
Then I can finally use LEAD for the "next date" and locf_any for the value carried forward.
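The answer's code is not shown above; a self-contained sketch of the carry-forward idiom with invented sample data. The grouping trick: a cumulative count of non-null values stays constant across the gap after each known value, so the known value can be spread over its group.

```sql
WITH known(ts, value) AS (
  VALUES ('2021-01-01 00:00'::timestamptz, 10),
         ('2021-01-01 03:00'::timestamptz, 20)
),
series AS (
  SELECT generate_series('2021-01-01 00:00'::timestamptz,
                         '2021-01-01 05:00'::timestamptz,
                         INTERVAL '1 hour') AS ts
),
joined AS (
  SELECT s.ts, k.value,
         -- count() skips NULLs, so the running count only advances
         -- at rows with a known value: it labels each "gap group"
         count(k.value) OVER (ORDER BY s.ts) AS grp
  FROM series s LEFT JOIN known k USING (ts)
)
SELECT ts,
       max(value) OVER (PARTITION BY grp) AS value_filled  -- carried forward
FROM joined
ORDER BY ts;
```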
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install hypertable
Download the source: You can either download a release source tarball from the [download page](http://hypertable.org/download.html) and unpack it in your source directory, say ~/src:

    cd ~/src
    tar zxvf <path_to>/hypertable-<version>-src.tar.gz

or clone from our git repository:

    cd ~/src
    git clone git://scm.hypertable.org/pub/repos/hypertable.git

From now on, we assume that your hypertable source tree is ~/src/hypertable.
Install the development environment: Run the following script to set up the dev environment:

    ~/src/hypertable/bin/src-utils/htbuild --install dev_env

If it did not work for your platform, check out the [HowToBuild](http://code.google.com/p/hypertable/wiki/HowToBuild) wiki for tips on building on various platforms. Patches for htbuild to support your platform are welcome :)
Configure the build: Assuming you want your build tree to be ~/build/hypertable:

    mkdir -p ~/build/hypertable
    cd ~/build/hypertable
    cmake ~/src/hypertable

By default, hypertable gets installed in /opt/hypertable. To install into your own install directory, say $prefix, you can use:

    cmake -DCMAKE_INSTALL_PREFIX=$prefix ~/src/hypertable

By default the build is configured for debug. To make a release build for production/performance testing/benchmarks:

    cmake -DCMAKE_BUILD_TYPE=Release ~/src/hypertable

Note, you can also use ccmake ~/src/hypertable to change build parameters interactively. To build shared libraries, e.g., for scripting language extensions:

    cmake -DBUILD_SHARED_LIBS=ON ~/src/hypertable

Since PHP has no builtin package system, its Thrift installation needs to be specified manually for ThriftBroker support:

    cmake -DPHPTHRIFT_ROOT=~/thrift/lib/php/src ~/src/hypertable
Build Hypertable binaries:

    make
    make install

(Use make -j<number_of_cpus_or_cores_plus_1> for a faster compile.) Note, if it is a shared library install, you might need to do:

    echo $prefix/$version/lib | sudo tee /etc/ld.so.conf.d/hypertable
    sudo /sbin/ldconfig

Or, you can use the usual LD_LIBRARY_PATH (most Unix-like OSes) and DYLD_LIBRARY_PATH (Mac OS X) to specify non-standard shared library directories.
Install the following tools: [doxygen](http://www.stack.nl/~dimitri/doxygen/) and [graphviz](http://www.graphviz.org/). Note: if you ran `htbuild --install dev_env`, these should already be installed.
If you have doxygen installed on your system, then CMake should detect this and add a "doc" target to the makefile. Building the source code documentation tree is just a matter of running the following commands:

    cd ~/build/hypertable
    make doc
- ~/build/hypertable/doc/html/index.html for source code documentation,
- ~/build/hypertable/gen-html/index.html for the Thrift API reference, and
- ~/build/hypertable/hqldoc/index.html for the HQL reference.