
rocksdb | persistent key-value store | Database library

by facebook | C++ | Version: v7.1.2 | License: Non-SPDX

kandi X-RAY | rocksdb Summary

rocksdb is a C++ library typically used in database applications. rocksdb has no reported bugs or vulnerabilities and has medium support. However, rocksdb has a Non-SPDX license. You can download it from GitHub.
RocksDB is developed and maintained by the Facebook Database Engineering Team. It is built on earlier work on LevelDB by Sanjay Ghemawat (sanjay@google.com) and Jeff Dean (jeff@google.com). This code is a library that forms the core building block for a fast key-value server, and it is especially suited to storing data on flash drives. It has a Log-Structured-Merge-Database (LSM) design with flexible tradeoffs between Write-Amplification Factor (WAF), Read-Amplification Factor (RAF), and Space-Amplification Factor (SAF). It has multi-threaded compactions, making it especially suitable for storing multiple terabytes of data in a single database. Start with the example usage here: https://github.com/facebook/rocksdb/tree/main/examples. See the GitHub wiki for more explanation. The public interface is in include/; callers should not include or rely on the details of any other header files in this package, as those internal APIs may be changed without warning. Questions and discussions are welcome on the RocksDB Developers Public Facebook group and the email list on Google Groups.
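A minimal C++ usage sketch, assuming the library and its headers are installed; the database path and key names here are arbitrary choices for illustration:

#include <cassert>
#include <string>

#include "rocksdb/db.h"

int main() {
  rocksdb::DB* db;
  rocksdb::Options options;
  options.create_if_missing = true;

  // Open (or create) a database at a path chosen for this sketch.
  rocksdb::Status status =
      rocksdb::DB::Open(options, "/tmp/rocksdb_simple_example", &db);
  assert(status.ok());

  // Write a key-value pair, then read it back.
  status = db->Put(rocksdb::WriteOptions(), "key1", "value1");
  assert(status.ok());

  std::string value;
  status = db->Get(rocksdb::ReadOptions(), "key1", &value);
  assert(status.ok() && value == "value1");

  delete db;
  return 0;
}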

Support

  • rocksdb has a medium active ecosystem.
  • It has 22331 star(s) with 5096 fork(s). There are 1003 watchers for this library.
  • There were 2 major release(s) in the last 6 months.
  • There are 411 open issues and 2076 have been closed. On average, issues are closed in 150 days. There are 254 open pull requests and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of rocksdb is v7.1.2.

Quality

  • rocksdb has 0 bugs and 0 code smells.

Security

  • rocksdb has no reported vulnerabilities, and its dependent libraries have none reported either.
  • rocksdb code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • rocksdb has a Non-SPDX License.
  • A Non-SPDX license can be an open-source license that is not SPDX-compliant, or a non-open-source license; review it closely before use.

Reuse

  • rocksdb releases are available to install and integrate.
  • It has 47407 lines of code, 5947 functions and 407 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.

rocksdb Key Features

A library that provides an embeddable, persistent key-value store for fast storage.

The Kafka topic is here, a Java consumer program finds it, but lists none of its content, while a kafka-console-consumer is able to

    LOGGER.info("L'extracteur de données Garmin démarre...");

    /* The input CSV file data has this form:

     Durée,Poids,Variation,IMC,Masse grasse,Masse musculaire squelettique,Masse osseuse,Masse hydrique,
     " 14 Fév. 2022",
     06:37,72.1 kg,0.3 kg,22.8,26.3 %,29.7 kg,3.5 kg,53.8 %,
     " 13 Fév. 2022",
     06:48,72.4 kg,0.2 kg,22.9,25.4 %,29.8 kg,3.6 kg,54.4 %,
   */

    // Create a stream with no key and a string value.
    StreamsBuilder builder = new StreamsBuilder();
    builder.stream("poids_garmin_brut")
            .foreach((k, v) -> {
                LOGGER.info(v.toString());
            });

    KafkaStreams streams = new KafkaStreams(builder.build(), config());
    streams.start();

    // Close the Kafka stream when the JVM shuts down, by having the hook call
    // streams.close();
    Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
2022-02-15 20:05:54 INFO  ConsumerCoordinator:291 - [Consumer clientId=dev1-5e3fab76-51c7-41b5-aedf-99a4a071589b-StreamThread-1-consumer, groupId=dev1] Adding newly assigned partitions: poids_garmin_brut-0
2022-02-15 20:05:54 INFO  StreamThread:229 - stream-thread [dev1-5e3fab76-51c7-41b5-aedf-99a4a071589b-StreamThread-1] State transition from STARTING to PARTITIONS_ASSIGNED
2022-02-15 20:05:54 INFO  ConsumerCoordinator:844 - [Consumer clientId=dev1-5e3fab76-51c7-41b5-aedf-99a4a071589b-StreamThread-1-consumer, groupId=dev1] Setting offset for partition poids_garmin_brut-0 to the committed offset FetchPosition{offset=21, offsetEpoch=Optional.empty, currentLeader=LeaderAndEpoch{leader=Optional[LAPTOP-J1JBHQUR:9092 (id: 0 rack: null)], epoch=0}}
2022-02-15 20:05:54 INFO  StreamTask:240 - stream-thread [dev1-5e3fab76-51c7-41b5-aedf-99a4a071589b-StreamThread-1] task [0_0] Initialized
2022-02-15 20:05:54 INFO  StreamTask:265 - stream-thread [dev1-5e3fab76-51c7-41b5-aedf-99a4a071589b-StreamThread-1] task [0_0] Restored and ready to run
2022-02-15 20:05:54 INFO  StreamThread:882 - stream-thread [dev1-5e3fab76-51c7-41b5-aedf-99a4a071589b-StreamThread-1] Restoration took 30 ms for all tasks [0_0]
2022-02-15 20:05:54 INFO  StreamThread:229 - stream-thread [dev1-5e3fab76-51c7-41b5-aedf-99a4a071589b-StreamThread-1] State transition from PARTITIONS_ASSIGNED to RUNNING
2022-02-15 20:05:54 INFO  KafkaStreams:332 - stream-client [dev1-5e3fab76-51c7-41b5-aedf-99a4a071589b] State transition from REBALANCING to RUNNING
2022-02-15 20:05:54 INFO  KafkaConsumer:2254 - [Consumer clientId=dev1-5e3fab76-51c7-41b5-aedf-99a4a071589b-StreamThread-1-consumer, groupId=dev1] Requesting the log end offset for poids_garmin_brut-0 in order to compute lag
2022-02-15 20:06:03 INFO  Main:33 - Test22
2022-02-15 20:06:06 INFO  Main:33 - Test23
-----------------------
streams.start();
streams.close();
config.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");

How to replace RocksDB by in-memory db just for integration tests?

    final var storeBuilder = Stores.windowStoreBuilder(
//        Stores.persistentWindowStore(storeName, Duration.ofMinutes(10), Duration.ofMinutes(1),
//            false),
        Stores.inMemoryWindowStore(storeName, Duration.ofMinutes(10), Duration.ofMinutes(1),
            false),
        Serdes.String(),
        Serdes.String()
    );
    builder.addStateStore(storeBuilder);
-----------------------
@Profile("prod || stage || test")
@Configuration
class PersistentStoreConfiguration {
    @Bean
    fun projektanhangStoreSupplier(): KeyValueBytesStoreSupplier = Stores.persistentKeyValueStore(ProjektanhangStore.NAME)
}

@Profile( "it || dev")
@Configuration
class ProjektInMemoryStoreConfiguration {
    @Bean
    fun projektanhangStoreSupplier(): KeyValueBytesStoreSupplier = Stores.inMemoryKeyValueStore(ProjektanhangStore.NAME)
}
@Configuration
class ProjektAnhangStreamConfiguration {
    @Inject
    private lateinit var projektanhangStoreSupplier: KeyValueBytesStoreSupplier

    @Bean
    fun projektanhaenge() = Consumer<KStream<String, AnhangEvent>> {
        it.map { _, v -> KeyValue(v.anhang.projektId, v) }
            .groupByKey(Grouped.with(Serdes.StringSerde(), JsonSerde(AnhangEvent::class.java)))
            .aggregate(
                { ProjektanhangAggregator() },
                { _, anhangEvent, aggregator ->
                    when (anhangEvent.action) {
                        CREATE -> aggregator.add(anhangEvent.anhang)
                        DELETE -> aggregator.remove(anhangEvent.anhang)
                        UPDATE -> aggregator.update(anhangEvent.anhang)
                    }
                },
                Materialized
                    .`as`<String, ProjektanhangAggregator>(projektanhangStoreSupplier)
                    .withKeySerde(Serdes.String())
                    .withValueSerde(JsonSerde(ProjektanhangAggregator::class.java))
            )
    }
}

Azure Flink checkpointing to Azure Storage: No credentials found for account

execution.checkpointing.interval: 10s
execution.checkpointing.mode: EXACTLY_ONCE
state.backend: rocksdb
state.checkpoints.dir: wasbs://<container>@<storage-account>.blob.core.windows.net/checkpoint/

# azure storage access key
fs.azure.account.key.psbombb.blob.core.windows.net: <access-key>
./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<cluster-name> -Dkubernetes.namespace=<your-namespace> -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-azure-fs-hadoop-1.14.0.jar -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-azure-fs-hadoop-1.14.0.jar
./bin/flink run --target kubernetes-session -Dkubernetes.namespace=<your-namespace> -Dkubernetes.cluster-id=<cluster-name> ~/path/to/project/<your-jar>.jar

View Size of Flink State Maintained By Each Operator

/jobs/:jobid/checkpoints/details/:checkpointid/subtasks/:vertexid

Using unique_ptr with an interface requiring pointer-pointer, to an abstract class

// Option 1: wrap the raw pointer returned by DB::Open in a new unique_ptr.
std::unique_ptr<rocksdb::DB> _db; // Class member
...
rocksdb::DB* db;
const rocksdb::Status status = rocksdb::DB::Open(options, fileFullPath, &db);
_db = std::unique_ptr<rocksdb::DB>(db);

// Option 2: same idea, but hand the raw pointer to the existing unique_ptr via reset().
std::unique_ptr<rocksdb::DB> _db; // Class member
...
rocksdb::DB* db;
const rocksdb::Status status = rocksdb::DB::Open(options, fileFullPath, &db);
_db.reset(db);
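If the wrapping should happen in one place, a small factory can hide the raw-pointer step. A sketch (OpenDb is a hypothetical helper name, not part of the RocksDB API):

#include <memory>
#include <string>

#include "rocksdb/db.h"

// Hypothetical factory: DB::Open requires a raw DB** out-parameter, so take
// ownership of the raw pointer immediately and return a unique_ptr
// (empty if Open failed).
std::unique_ptr<rocksdb::DB> OpenDb(const rocksdb::Options& options,
                                    const std::string& path,
                                    rocksdb::Status* status_out) {
  rocksdb::DB* raw = nullptr;
  *status_out = rocksdb::DB::Open(options, path, &raw);
  return std::unique_ptr<rocksdb::DB>(raw);
}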

How do I specify a Range including everything in a RocksDB column?

from six import int2byte  # int2byte(n) returns bytes([n])

def strinc(key):
    # Return the first key that sorts after every key sharing this prefix.
    key = key.rstrip(b"\xff")
    return key[:-1] + int2byte(ord(key[-1:]) + 1)
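A rough C++ equivalent of the same idea, shown as a sketch (PrefixSuccessor and ApproximateSizeForPrefix are hypothetical helper names; it assumes the default bytewise comparator):

#include <cstdint>
#include <string>

#include "rocksdb/db.h"

// Hypothetical helper mirroring the Python strinc() above: returns the first
// key that sorts after every key sharing the given prefix.
std::string PrefixSuccessor(std::string prefix) {
  // Drop trailing 0xff bytes, then increment the last remaining byte.
  while (!prefix.empty() &&
         static_cast<unsigned char>(prefix.back()) == 0xff) {
    prefix.pop_back();
  }
  if (!prefix.empty()) {
    prefix.back() = static_cast<char>(
        static_cast<unsigned char>(prefix.back()) + 1);
  }
  return prefix;  // empty result means "no upper bound"
}

// Example: approximate on-disk size of everything under the prefix "images/".
void ApproximateSizeForPrefix(rocksdb::DB* db) {
  const std::string start = "images/";
  const std::string limit = PrefixSuccessor(start);  // "images0"
  rocksdb::Range range(start, limit);
  uint64_t size = 0;
  db->GetApproximateSizes(&range, 1, &size);
}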

python-rocksdb installation -llz4 missing inside /usr/bin/ld

sudo apt-get install liblz4-dev

Want to put binary data of images into RocksDB in C++

char* buffer = ...
db->Put(WriteOptions(), file_key, buffer);
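One caveat: image bytes can contain embedded '\0' characters, and a plain char* is treated as a NUL-terminated string, so passing an explicit length via rocksdb::Slice is safer. A sketch under that assumption (PutImageFile is a hypothetical helper name):

#include <fstream>
#include <iterator>
#include <string>
#include <vector>

#include "rocksdb/db.h"
#include "rocksdb/slice.h"

// Hypothetical helper: read a whole file into memory and store its raw bytes
// under file_key. The explicit (data, size) Slice preserves embedded '\0' bytes.
rocksdb::Status PutImageFile(rocksdb::DB* db, const std::string& file_key,
                             const std::string& path) {
  std::ifstream in(path, std::ios::binary);
  std::vector<char> buffer((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());
  return db->Put(rocksdb::WriteOptions(), file_key,
                 rocksdb::Slice(buffer.data(), buffer.size()));
}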

Is it possible to restore a Kafka Streams state store after a restart without using changelog topics?

// tell Kafka Streams to optimize the topology
config.setProperty(StreamsConfig.TOPOLOGY_OPTIMIZATION, StreamsConfig.OPTIMIZE);

// Since we've configured Streams to use optimizations, the topology is optimized during the build.
// And because optimizations are enabled, the resulting topology will no longer need to perform
// three explicit repartitioning steps, but only one.
final Topology topology = builder.build(config);
final KafkaStreams streams = new KafkaStreams(topology, config);

cstdlib not able to resolve using ::wcstombs

diff --git a/stdlib.h b/stdlib.h
index f255e4a..d88ef89 100644
--- a/stdlib.h
+++ b/stdlib.h
@@ -931,10 +931,11 @@ extern int wctomb (char *__s, wchar_t __wchar) __THROW;
 
 /* Convert a multibyte string to a wide char string.  */
 extern size_t mbstowcs (wchar_t *__restrict  __pwcs,
-                       const char *__restrict __s, size_t __n) __THROW
+                       const char *__restrict __s, size_t __n) __THROW;
 /* Convert a wide char string to multibyte string.  */
 extern size_t wcstombs (char *__restrict __s,
-                       const wchar_t *__restrict __pwcs, size_t __n) __THROW
+                       const wchar_t *__restrict __pwcs, size_t __n)
+     __THROW;
 
 #ifdef __USE_MISC
 /* Determine whether the string value of RESPONSE matches the affirmation
@@ -988,7 +989,7 @@ extern char *ptsname (int __fd) __THROW __wur;
    terminal associated with the master FD is open on in BUF.
    Return 0 on success, otherwise an error number.  */
 extern int ptsname_r (int __fd, char *__buf, size_t __buflen)
-     __THROW __nonnull ((2))
+     __THROW __nonnull ((2));

Community Discussions

Trending Discussions on rocksdb
  • Kafka Streams RocksDB large state
  • The Kafka topic is here, a Java consumer program finds it, but lists none of its content, while a kafka-console-consumer is able to
  • Flink job requiring a lot of memory despite using rocksdb state backend
  • Flink state using RocksDB
  • How can I solve busy time problem in process function?
  • How does Flink save state in checkpoint/savepoint if some of state descriptor is removed
  • How to run Faust from Docker - ERROR: Failed building wheel for python-rocksdb
  • What would happen if a key is not seen but rocksdb has state about that key?
  • Flink StateFun high availability exception: "java.lang.IllegalStateException: There is no operator for the state ....."
  • Where does Flink store Timers and State ttl?

QUESTION

Kafka Streams RocksDB large state

Asked 2022-Apr-03 at 20:15

Is it okay to hold large state in RocksDB when using Kafka Streams? We are planning to use RocksDB as an event store to hold billions of events for an indefinite amount of time.

ANSWER

Answered 2022-Apr-03 at 20:15

The main limitation would be disk space, so sure, it can be done, but if the app crashes for any reason, you might be waiting for a while for the app to rebuild its state.

Source https://stackoverflow.com/questions/71728337

Community Discussions and Code Snippets include sources from the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install rocksdb

You can download it from GitHub.

Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask them on the Stack Overflow community page.


  • © 2022 Open Weaver Inc.