riak | Riak is a decentralized datastore from Basho Technologies
kandi X-RAY | riak Summary
Riak is a decentralized datastore from Basho Technologies.
Community Discussions
Trending Discussions on riak
QUESTION
I am trying to build a docker image with a PHP application in it.
This application installs some dependencies via composer.json and, after composer install, needs some customizations done (e.g. some files must be copied from the vendor folder into other locations, and so on).
So I have written these steps as bash commands and put them in the composer.json post-install-cmd section.
This is my composer.json (I've omitted details, but the structure is the same):
...ANSWER
Answered 2022-Jan-21 at 09:22
Please have a look at the documentation of Composer scripts. It explains this pretty clearly:
post-install-cmd: occurs after the install command has been executed with a lock file present.
If you are using composer install with a lock file not present (as indicated by the console output), this event is not fired.
QUESTION
What's the best way to model this data:
...ANSWER
Answered 2021-Mar-23 at 13:50
From the above, I can imagine this being implemented either via maps, possibly containing sets, with map reduce/secondary indexes, or by using Yokozuna (Solr).
Despite this, the above data set looks more like it should be in a relational database than a non-SQL database. For something in the middle, you might want to consider Riak TS (download) which is similar to KV but has a thin SQL layer on top. The current version (1.5.1) is a bit old but we plan to launch an improved version later this year with additional SQL features available.
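For a rough idea of what the map/set approach could look like with the official Riak Python client, here is a minimal sketch. The bucket-type name maps, the bucket name customers and the field names are made up for illustration, and the bucket type is assumed to have already been created and activated with datatype=map:

```python
# Minimal sketch of Riak data types (CRDT map containing registers and a set).
# Assumes a bucket type "maps" created/activated with {"props":{"datatype":"map"}};
# all bucket, key and field names below are illustrative.
import riak
from riak.datatypes import Map

client = riak.RiakClient()  # defaults to a local node
bucket = client.bucket_type('maps').bucket('customers')

customer = Map(bucket, 'customer-123')
customer.registers['first_name'].assign('Ada')
customer.registers['city'].assign('London')
customer.sets['interests'].add('riak')
customer.sets['interests'].add('crdts')
customer.store()

# On a data-type bucket, fetching the key returns the converged Map.
fetched = bucket.get('customer-123')
print(fetched.registers['first_name'].value)
print(fetched.sets['interests'].value)
```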
QUESTION
The current Riak 2.0 API prefixes buckets with types, so it is quite hard to use in a multi-tenant environment, meaning multiple applications accessing the same database. Other databases allow this through namespaces, e.g. the Google App Engine Datastore, where each namespace is totally isolated from the others.
What could be the best work-around to have a multi-tenant Riak KV?
...ANSWER
Answered 2021-Mar-23 at 03:57
I'd suggest using bucket types as an additional namespace (https://www.tiot.jp/riak-docs/riak/kv/2.2.6/using/reference/bucket-types/#buckets-as-namespaces), as this would allow you to give each tenant a namespace bucket type; they can then create buckets and keys inside that bucket-type namespace.
To avoid people fishing on bucket types to see if they can gain access to a different tenant's data, you can then extend this by creating users and granting exclusive permissions by bucket or bucket type (https://www.tiot.jp/riak-docs/riak/kv/2.2.6/using/security/basics/#managing-permissions).
Using these two features together should allow a multi-tenant environment within the same Riak KV instance.
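As a rough sketch of that namespacing idea with the Riak Python client, something like the following keeps two tenants' data isolated even when they use identical bucket and key names. The tenant_a/tenant_b bucket types are hypothetical and would need to be created, activated and locked down with security grants as described above:

```python
# Sketch: per-tenant namespacing via bucket types. Assumes bucket types
# "tenant_a" and "tenant_b" already exist and are activated, and that
# Riak security grants restrict each tenant's user to its own bucket type.
import riak

client = riak.RiakClient()  # defaults to a local node

# Same bucket and key names, but fully isolated because the bucket type differs.
orders_a = client.bucket_type('tenant_a').bucket('orders')
orders_b = client.bucket_type('tenant_b').bucket('orders')

orders_a.new('order-1', data={'total': 42}).store()
orders_b.new('order-1', data={'total': 99}).store()

print(orders_a.get('order-1').data)  # {'total': 42}
print(orders_b.get('order-1').data)  # {'total': 99}
```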
QUESTION
I'm trying to install Riak from source on macOS (https://docs.riak.com/riak/kv/2.2.3/setup/installing/mac-osx.1.html#installing-from-source).
There is a note:
Riak will not compile with Clang. Please make sure that your default C/C++ compiler is GCC
How do I find out which compiler is the default and how to change it?
I'm on macOS Catalina (10.15.4); the which command prints:
...ANSWER
Answered 2020-Apr-20 at 03:11
On macOS Catalina (and prior versions, and most likely subsequent versions too), there are two aspects to the problem and some suggested solutions.
What is the name of the compiler used by make by default?
QUESTION
I need to clean a few buckets in a massive Riak database. For certain buckets, since we had indexes, I just queried those and deleted the keys. But now I'm dealing with two buckets that don't have any indexes. As I have read many times, I shouldn't use keys?keys=true or keys?keys=stream on production systems; however, another way of getting all the keys is to use the $bucket index, as suggested in the documentation, which doesn't warn against using this in production. I believe this was also known as $keys previously. Our system seems to work with either.
However, just before running this on production I've been playing around and found that the $bucket index returns keys that were deleted, just like keys?keys=true/stream, while this was not the case when I used the indexes that we were maintaining ourselves.
Is the $bucket index safe to use in production?
Note that our system runs on the LevelDB backend, which I've been told is bucket scoped, and therefore it would be safe to even run keys?keys=true/stream on it. Is this true?
ANSWER
Answered 2020-Apr-15 at 13:07
The reality is that there are no guarantees when using $bucket or $key, especially if you combine with Map/Reduce, that they will not impact the stability of the cluster. However, if you know there to be a relatively small number of keys in a bucket, or if you use max_results in your query, then $bucket should be relatively safe (certainly a lot better than using list keys).
There are safer ways of erasing masses of keys in Riak KV 2.9.1, which are available regardless of backend (assuming the use of Tictac AAE).
As for $bucket or $key queries returning deleted keys, I suspect that this may well be true due to the fact that this is an "internal" LevelDB query, and within LevelDB it won't be able to distinguish between a Riak object and a Riak tombstone. The improvements in Riak 2.9.x handle this situation, and won't return deleted keys (unless you're looking for tombstones to reap).
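If you do go the $bucket route, a paginated query with max_results keeps the node from having to stream the entire key list in one go. A rough sketch with the Riak Python client follows; the bucket name and page size are made up, and the delete-while-paging pattern is only illustrative:

```python
# Sketch: paginate over all keys in a bucket via the special $bucket index,
# using max_results/continuation so the full key list is never pulled at once.
# Requires a 2i-capable backend such as LevelDB; bucket name is illustrative.
import riak

client = riak.RiakClient()  # defaults to a local node
bucket = client.bucket('stale_data')

continuation = None
while True:
    page = bucket.get_index('$bucket', bucket.name,
                            max_results=1000,
                            continuation=continuation)
    for key in page.results:
        bucket.delete(key)  # note: tombstones linger until reaped
    if not page.continuation:
        break
    continuation = page.continuation
```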
QUESTION
Can anyone please tell me the difference between the following metrics in Riak: 1. node_gets vs vnode_gets, 2. node_puts vs vnode_puts?
As per the documentation, node_gets is the number of gets co-ordinated by a node in the Riak cluster in the last 60 seconds, whereas vnode_gets is the number of gets co-ordinated by vnodes on a particular node. Since vnodes are responsible for managing the partitions and data in a Riak cluster, I am guessing that the node_gets should be a subset of vnode_gets.
If I have to figure out the number of get/put on the cluster by different clients, which among node_gets/vnode_gets and node_puts/vnode_puts should I use?
...ANSWER
Answered 2020-Mar-07 at 00:52
When the client sends a get, it goes to a single node that coordinates the get. The node_gets stat at that node gets incremented.
The node hashes the requested key, looks up the hash in the ring, and gets the n_val (default 3) vnodes that should hold the value. It then forwards the request to the node that owns each of those vnodes. The vnode_gets stat at each of those nodes is then updated.
So each get from the client should equate to 1 node_get and n_val vnode_gets.
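Since node_gets/node_puts count the requests a node coordinates for clients, they are the closer match for measuring client traffic, while vnode_gets/vnode_puts grow roughly n_val times as fast. A quick way to eyeball this is to poll the HTTP stats endpoint; the sketch below assumes a default local node on port 8098:

```python
# Sketch: read Riak's HTTP /stats endpoint and compare coordinated request
# counts (node_gets/node_puts) with vnode-level counts (vnode_gets/vnode_puts).
# With n_val=3, the cluster-wide sum of vnode_gets should be roughly
# three times the sum of node_gets.
import requests

stats = requests.get('http://localhost:8098/stats',
                     headers={'Accept': 'application/json'},
                     timeout=5).json()

for name in ('node_gets', 'vnode_gets', 'node_puts', 'vnode_puts'):
    print(name, stats.get(name))
```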
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported