dis.io | browser based distributed computing platform | Storage library
kandi X-RAY | dis.io Summary
dis.io is a distributed computing platform that utilises idle CPU cycles from within a web browser or the command line, built purely on JavaScript and node.js. The name dis.io descends from two origins. The first is that it was built as part of my dissertation for my Computing BSc degree at Bournemouth University; the other origin, which I think describes the project much better, is that it is a distributed computing platform.
Community Discussions
Trending Discussions on dis.io
QUESTION
The Redis RPUSH docs here suggest that the return value of RPUSH is the length of the list after the push operation. However, what's not clear to me is:
- Is the result of RPUSH the length of the list after the push operation atomically (so the result is definitely the index of the last item just added by RPUSH), or...
- Is it possible other RPUSH operations from concurrent Redis clients could have executed before the RPUSH returns, so that you are indeed getting the new length of the list, but that length includes elements from other RPUSH commands?
Thanks!
ANSWER
Answered 2021-Jun-03 at 22:18
The operation is atomic, so the result of the RPUSH is indeed the length of the list after the operation.
However, by the time you get the result on the client, the list could have changed in arbitrary ways, since other clients could have pushed items, popped items, etc. So that return value really doesn't tell you anything about the state of the list by the time that you receive it on the client.
If it's important to you that the return value match the state of the list, then that implies that you have a sequence of operations that you want to be atomic, in which case you can use Redis' transaction facilities. For example, if you performed the RPUSH in a Lua script, you could be sure that the return value represented the state of the list, since the entire script would execute as a single atomic operation.
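To make the distinction concrete, here is a toy, single-threaded Python model of the documented RPUSH contract (no Redis involved; a plain dict stands in for the server). Because the append and the length read happen in one atomic step on the server, the return value is always the index of the item you just pushed, plus one:

```python
# Toy in-memory model of RPUSH (illustration only, not Redis itself).
def rpush(store: dict, key: str, value) -> int:
    """Append value to the list at key and return the new length,
    mirroring the documented RPUSH contract."""
    lst = store.setdefault(key, [])
    lst.append(value)
    return len(lst)  # atomic with the append in real Redis

store = {}
n = rpush(store, "mylist", "a")  # n == 1, and "a" sits at index n - 1
m = rpush(store, "mylist", "b")  # m == 2, and "b" sits at index m - 1
```

The caveat from the answer still applies: by the time the value n reaches your client over the network, other clients may already have pushed or popped, so n describes the list only at the instant of your push.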
QUESTION
Recently I have been using spring-boot-starter-data-redis;
I use spring-boot-version:2.3.8.RELEASE;
application.yml
ANSWER
Answered 2021-May-14 at 02:32
RedisSerializer serializes the key "name" to a different key, which is why redisTemplate doesn't seem to work; the key code is in RedisSerializer.
QUESTION
I am trying to use Redis with TLS from a .NET Core application, and I get an authentication error.
The setup: Docker: I created a Redis docker container using redis:6.2.0
docker-compose.yaml:
ANSWER
Answered 2021-May-11 at 10:27
For anyone facing the same issue: it seems the server was using a non-routed CA for the server certificates. The solution I found was to use the CertificateValidation callback of the StackExchange.Redis library with the following code
QUESTION
I am trying to create a Galaxy role for our org's internal galaxy, which I am testing first locally. In our org we use a common list of defaults across all roles.
Ansible is throwing me a "The task includes an option with an undefined variable. The error was: 'redis_download_url' is undefined" error when running my playbook, despite my having defined the variable in defaults/main.yml:
ANSWER
Answered 2021-May-09 at 12:12
As per the documentation:
If you include a task file from a role, it will NOT trigger role behavior; this only happens when running as a role. include_role will work.
To get the role functionality of reading variables from defaults/main.yml, you'll need to use include_role or roles: [].
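A minimal sketch of the difference (the role name my_org.redis is hypothetical; adjust to your own role):

```yaml
# This pulls in only the task file; the role's defaults/main.yml is
# NOT read, so redis_download_url stays undefined:
- name: Include only the task file
  include_tasks: roles/my_org.redis/tasks/main.yml

# This triggers full role behavior, so defaults/main.yml IS read:
- name: Include the full role
  include_role:
    name: my_org.redis
```

Listing the role under roles: [] in the play header has the same defaults-loading behavior as include_role.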
QUESTION
Context:
I'm trying to understand the motivation behind the existence of the WaitBeforeForcingMasterFailover property (and the code associated with it) inside ServiceStack.Redis.RedisSentinel.
If I interpreted the code right - the meaning behind this property seems to cover cases like:
- We have a connection to a healthy sentinel that tells us that a master is at host X
- When we try to establish a connection to the master at host X - we fail due to some reason.
So the logic will be: if we continuously fail to create a connection to X for the WaitBeforeForcingMasterFailover period, initiate a forced failover.
The failover does not need to reach a quorum and can elect a new master just with 1 sentinel available.
SENTINEL FAILOVER Force a failover as if the master was not reachable, and without asking for agreement to other Sentinels (however a new version of the configuration will be published so that the other Sentinels will update their configurations).
Source: https://redis.io/topics/sentinel#sentinel-api
The way it seems to me - this feature can be beneficial in some cases and troublesome in other cases.
For example in case of a network partition if a client is left connected to a minority of sentinels (they can't reach a quorum) and these sentinels point to a master that is no longer reachable - this force failover option will trigger a failover within reachable partition, thus potentially creating a split brain situation.
Coming from Java background I also haven't seen such features available in popular redis clients such as Jedis and Lettuce.
This got me wondering on the following questions:
Are there strong reasons for this feature to be enabled by default? (I understand that you can effectively disable it by setting a huge value.) Is it really worth the risk of interfering with the natural sentinel workflow and potentially introducing problems like the one I've mentioned above?
Will the library work fine with this option disabled? Are there cases that I might have missed where turning this feature off will lead to problems even on some happy paths (no network partition, just regular failovers because of a deployment or a sudden node failure)?
ANSWER
Answered 2021-Apr-30 at 04:41
It's a fallback: if RedisSentinel is unable to establish a connection to a master client within 60s (the default), it will instruct the connected sentinel to force a failover.
You can increase the wait time when configuring RedisSentinel:
QUESTION
I run redis image with docker-compose
I passed redis.conf (and redis says "configuration loaded")
In redis.conf I added a user
ANSWER
Answered 2021-Apr-28 at 23:59
"And yet I can communicate with redis as anonymous even with the uncommented string"
Because there's a default user, and you didn't disable it. If you want to totally disable anonymous access, you should add the following to your redis.conf:
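The config in question presumably resembled the standard ACL stanza; a sketch (Redis 6+ ACL syntax; the named user and password are hypothetical placeholders):

```conf
# redis.conf — disable anonymous access by switching off the default user
user default off

# keep a named user for your application (hypothetical credentials):
user myuser on >mypassword ~* &* +@all
```

With user default off, any client that does not AUTH as a named user is rejected.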
QUESTION
I have a question about when it makes sense to use a Redis cluster versus standalone Redis.
Suppose one has a real-time gaming application that will allow multiple instances of the game, and you wish to implement a real-time leaderboard for each instance. (Games are created by communities of users.)
Suppose at any time we have say 100 simultaneous matches running.
Based on the use cases outlined here :
https://d0.awsstatic.com/whitepapers/performance-at-scale-with-amazon-elasticache.pdf
https://redislabs.com/solutions/use-cases/leaderboards/
We can implement each leaderboard using a Sorted Set dataset in memory.
Now I would like to implement some sort of persistence where leaderboard state is saved at the end of each game as a snapshot. Thus each of these independent Sorted Sets are saved as a snapshot file.
I have a question about design choices:
- Would a Redis cluster make sense for this scenario? Or would it make more sense to have standalone Redis instances and create a new database for each game?
As far as I know there is only a single database 0 for a single Redis cluster (https://redis.io/topics/cluster-spec). In that case, how would snapshotting datasets for each leaderboard at different times work?
From what I can see, using a Redis cluster only makes sense for large-scale monolithic applications and may not be the best approach for the scenario described above. Is that the case?
Or, if one goes with AWS ElastiCache for Redis in cluster mode, can I configure snapshotting for individual datasets?
ANSWER
Answered 2021-Apr-27 at 04:52
You are correct, clustering is a way of scaling out to handle really high request loads and store tons of data.
It really doesn't sound like you need to bother with a cluster. I'd be very surprised if a standalone Redis setup were your bottleneck before you had several tens of thousands of simultaneous players.
If you are unsure, you can probably mock some simulated load and see what it can handle. My guess is that you are better off focusing on other complexities of your game until you start reaching quite serious usage. Which is a good problem to have. :)
You might however want to consider having one or two replica instances, which is a different thing.
Secondly, regardless of cluster or not, why do you want to use snapshots (SAVE or BGSAVE) to persist your scoreboard?
If you want individual snapshots per game, and it's only a few keys per game, why don't you just have your application read and persist those keys to a traditional db when needed? You can, for example, use MULTI, DUMP and RESTORE to achieve something very similar to snapshotting, but on the specific keys you want.
It doesn't sound like multiple databases are warranted for this.
Multiple databases on clustered Redis are only supported in the Enterprise version, so not on ElastiCache. But the above-mentioned approach should work just fine.
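As a sketch of the "read and persist the specific keys yourself" idea, here is a pure-Python round trip (no Redis connection; the scores dict stands in for the result of ZRANGE leaderboard 0 -1 WITHSCORES, and the key name game42 is hypothetical):

```python
import json

def snapshot_leaderboard(scores: dict) -> str:
    """Serialize one game's leaderboard (member -> score) so the
    end-of-game snapshot can be stored in a traditional database."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return json.dumps(ranked)

def restore_leaderboard(blob: str) -> dict:
    """Rebuild the member -> score mapping from a stored snapshot
    (ready to feed back into ZADD if the game is reopened)."""
    return {member: score for member, score in json.loads(blob)}

game42 = {"alice": 1200.0, "bob": 950.0, "carol": 1500.0}
blob = snapshot_leaderboard(game42)  # store this row in your db
```

With DUMP/RESTORE you would instead store the opaque binary payload Redis returns, which also preserves the value's type and encoding; the JSON variant above trades that for a human-readable snapshot.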
QUESTION
I have a Redis data store in which unique keys are stored. My app server will send multiple requests to Redis to get some 100 keys from the start, and I am planning to use the LRANGE command for this.
But my requirement is that each request should receive a unique set of keys, which means that if one request goes to Redis for 100 keys, then those keys will never be returned to any request in the future.
As I understand it, Redis operations are atomic. So can I assume that if multiple requests come from the app server to Redis at the same time, then, since Redis is single-threaded, it will execute LRANGE mylist 0 100 and, once that completes (meaning once the 100 keys are taken and removed from the list), only then will the next request be processed, so atomicity is built in. Is that correct?
Is it ever possible, under any circumstance, that two requests get the same 100 keys?
ANSWER
Answered 2021-Apr-03 at 21:19
QUESTION
I've read this article (https://redis.io/topics/partitioning#why-partitioning-is-useful) about Redis partitioning.
I could see that Redis offers two partitioning method options: range partitioning and hash partitioning.
To me, range partitioning seems no better than hash partitioning in any aspect.
I think range partitioning has some good points, but I don't know what they are.
Please give me some ideas about it.
ANSWER
Answered 2021-Mar-16 at 12:17
Redis does not offer range partitioning.
Redis offers hash partitioning but only in cluster mode.
The article you mentioned just gives a basic idea of how users can use different partitioning schemes (not only range partitioning but others as well) to distribute their data.
QUESTION
I am trying to use the Redis rate-limiting pattern specified in https://redis.io/commands/incr under "Pattern: Rate limiter 1". But how can I scale this when I want to do rate limiting across multiple servers? Say I have a service deployed across 5 servers behind a load balancer, and I want the total requests per API key across the 5 servers not to exceed x/sec.
As per the Redis pattern I mentioned, the problem is that if my rate limiter itself runs on multiple servers, then two different requests to two different rate-limiter servers can do "get key" at the same time and read the same value before anyone updates it, which can allow more requests through. How can I handle this? I could obviously put the GET in a MULTI block, but I think that will make things a lot slower.
ANSWER
Answered 2021-Feb-13 at 16:20
You need to run a Lua script that will check the rate limit and increase/decrease/reset the counter(s).
You can find a simple example in the Laravel framework here:
https://github.com/laravel/framework/blob/8.x/src/Illuminate/Redis/Limiters/DurationLimiter.php
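The essential point is that the read, the check, and the increment must happen as one atomic step, which is exactly what running them in a server-side Lua script gives you. A toy in-memory Python model of that atomic fixed-window counter (the same shape as the INCR pattern; no Redis involved, and the window/limit values are illustrative):

```python
import time

class FixedWindowLimiter:
    """In-memory model of the INCR rate-limiter pattern. In Redis,
    the check-then-increment below would run inside one Lua script,
    so no two app servers can both observe the same counter value
    and both let a request through."""

    def __init__(self, limit: int, window_secs: int = 1):
        self.limit = limit
        self.window = window_secs
        self.counters = {}  # (api_key, window_bucket) -> count

    def allow(self, api_key: str, now=None) -> bool:
        now = time.time() if now is None else now
        bucket = (api_key, int(now) // self.window)
        count = self.counters.get(bucket, 0)
        if count >= self.limit:
            return False              # over the per-window limit
        self.counters[bucket] = count + 1  # the "INCR" step
        return True

limiter = FixedWindowLimiter(limit=3)
results = [limiter.allow("key1", now=100.0) for _ in range(5)]
# first 3 calls allowed, the remaining 2 rejected in the same window
```

In real Redis the expired buckets are cleaned up by setting a TTL on the counter key (EXPIRE inside the same script), rather than kept in a dict as here.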
Community Discussions, Code Snippets contain sources that include Stack Exchange Network