evalsha | a hosted redis script site | Database library
kandi X-RAY | evalsha Summary
A content-addressable database of Lua Scripts for the Redis database.
Community Discussions
Trending Discussions on evalsha
QUESTION
I'd like to call a custom EVAL function from within Postgres to a Redis instance (so I'm not sure if redis_fdw will work in this case). The other option is to use PL/Python with a Redis library (https://pypi.org/project/redis/).
ANSWER
Answered 2021-Feb-24 at 07:48
redis_fdw only mentions support for a handful of built-in data structures, so it doesn't look like much help for an EVAL.
A global connection pool will probably not be easy. Postgres runs each connection in its own server process, and you can't pass anything between them without allocating it in shared memory (which I don't see a PL/Python API for, so I think it'd need to be done in C).
But if you can afford to create one Redis connection per Postgres connection, you should be able to reuse it across PL/Python calls by putting it in a shared dictionary:
The global dictionary SD is available to store private data between repeated calls to the same function. The global dictionary GD is public data, available to all Python functions within a session; use with care.
So, something like:
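A minimal sketch of that idea: the SD dictionary is supplied by the PL/Python runtime inside a real function body, so the helper below takes it as an argument, and the connection parameters in the comment are illustrative assumptions, not the answer's actual code.

```python
# Sketch: reuse one Redis connection per Postgres backend by caching it in
# PL/Python's per-function SD dictionary (emulated here as a plain dict).

def get_redis(SD, factory):
    """Return the cached connection from SD, creating it on first use."""
    if "redis" not in SD:
        SD["redis"] = factory()
    return SD["redis"]

# Inside a real PL/Python body, one might write (hypothetical host/port/key):
#   import redis
#   r = get_redis(SD, lambda: redis.Redis(host="redis-host", port=6379))
#   return r.evalsha(script_sha, 1, "some-key")
```

Since Postgres gives each connection its own server process, this caches exactly one Redis connection per backend, which is the per-connection compromise the answer describes.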
QUESTION
I'm trying to power some multi-selection query & filter operations with SCAN operations on my data, and I'm not sure if I'm heading in the right direction.
I am using AWS ElastiCache (Redis 5.0.6).
Key design: <id>:<name>:<type>:<country>
Example:
13434:Guacamole:Dip:Mexico
34244:Gazpacho:Soup:Spain
42344:Paella:Dish:Spain
23444:HotDog:StreetFood:USA
78687:CustardPie:Dessert:Portugal
75453:Churritos:Dessert:Spain
If I want to power queries with complex multi-selection filters (example to return all keys matching five recipe types from two different countries) which the SCAN
glob-style match pattern can't handle, what is the common way to go about it for a production scenario?
Assuming that I will calculate all possible patterns by doing a cartesian product of all field-alternating patterns and multi-field filters:
[[Guacamole, Gazpacho], [Soup, Dish, Dessert], [Portugal]]
*:Guacamole:Soup:Portugal
*:Guacamole:Dish:Portugal
*:Guacamole:Dessert:Portugal
*:Gazpacho:Soup:Portugal
*:Gazpacho:Dish:Portugal
*:Gazpacho:Dessert:Portugal
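The cartesian-product expansion above can be sketched in a few lines (a hedged illustration following the key design in this question; the function name is made up):

```python
# Expand per-field alternatives into SCAN MATCH patterns, leaving the
# leading id segment as a wildcard.
from itertools import product

def build_patterns(field_alternatives):
    return [":".join(("*",) + combo) for combo in product(*field_alternatives)]

patterns = build_patterns([["Guacamole", "Gazpacho"],
                           ["Soup", "Dish", "Dessert"],
                           ["Portugal"]])
# 2 x 3 x 1 = 6 patterns, e.g. '*:Guacamole:Soup:Portugal'
```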
What mechanism should I use to implement this sort of pattern matching in Redis?
- Do multiple SCAN passes, one per scannable pattern, sequentially, and merge the results?
- Use a Lua script with improved pattern matching for each pattern while scanning keys, and get all matching keys in a single SCAN?
- Build an index on top of sorted sets supporting fast lookups of keys matching single fields, and solve matching alternation in the same field with ZUNIONSTORE and intersection of different fields with ZINTERSTORE?
<field>:<value> => key1, key2, keyN (one index set per indexed field value)
- Build an index on top of sorted sets supporting fast lookups of keys matching all dimensional combinations, thereby avoiding unions and intersections but wasting more storage and extending my index keyspace footprint?
<name>:<type>:<country> => key1, key2, keyN (one index set per combination)
- Leverage RediSearch? (While impossible for my use case, see Tug Grall's answer, which appears to be a very nice solution.)
- Other?
I've implemented 1) and performance is awful.
ANSWER
Answered 2020-Sep-28 at 10:20
I would vote for option 3, but I would probably start to use RediSearch.
Have you looked at RediSearch? This module allows you to create secondary indexes and do complex queries and full-text search. This may simplify your development.
I invite you to look at the project and its Getting Started guide.
Once installed you will be able to achieve it with the following commands:
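The answer's actual commands were not captured on this page, so here is a hedged sketch of what the RediSearch approach could look like through redis-py's `execute_command`; the index name, prefix, and schema are assumptions matched to this question's key design, not the answer's exact commands.

```python
# Sketch: a secondary index over recipe hashes, then one query instead of
# many SCAN patterns. `r` is any redis-py client connected to a server with
# the RediSearch module loaded.

def build_query(types, countries):
    """TAG query: match any of `types` AND any of `countries`."""
    return f"@type:{{{'|'.join(types)}}} @country:{{{'|'.join(countries)}}}"

def create_index(r):
    # FT.CREATE recipes ON HASH PREFIX 1 recipe: SCHEMA name TEXT type TAG country TAG
    r.execute_command("FT.CREATE", "recipes", "ON", "HASH", "PREFIX", "1",
                      "recipe:", "SCHEMA", "name", "TEXT",
                      "type", "TAG", "country", "TAG")

def search(r, types, countries):
    return r.execute_command("FT.SEARCH", "recipes",
                             build_query(types, countries))
```

One FT.SEARCH with TAG alternation replaces the whole cartesian product of SCAN patterns, which is why the answer suggests it may simplify development.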
QUESTION
When I was using lock.release from the fakeredis lib, I got the below exception:
ANSWER
Answered 2020-Jun-10 at 18:12
As mentioned in the doc:
Although fakeredis is pure Python, you will need lupa if you want to run Lua scripts (this includes features like redis.lock.Lock, which are implemented in Lua). If you install fakeredis with pip install fakeredis[lua] it will be automatically installed.
So, install the Lua extra: pip install "fakeredis[lua]"
QUESTION
Possibly related to - Redis command to get all available keys?
I have a Redis server storing SixPack data (https://github.com/sixpack/sixpack - a framework to enable A/B testing). I can monitor the Redis server and see the following sample entries when I run the MONITOR command:
ANSWER
Answered 2020-Feb-14 at 23:45Based on the hint from @AlisterBulman, I ran the INFO command like this.
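The exact invocation wasn't captured on this page, but the keyspace overview presumably came from something like INFO keyspace. A hedged sketch with redis-py would be `r.info("keyspace")`, which already returns a parsed dict; the manual parser below shows what the raw wire text looks like (the sample reply is made up):

```python
# Parse the '# Keyspace' section of a raw INFO reply into one dict per database.

def parse_keyspace(info_text):
    dbs = {}
    for line in info_text.splitlines():
        if line.startswith("db"):
            name, stats = line.split(":", 1)
            dbs[name] = dict(kv.split("=") for kv in stats.split(","))
    return dbs

sample = "# Keyspace\ndb0:keys=42,expires=3,avg_ttl=0"
# parse_keyspace(sample)["db0"]["keys"] == "42"
```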
QUESTION
I'm trying to get the bull package working with Redis on Windows. My server is up and running, but when I try to access the job it gives me an error. My code so far:
ANSWER
Answered 2019-Sep-18 at 12:20
As I can see, you are using Redis 2.4.6, but according to the doc, bull requires a Redis version greater than or equal to 2.8.18.
QUESTION
Here are some tests and results I have run against the redis-benchmark tool.
ANSWER
Answered 2020-Jan-06 at 01:41
Why is the Lua script slower in your case?
Because EVALSHA needs to do more work than a single JSON.SET or SET command. When running EVALSHA, Redis needs to push arguments onto the Lua stack, run the Lua script, and pop return values off the Lua stack. That should be slower than a C function call for JSON.SET or SET.
So when does a server-side script have a performance advantage?
First of all, you must run more than one command in the script; otherwise there won't be any performance advantage, as mentioned above.
Secondly, a server-side script runs faster than sending several commands to Redis one by one, getting the results from Redis, and doing the computation work on the client side, because a Lua script saves lots of round-trip time.
Thirdly, if you need to do really complex computation work, a Lua script might not be a good idea: Redis runs the script in a single thread, so if the script takes too much time, it will block other clients. On the client side, by contrast, you can take advantage of multiple cores to do the complex computation.
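A hedged sketch of the "read, compute, write" case the answer describes, done server-side in a single script so there is one round trip instead of two; the key name and increment are illustrative:

```python
# One round trip: read a counter, write the updated value, return the old one.
# With redis-py one could run it as:
#   script = r.register_script(READ_COMPUTE_WRITE)
#   old = script(keys=["counter"], args=[5])
READ_COMPUTE_WRITE = """
local v = tonumber(redis.call('GET', KEYS[1]) or '0')
redis.call('SET', KEYS[1], v + tonumber(ARGV[1]))
return v
"""
```

Pipelining cannot fuse these two commands, because the SET argument depends on the GET reply; that dependency is exactly what the script removes from the network path.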
QUESTION
In the creation of a custom adapter for a given DB that will work correctly with the Microsoft Bot Framework, I am planning on creating something analogous to the cosmosDBPartitionedStorage class in the Bot Framework.
From what I see, there are 3 operations of read, write, and delete that are inherited/implemented from botbuilder storage.
Is there anything from the database perspective that has to be considered in creating this adapter that isn't apparent from reading through a couple of layers of source code, such as initialization()? That is Cosmos-specific, so how should I interpret it for the solution I need?
I plan on using 2 databases, one of which is Redis. I can test this against an Azure Redis instance for my local development, and I think that is a good place to get started. So, in short, this would be for a Redis adapter initially.
Update: I went with a Redis-only cluster solution and it's solid. I was not able to achieve the concurrency checking, because that would have to be a server-side script (I am using scripts for my CRUD operations); that will be a more involved v2 update.
The help @mrichardson gave in the answer below was invaluable for creating your own data store. I was able to get most of the important base tests working in the unit testing for my TypeScript implementation too, all but the concurrency test.
Using Redis, I was able to create an adapter that is compatible with JSON via the RedisJSON module. This is a Redis module you have to install via your cmd or conf file configuration.
The library I went with was IORedis from Luin, and the learning curve was steep: not necessarily because of his library, but because of integrating what Redis does with his library, being a cluster, AND using the RedisJSON module. It was a nice challenge!
Because I went with the RedisJSON module, I had to load Lua scripts and use EVALSHA for every CRUD operation, falling back to EVAL if the script hasn't been loaded or is missing for any reason; the fallback also re-establishes the script.
I'm not sure if there is a great performance gain in using EVALSHA Lua scripting for just read and write operations, but the Redis documentation seems to suggest there is:
A big advantage of scripting is that it is able to both read and write data with minimal latency, making operations like read, compute, write very fast (pipelining can't help in this scenario since the client needs the reply of the read command before it can call the write command).
However, more importantly, the reason I went with scripting in the first place was more to do with the IORedis client. It does pipelining, but since there isn't native support for RedisJSON commands, I had to either use a custom script command (which IORedis doesn't allow in a pipeline, though it does the EVALSHA fallback for you) or create my own EVALSHA-to-EVAL fallback scenario.
Seems to work pretty awesome!
The codebase is for a RedisCluster, and once I am finished putting a few tweaks on it, I will post it as a TypeScript npm package via GitHub and npm.
The inputs also take a TTL setting, which is a good security & performance abstraction for a messaging application such as the Microsoft Bot Framework.
ANSWER
Answered 2019-Dec-11 at 18:00From what I see there are 3 operations of read, write, and delete that are inherited/implemented from botbuilder storage.
Correct. That's all you really need. So long as you can do those successfully, it will work just fine.
Is there anything from the database perspective that has to be considered in creating this adapter that isn't apparent from reading through a couple layers of source code. Such as initialization() for example. That is cosmos specific and how should I interpret that for the solution I need?
Also correct. That is Cosmos-specific. Basically, it:
- Creates database if it doesn't exist
- Stores existing/created database as a property of the class so that when it later checks for the existence of the database, it just looks for the class property and doesn't need to make an HTTP request
- Locks the class while doing the above to prevent concurrency issues
You would want something like the initialization() function if you want to first check that the database exists prior to trying any kind of read/write. It's probably good practice to have something like this to future-proof your bot (if you change or add databases, for example), but it isn't required.
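An initialization() analogue for a Redis-backed store might look like the following hedged sketch; the class and method names are made up, and a real adapter would implement the framework's full read/write/delete interface:

```python
# Check the backend once per storage instance, then skip the check on later
# calls, mirroring what the Cosmos storage class does with its cached property.
class RedisStorage:
    def __init__(self, client):
        self.client = client
        self._initialized = False

    def _ensure_initialized(self):
        if self._initialized:
            return                      # later calls cost nothing
        self.client.ping()              # verify the server is reachable
        self._initialized = True

    def read(self, keys):
        self._ensure_initialized()
        return {k: self.client.get(k) for k in keys}
```

As with the Cosmos version, the point is to pay the existence/connectivity check once and remember the result instead of issuing a network request before every operation.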
this would be for a redis adapter initially.
Unfortunately, we don't have any Redis storage adapters, but here's some additional storage adapters you can look at when you build yours:
- BF SDK Node Azure Adapters (currently Azure Blob, Cosmos)
- BF Node MemoryStorage (don't use in production)
- BF SDK C# Azure Adapters (currently Azure Blob, Cosmos)
- BF C# MemoryStorage (don't use in production)
- Community Repo Node Adapters (currently Azure-Table, MongoDb, MS SQL)
- Community Repo C# Adapters (currently Elasticsearch and EntityFramework)
When writing yours, if you want to make sure it works appropriately, we have a set of Storage Base Tests you can use. Your adapter should pass all of them.
QUESTION
When I run redis-cli script load "$(cat ./scripts/restoreSymbols.lua)" for the following script:
ANSWER
Answered 2020-Jan-01 at 14:31
The --evalsha flag is not a valid redis-cli option. You can use --eval to run your script, like redis-cli --eval ./scripts/restoreSymbols.lua
QUESTION
If you have ~50 events/second, and each event should be handled transactionally (3 SADD operations), which is better:
- Run one Lua script (via EVALSHA) per event?
- Run a single Lua script that iterates over all events and updates them at once?
My considerations: a single EVAL will be at least no slower than EVAL-per-event. The main concern is script execution time. AFAIK, it should block all operations across all Redis namespaces. But I suppose I shouldn't be afraid of 150 SADD operations inside one EVAL, right?
ANSWER
Answered 2019-Aug-30 at 15:02
You'd better do some benchmark tests in your production environment, although I don't think 150 operations are enough to block Redis for long.
In fact, you have another alternative: run 50 Lua scripts in a pipeline. With a pipeline, each of your Lua scripts won't block Redis for a long time (only 3 operations each), and it saves lots of RTT, so it should be much faster than 50 sequential EVALSHA commands.
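The pipeline-of-scripts alternative could be sketched like this with redis-py; the script body and event fields are illustrative, and `register_script` handles the EVALSHA/EVAL fallback for you:

```python
# Each event runs as its own tiny script (3 SADDs), but all of them are sent
# in a single pipeline, so Redis is never blocked for long and the client
# pays one round trip instead of fifty.
EVENT_SCRIPT = """
redis.call('SADD', KEYS[1], ARGV[1])
redis.call('SADD', KEYS[2], ARGV[1])
redis.call('SADD', KEYS[3], ARGV[1])
return 1
"""

def apply_events(r, events):
    script = r.register_script(EVENT_SCRIPT)
    pipe = r.pipeline(transaction=False)
    for e in events:
        # hypothetical event shape: {"keys": [k1, k2, k3], "id": member}
        script(keys=e["keys"], args=[e["id"]], client=pipe)
    return pipe.execute()
```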
QUESTION
I need to be able to make a transaction in Redis that does the following:
- decrement a value by n if and only if the result is > 0
- otherwise, do nothing
- deal with arbitrary-precision decimal numbers (I need them in a float format)
- be accessible to other processes
Put simply, it's a "balance": if I have enough in this field, I can use it; otherwise, no. Sometimes it must decrement many balances.
To do this, I made a Lua script that calculates the result of the decrement, then modifies the fields with this result. I chose this solution because:
- it's atomic
- the simpler INCRBYFLOAT does the subtraction no matter the result, and doesn't seem to have the proper precision
- I used the Lua library http://oss.digirati.com.br/luabignum/
The problems I'm facing:
- The lib used doesn't fit: it's only for integers, and it's too big to send each time (even with EVALSHA, it's slow)
- How to include a third-party library when programming a Lua script in Redis => following that, I'm pretty stuck concerning the usage of additional modules on Redis. However, that's from the past; how are things now?
- I'm not even sure if there is a more efficient way to do this. Any advice on the code itself is welcome.
- Is Redis really a way to fulfill my needs?
The input, "values", has the following format: Array<{ key: string, field: string, value: string // this is actually a BigNumber, in string format }>
ANSWER
Answered 2019-May-13 at 21:22
Maybe getting inspired by the event-sourcing pattern is something that can solve your problem. Another way to achieve atomicity is to limit the writing role to only one processor, whose commands will always be time-ordered (just like Redis with Lua).
1) Send Redis "events" of balance changes, stored in a sorted set (for time ordering, the timestamp being the score). Only store the "command" you want to apply, not the result of the computation. For instance "-1.545466", "+2.07896", etc.
2) Then consume these events via a Lua script from a single processor (you must be sure that only one compute item accesses this data, or you will be in trouble), which can be called in a loop that invokes the script every n seconds (you define your real-time quality), à la Apache Storm's "spout". The script should return the events from the oldest timestamp up to the latest; the timestamps (scores) should be returned as well (without them you will lose the "index"), and of course the actual balance.
You should get back the stored events, paired with their scores.
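A hedged sketch of step 1's storage shape; the key name and member format are assumptions, not the answer's exact layout. With redis-py one would ZADD members scored by timestamp and later ZRANGEBYSCORE them back out in time order:

```python
# Store the change command itself ('+2.07896'), not the computed result,
# as a sorted-set member scored by its timestamp.

def format_event(ts, delta):
    """'<timestamp>:<signed delta>' keeps members unique and self-describing."""
    return f"{ts}:{delta:+}"

# With redis-py (hypothetical key name):
#   r.zadd("balance:events", {format_event(1620000000.0, 2.07896): 1620000000.0})
#   pending = r.zrangebyscore("balance:events", last_seen, "+inf", withscores=True)
```

Embedding the timestamp in the member keeps two identical deltas from colliding as sorted-set members, while the score preserves the time ordering the consumer relies on.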
Community Discussions, Code Snippets contain sources that include Stack Exchange Network