redis-ha | Reliable, Scalable Redis on OpenShift | Cloud library

 by openlab-red | Shell | Version: Current | License: MIT

kandi X-RAY | redis-ha Summary

redis-ha is a Shell library typically used in Cloud applications. redis-ha has no bugs, it has no vulnerabilities, it has a Permissive License and it has low support. You can download it from GitHub.

Reliable, Scalable Redis on OpenShift

            Support

              redis-ha has a low active ecosystem.
              It has 15 star(s) with 13 fork(s). There are 2 watchers for this library.
              It had no major release in the last 6 months.
              There are 3 open issues and 2 have been closed. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of redis-ha is current.

            Quality

              redis-ha has no bugs reported.

            Security

              redis-ha has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            License

              redis-ha is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              redis-ha releases are not available. You will need to build from source code and install.
              Installation instructions, examples and code snippets are available.


            redis-ha Key Features

            No Key Features are available at this moment for redis-ha.

            redis-ha Examples and Code Snippets

            No Code Snippets are available at this moment for redis-ha.

            Community Discussions

            QUESTION

            How do I connect a helm package to a PersistentStorage volume?
            Asked 2020-Jul-09 at 10:27

            I've seen this question come up often, and I've yet to find a clean, generic solution. I'm just learning Kubernetes so maybe there's something basic I'm missing. But here's what I've done:

            1. install docker-desktop with kubernetes
            2. manually create a persistent-storage volume using a yaml file (shown below)
            3. helm install redis dandydev/redis-ha

            Or you can use any other helm chart, be it elasticsearch, postgres, you name it. I always get the error "pod has unbound immediate PersistentVolumeClaims".

            Also, when I run kubectl get storageclasses.storage.k8s.io I do have a (default) storage class:

            ...

            ANSWER

            Answered 2020-Jul-09 at 10:27

            Ok, so I looked around online at the various custom solutions, and one did work: https://github.com/helm/charts/issues/12521#issuecomment-477834805

            In addition, this answer provides more detail on how to enable dynamic provisioning locally: pod has unbound PersistentVolumeClaims

            Basically (in addition to having the volume created above) I need to manually:

            1. create a storage class, via storage-class.yaml
            2. add that storage class to the helm chart in values.yaml (a sketch of both steps follows below)
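
            A minimal sketch of those two steps, assuming a manually provisioned local volume. The class name local-storage is arbitrary, and the persistentVolume.storageClass value path is an assumption based on stable/redis-ha; check the values.yaml of the chart you actually use.

            # storage-class.yaml - a class for manually provisioned local volumes
            apiVersion: storage.k8s.io/v1
            kind: StorageClass
            metadata:
              name: local-storage
            provisioner: kubernetes.io/no-provisioner
            volumeBindingMode: WaitForFirstConsumer

            # Create the class, then point the chart at it.
            kubectl apply -f storage-class.yaml
            helm install redis dandydev/redis-ha --set persistentVolume.storageClass=local-storage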

            Source https://stackoverflow.com/questions/62455233

            QUESTION

            How to uninstall a component using helm in Kubernetes
            Asked 2020-Jun-04 at 05:49

            I installed prometheus-operator using helm v3.2.1 like this:

            ...

            ANSWER

            Answered 2020-Jun-04 at 05:49

            Use the command below to view the release name and namespace name:
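
            A minimal sketch with helm v3 (the release and namespace names below are placeholders; substitute the values shown by helm list):

            # List every release together with its namespace.
            helm list --all-namespaces

            # Uninstall the release by name in its namespace.
            helm uninstall prometheus-operator -n monitoring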

            Source https://stackoverflow.com/questions/62187305

            QUESTION

            Best way to run Redis/Rejson with HA on AWS
            Asked 2020-Jan-10 at 07:16

            As AWS & GCP do not provide a managed service for any of the Redis modules, I am looking to run Redis ReJSON with an HA configuration on AWS.

            Is the best way to set it up on EC2 with RDB backups? How will EBS storage work, given that I want multi-AZ support and automatic failover?

            Right now I am planning to deploy it on Kubernetes with this helm chart: https://hub.helm.sh/charts/stable/redis-ha

            Which is the better option to deploy on, EC2 or Kubernetes? And how will data replication work across multiple AZs if deployed using EC2 or Kubernetes?

            ...

            ANSWER

            Answered 2020-Jan-10 at 07:16

            RedisLabs provides a managed Redis with module support on both AWS and GCP.

            See: Cloud PRO https://redislabs.com/redis-enterprise-cloud/

            Source https://stackoverflow.com/questions/59658092

            QUESTION

            Production ready Kubernetes redis
            Asked 2019-Oct-01 at 13:38

            I want to run Redis with the ReJSON module on production Kubernetes.

            Right now in staging, I am running a single pod of the Redis database as a stateful set.

            Is there a helm chart available? Can anyone please share it?

            I have tried editing stable/redis and stable/redis-ha with the redislabs/rejson image but it's not working.

            What I have done

            ...

            ANSWER

            Answered 2019-Oct-01 at 00:24

            Looks like Helm stable/redis has support for ReJSON, as stated in the following PR (#7745):

            This allows stable/redis to offer a higher degree of flexibility for those who may need to run images containing redis modules or based on a different linux distribution than what is currently offered by bitnami.

            Several interesting test cases:

            • [...]
            • helm upgrade --install redis-test ./stable/redis --set image.repository=redislabs/rejson --set image.tag=latest

            The stable/redis-ha chart also has a PR (#7323) that may make it compatible with ReJSON (a hedged sketch follows the quote below):

            This also removes dependencies on very specific redis images thus allowing for use of any redis images.
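
            A sketch of the equivalent override for stable/redis-ha, analogous to the stable/redis command above; the image.repository and image.tag value names are an assumption, so check the chart's values.yaml before relying on them:

            helm upgrade --install redis-test stable/redis-ha \
              --set image.repository=redislabs/rejson \
              --set image.tag=latest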

            Source https://stackoverflow.com/questions/58170021

            QUESTION

            Why clustering on k8s through redis-ha doesn't work?
            Asked 2019-May-24 at 15:45

            I'm trying to create a Redis cluster along with Node.js (ioredis/cluster) but that doesn't seem to work.

            It's v1.11.8-gke.6 on GKE.

            I'm doing exactly what I was told in the ha-redis docs:

            ...

            ANSWER

            Answered 2019-Apr-28 at 17:48

            Not the best solution, but I figured I can just use Sentinel instead of finding another way (or maybe there is no other way). It has support in most languages so it shouldn't be very hard (except redis-cli; I can't figure out how to query the Sentinel server).
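
            For the redis-cli part, a sketch of querying Sentinel directly; Sentinel listens on port 26379 by default, and the master name mymaster is the common default and an assumption here:

            # Ask Sentinel for the current master address, and list the monitored masters.
            redis-cli -h <sentinel-host> -p 26379 SENTINEL get-master-addr-by-name mymaster
            redis-cli -h <sentinel-host> -p 26379 SENTINEL masters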

            This is how I got this done with ioredis (Node.js; sorry if you're not familiar with ES6 syntax):

            Source https://stackoverflow.com/questions/55857202

            QUESTION

            Terraform taint resource naming convention (v0.11.13)
            Asked 2019-May-22 at 22:40

            My module abc contains an instance of redis-ha deployed to Kubernetes via helm, courtesy of https://github.com/helm/charts/tree/master/stable/redis-ha. I want to taint this resource. When I run terraform state list I see the resource listed as:

            • module.abc.module.redis.helm_release.redis-ha[3]

            My understanding from https://github.com/hashicorp/terraform/issues/11570 is that the taint command pre-dates the resource naming convention shown in state list. As of v0.12 it will honour the same naming convention.

            I'm unfortunately not in a position to upgrade to v0.12.

            How do I go about tainting the resource module.abc.module.redis.helm_release.redis-ha[3] pre-v0.12?

            I'm happy to taint the entire redis-ha deployment.

            ...

            ANSWER

            Answered 2019-May-22 at 22:40

            In Terraform v0.11 and earlier, the taint command can work with that resource instance like this:
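
            A sketch of the pre-v0.12 syntax (not necessarily the original answer's exact command): the nested module path goes in the -module flag and the count index is appended to the resource name with a dot. Adjust it to match your terraform state list output.

            terraform taint -module=abc.redis helm_release.redis-ha.3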

            Source https://stackoverflow.com/questions/56203099

            QUESTION

            In Redis, is it better to store one key with JSON data as the value, or multiple keys each with a single value?
            Asked 2019-Feb-13 at 17:12

            I am developing a web application (Nginx+PHP7.3) that will use a Redis database to store some data (mainly to count things), and I have to decide how to store the data. What is important in my case is speed and performance, and also keeping operations per second low so the web application can handle many concurrent connections.

            Option 1: Store JSON data on a single key

            To save the data I would use a single SET operation, i.e.:

            ...

            ANSWER

            Answered 2019-Feb-12 at 20:20

            If you need to update individual fields and re-save it, option 1 isn't ideal because it doesn't handle concurrent writes properly.

            You should be able to use a HASH in Redis and use HINCRBY to increment individual keys in the hash. Couple that with a pipeline, and you would only make one request to Redis when updating multiple keys.

            You can use HGETALL to get all of the key/value pairs in the hash.
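
            A minimal sketch of the hash approach from the shell; the key and field names are hypothetical:

            # Initialise the counters, bump them individually, then read everything back.
            redis-cli HSET page:home views 0 clicks 0
            redis-cli HINCRBY page:home views 1
            redis-cli HINCRBY page:home clicks 1
            redis-cli HGETALL page:home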

            Source https://stackoverflow.com/questions/54640995

            QUESTION

            Kubernetes init containers run every hour
            Asked 2018-Aug-20 at 03:32

            I have recently set up redis via https://github.com/tarosky/k8s-redis-ha; this repo includes an init container, and I have included an extra init container in order to set up passwords etc.

            I am seeing some strange (and seemingly undocumented) behavior, whereby the init containers run as expected before the redis container starts, but then they run again roughly every hour. I have tested this behavior using a busybox init container (which does nothing) on deployments & statefulsets and see the same thing, so it is not specific to this redis pod.

            I have tested this on bare metal with k8s 1.6 and 1.8 with the same results; however, when applying init containers to GKE (k8s 1.7) this behavior does not happen. I can't see any flags on GKE's kubelet that would dictate this behavior.

            See below for kubectl describe pod output showing that the init containers are run even though the main pod has not exited/crashed.

            ...

            ANSWER

            Answered 2018-Aug-19 at 20:46

            If you are pruning away exited containers, then the container pruning/removal is a likely cause. In my testing, it appears that exited init containers which are removed from Docker Engine (hourly, or otherwise), such as with "docker system prune -f", will cause Kubernetes to re-launch the init containers. Is this the issue in your case, if it is still persisting?

            Also, see https://kubernetes.io/docs/concepts/cluster-administration/kubelet-garbage-collection/ for the kubelet garbage collection documentation, which appears to support these types of tasks (rather than needing to implement them yourself).
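
            A sketch of the kubelet garbage-collection flags from the linked documentation; the values here are only examples, and exact flag availability depends on the kubelet version:

            kubelet --minimum-container-ttl-duration=1h \
                    --maximum-dead-containers-per-container=2 \
                    --maximum-dead-containers=100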

            Source https://stackoverflow.com/questions/49247931

            QUESTION

            Golang channels getting stuck
            Asked 2018-Jul-24 at 19:52

            I am working with Go and Redis to dispatch a queue (channel) of messages to subscribers. I am aiming to create an auto-scaling solution that will spawn new goroutines (within a certain limit) as the queue gets larger. I have the code below:

            ...

            ANSWER

            Answered 2018-May-04 at 10:59

            Based on this

            I can see from other testing I'm running that this isn't just going slowly and that the messages just aren't being taken from the channel.

            I think you have a live lock in addRedisSender()

            The select statement will pseudo-randomly select one of the cases: either the killSwitch case or the <-messageChannel case. Except there's also another case, the default. This will always be taken when nothing else is ready, meaning the for { loop will spin forever, consuming all the resources and causing a live lock as the Go runtime tries to schedule more and more competing goroutines.

            If you remove the default: continue case then the for loop will block on the select until there's a message to read, or a kill signal.

            Source https://stackoverflow.com/questions/50129226

            QUESTION

            GKE Kubernetes Node Pool Upgrade very slow
            Asked 2018-Mar-22 at 23:40

            I am experimenting with GKE cluster upgrades in a 6-node (two node pools) test cluster before I try it on our staging or production cluster. Upgrading when I only had a 12-replica nginx deployment, the nginx ingress controller and cert-manager (as helm charts) installed took 10 minutes per node pool (3 nodes). I was very satisfied. I decided to try again with something that looks more like our setup. I removed the nginx deploy and added 2 Node.js deployments and the following helm charts: mongodb-0.4.27, mcrouter-0.1.0 (as a statefulset), redis-ha-2.0.0, and my own www-redirect-0.0.1 chart (a simple nginx which does redirects). The problem seems to be with mcrouter. Once the node starts draining, the status of that node changes to Ready,SchedulingDisabled (which seems normal) but the following pods remain:

            • mcrouter-memcached-0
            • fluentd-gcp-v2.0.9-4f87t
            • kube-proxy-gke-test-upgrade-cluster-default-pool-74f8edac-wblf

            I do not know why those two kube-system pods remain, but the mcrouter one is mine and it won't go away quickly enough. If I wait long enough (1 hour+) then it eventually works; I am not sure why. The current node pool (of 3 nodes) started upgrading 2 hours 46 minutes ago; 2 nodes are upgraded and the 3rd one is still upgrading, but nothing is moving... I presume it will complete in the next 1-2 hours. I tried to run the drain command with --ignore-daemonsets --force but it told me the node was already drained. I tried to delete the pods, but they just come back and the upgrade does not move any faster. Any thoughts?

            Update #1

            The mcrouter helm chart was installed like this:

            helm install stable/mcrouter --name mcrouter --set controller=statefulset

            The statefulset it created for the mcrouter part is:

            ...

            ANSWER

            Answered 2018-Mar-21 at 19:48

            That is a somewhat complex question and I am definitely not sure it is what I think it is, but... let's try to understand what is happening.

            You have an upgrade process and 6 nodes in the cluster. The system will upgrade them one by one, using drain to remove all workload from the node.

            The drain process itself respects your settings: the number of replicas and the desired state of the workload have a higher priority than the drain of the node itself.

            During the drain process, Kubernetes will try to schedule all your workload onto nodes where scheduling is available. Scheduling on a node which the system wants to drain is disabled; you can see that in its state - Ready,SchedulingDisabled.

            So the Kubernetes scheduler is trying to find the right place for your workload on all available nodes. It will wait as long as it needs to place everything described in your cluster configuration.

            Now the most important thing. You set replicas: 5 for your mcrouter-memcached. Because of podAntiAffinity it cannot run more than one replica per node, and a node that runs it must have enough resources, as calculated from the resources: block of the ReplicaSet.

            So, I think your cluster simply does not have enough resources to run a new replica of mcrouter-memcached on the remaining 5 nodes. For example, on the last node where a replica of it is still not running, you do not have enough memory because of other workloads.

            I think that if you set the replica count for mcrouter-memcached to 4, it will solve the problem (see the sketch below). Or you can try to use more powerful instances for that workload, or add one more node to the cluster; that should also help.
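
            A sketch of the quick version of that change; the StatefulSet name is an assumption based on the pod name above, and for a change that survives a helm upgrade the corresponding replica value should be set in the mcrouter chart instead:

            kubectl scale statefulset mcrouter-memcached --replicas=4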

            Hope I gave enough explanation of my logic; ask me if something is not clear to you. But first, please try to solve the issue with the provided solution :)

            Source https://stackoverflow.com/questions/49412702

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install redis-ha

            Create the image stream and build config:

            oc process -f build/redis-build.yml \
                -p REDIS_IMAGE_NAME=redis-ha \
                -p GIT_REPO=https://github.com/openlab-red/redis-ha.git \
                | oc create -f -

            Start the build:

            oc start-build redis-ha-build

            For a recommended setup that can resist more failures, set the replicas to 5 (the default) for Redis and Sentinel. With 5 or 6 sentinels, a maximum of 2 can go down for a failover to begin. With 7 sentinels, a maximum of 3 nodes can go down.
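
            A sketch of applying that recommendation after the deployment exists; the StatefulSet names are assumptions, so check oc get statefulset for the names actually created by the templates in this repository:

            oc scale statefulset redis --replicas=5
            oc scale statefulset sentinel --replicas=5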

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/openlab-red/redis-ha.git

          • CLI

            gh repo clone openlab-red/redis-ha

          • SSH

            git@github.com:openlab-red/redis-ha.git



            Try Top Libraries by openlab-red

            hashicorp-vault-for-openshift (Python)

            openshift-management (Shell)

            vault-secret-fetcher (Go)

            quarkus-mtls-quickstart (Java)