redis-enterprise-k8s-docs | page describes how to deploy Redis Enterprise | DevOps library
kandi X-RAY | redis-enterprise-k8s-docs Summary
This page describes how to deploy Redis Enterprise on Kubernetes using the Redis Enterprise Operator. The Redis Enterprise Operator supports two Custom Resource Definitions (CRDs): RedisEnterpriseCluster (REC) and RedisEnterpriseDatabase (REDB).
Top functions reviewed by kandi - BETA
- Run metrics on the given namespaces
- Run a shell command
- Return a list of non-existing namespaces
- Returns a list of namespaces to run on
- Collect data from a namespace
- Collect API resources
- Run a shell command with retries
- Runs a get_resource command to get a resource
- Return a list of pod names that are ready to be used
- Get k8s pods
Community Discussions
Trending Discussions on DevOps
QUESTION
I've set up GitLab CI and installed my gitlab-runner on an EC2 machine (Ubuntu Server 18.04 LTS, t2.micro), and when I push my code to start the build I get this.
But it stays stuck like this and after 1 hour it times out.
I really don't know what to do about this problem, given that I can successfully clone the project manually on my EC2 machine.
Any help is much appreciated if you have ever encountered this problem; thanks in advance.
...ANSWER
Answered 2022-Mar-22 at 08:28: Check your job config or your timeout.
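As a hedged illustration (not part of the original answer), the job-level timeout keyword in .gitlab-ci.yml is one place such a limit can be raised; the job name and value below are assumptions:

build-job:          # hypothetical job name
  timeout: 2h       # job-level timeout; project and runner timeouts may still cap it
  script:
    - echo "build steps here"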
QUESTION
I'm trying to use Podman to build an image of a Spring Boot project in IntelliJ. JetBrains' guide suggests to "Select TCP socket and specify the Podman API service URL in Engine API URL" within Build, Execution, Deployment > Docker (see https://www.jetbrains.com/help/idea/podman.html).
However, when I give it the TCP socket found in Podman's documentation (see https://docs.podman.io/en/latest/markdown/podman-system-service.1.html), IntelliJ says it cannot connect.
Here is the error that appears in the terminal:
...ANSWER
Answered 2022-Mar-17 at 16:22: I'm facing the same problem after a podman version upgrade.
It seems a version downgrade would be required to recover the containers, but I haven't tried it yet.
This issue suggests deleting the machine and creating it again, but the containers would be lost:
https://github.com/containers/podman/issues/13510
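For reference, a rough sketch of the "delete and recreate the machine" route mentioned above, using the standard podman machine subcommands (the machine name below is the default one and is an assumption; this discards the VM and any containers inside it):

podman machine stop
podman machine rm podman-machine-default   # default machine name assumed
podman machine init
podman machine start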
QUESTION
I was recently trying to create a Docker container and connect it with my SQL Developer, but I started facing some strange issues. I downloaded the Docker image using the pull command below:
...ANSWER
Answered 2021-Sep-19 at 21:17: There are two issues here:
- Oracle Database is not supported on ARM processors, only Intel. See here: https://github.com/oracle/docker-images/issues/1814
- Oracle Database Docker images are only supported with Oracle Linux 7 or Red Hat Enterprise Linux 7 as the host OS. See here: https://github.com/oracle/docker-images/tree/main/OracleDatabase/SingleInstance
Oracle Database ... is supported for Oracle Linux 7 and Red Hat Enterprise Linux (RHEL) 7. For more details please see My Oracle Support note: Oracle Support for Database Running on Docker (Doc ID 2216342.1)
The referenced My Oracle Support Doc ID goes on to say that the database binaries in their Docker image are built specifically for Oracle Linux hosts, and will also work on Red Hat. That's it.
Linux being as flexible as it is, lots of people have gotten the images to run on other flavors like Ubuntu with a bit of creativity, but only on x86 processors, and even then the results are not guaranteed by Oracle: you won't be able to get support or practical advice when (and in IT it's always when, not if) things don't work as expected. You might not even be able to tell when things aren't working as they should. This is a case where creativity is not particularly rewarded; if you want it to work and get meaningful help, my advice is to use the supported hardware architecture and operating system version. Anything else is a complete gamble.
QUESTION
I would like to add to my GitLab pipeline a stage that verifies that the person approving the MR is different from the person who created/merged it (for this to work, I checked the GitLab setting that says "Pipelines must succeed").
...ANSWER
Answered 2022-Feb-12 at 00:24: To avoid duplicate pipelines:
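The answer's snippet is not reproduced above; as a hedged sketch, GitLab's documented workflow:rules pattern for avoiding duplicate branch/merge-request pipelines looks roughly like this:

workflow:
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"   # run merge request pipelines
    - if: $CI_COMMIT_BRANCH && $CI_OPEN_MERGE_REQUESTS    # suppress duplicate branch pipelines
      when: never
    - if: $CI_COMMIT_BRANCH                               # otherwise run branch pipelines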
QUESTION
I'm currently setting up a CI/CD pipeline in Azure DevOps to deploy a NodeJS app on a Linux-hosted App Service (not a VM).
My build and deploy both go smoothly, BUT I need to make sure some packages are installed in the environment after the app has been deployed.
The issue is: whatever apt-get script I create after the deploy, I have to run it manually for it to actually take effect. In the pipeline log it appears to have been executed, though.
Here is the part of my YAML code responsible for the deploy; did I miss something?
...ANSWER
Answered 2022-Jan-26 at 16:26: For now, I went with a "startup.sh" file that I run manually after each deploy. I'm going to look into Docker later, though.
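One way to avoid that manual step (not from the original answer; the resource group and app names are placeholders) is to register the script as the App Service startup command, so it runs on every container start:

az webapp config set \
  --resource-group my-resource-group \
  --name my-node-app \
  --startup-file "startup.sh"   # placeholder names; adjust to your deployment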
QUESTION
I'm trying to make a pretty basic GitLab CI job.
I want:
When I push to develop, GitLab builds a Docker image with the tag "develop".
When I push to main, GitLab checks that the current commit has a tag and builds an image with that tag; otherwise the job is not triggered.
ANSWER
Answered 2022-Jan-24 at 19:45: GitLab CI/CD has multiple 'pipeline sources', and some of the predefined variables only exist for certain sources.
For example, if you simply push a new commit to the remote, the value of CI_PIPELINE_SOURCE will be push. For push pipelines, many of the predefined variables will not exist, such as CI_COMMIT_TAG, CI_MERGE_REQUEST_SOURCE_BRANCH_NAME, CI_EXTERNAL_PULL_REQUEST_SOURCE_BRANCH_NAME, etc.
However, if you create a Git tag either in the GitLab UI or via a git push --tags command, it will create a tag pipeline, and variables like CI_COMMIT_TAG will exist, but CI_COMMIT_BRANCH will not.
One variable that will always be present regardless of what triggered the pipeline is CI_COMMIT_REF_NAME. For push sources where the commit is tied to a branch, this variable holds the branch name. If the commit isn't tied to a branch (i.e. there was once a branch for that commit but it has since been deleted), it holds the full commit SHA. If the pipeline is for a tag, it holds the tag name.
For more information, read through the different pipeline sources (in the description of the CI_PIPELINE_SOURCE variable) and the other variables in the docs linked above.
What I would do is move this check into the script section, so we can make it as complex as we need: either immediately exit 0 so that the job neither runs nor fails, or run the rest of the script.
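A minimal sketch of that script-level check (the job name and the docker build command are assumptions, not from the original answer; CI_REGISTRY_IMAGE is GitLab's predefined registry path variable):

build-image:
  script:
    - |
      # pick an image tag, or exit cleanly when this ref should not build
      if [ "$CI_COMMIT_BRANCH" = "develop" ]; then
        TAG=develop
      elif [ -n "$CI_COMMIT_TAG" ]; then
        TAG="$CI_COMMIT_TAG"
      else
        echo "Nothing to build for $CI_COMMIT_REF_NAME"
        exit 0
      fi
      docker build -t "$CI_REGISTRY_IMAGE:$TAG" .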
QUESTION
Since Tomcat just unzips the EAR/WAR to the filesystem to serve the app, what is the benefit of using an EAR/WAR, and what are the drawbacks of just pushing the files directly into Tomcat's webapps directory?
ANSWER
Answered 2021-Dec-25 at 11:32: Tomcat supports WAR but not EAR. Anyway, I think your question is really about why we normally deploy the application packaged as a single WAR rather than as an exploded WAR (i.e. an exploded deployment).
The main advantages for me are:
It is easier to handle the deployment when you only need to deploy one file, versus deploying many files in an exploded WAR deployment.
Because only one file gets deployed, we can always be sure which version of the application is running. If we allow individual files to be deployed and someone updates several files to a different version, it becomes difficult to tell exactly which version the application is running.
There is already some discussion about this topic; you can refer to this and this for more information.
QUESTION
I have a CI setup using GitHub Actions workflows to run Cypress automated tests every time a merge is done on the repo. The installation steps work fine; however, I run into an issue when executing the Cypress command. Let me show you the code.
CI pipeline in .github/workflows
...ANSWER
Answered 2021-Dec-30 at 16:53: After searching for some time, it turns out I was using Cypress 8.7.0, which was causing the issue. I downgraded to Cypress 8.5.0 and it started working. Hope that helps anyone else having this issue.
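For reference, pinning the dependency to the version that worked for the answerer is a one-liner (versions as given in the answer):

npm install --save-dev cypress@8.5.0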
QUESTION
I just created a pipeline using a YAML file and I always get the error "/_Azure-Pipelines/templates/webpart.yml: (Line: 41, Col: 27, Idx: 1058) - (Line: 41, Col: 60, Idx: 1091): While parsing a block mapping, did not find expected key.". I already verified the indentation of my YAML file and it looks fine.
Below is my YAML file.
...ANSWER
Answered 2021-Dec-07 at 10:42: It was due to a missing quotation mark in the PublishBuildArtifacts@1 task, for the PathtoPublish input. I found this error by using the YAML extension provided by Red Hat.
Once you enable that extension and run the formatter for YAML (Shift + Alt + F), it should show you the errors in your YAML file.
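For illustration only (this is not the asker's actual file), a PublishBuildArtifacts@1 task with the PathtoPublish value properly quoted might look like this; the path and artifact name are assumptions:

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'   # assumed path; note the quotes
    ArtifactName: 'drop'                                 # assumed artifact name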
QUESTION
name: deploy-me
on: [push]
jobs:
  deploys-me:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
      - run: npm install
      - run: npm run dev
# Next I want to copy some files from this repo, commit them to a different repo, and push them
...ANSWER
Answered 2021-Dec-05 at 09:57:

name: deploy-me
'on':
  - push
jobs:
  deploy-me:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-node@v2
        with:
          node-version: '14'
        env:
          ACCESS_TOKEN: '${{ secrets.ACCESS_TOKEN }}'
      - run: npm install
      - run: npm run build
      - run: |
          cd lib
          git config --global user.email "xxx@gmail.com"
          git config --global user.name "spark"
          git config --global credential.helper cache
          git clone https://${{secrets.ACCESS_TOKEN}}@github.com/sparkdevv/xxxxxx
          cp index.js clonedFolder/ -f
          cd clonedFolder
          git add .
          git commit -m "$(date)"
          git push
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install redis-enterprise-k8s-docs
Create a new namespace. Note: For the purpose of this doc, we'll use the name "demo" for our cluster's namespace.

kubectl create namespace demo

Switch context to the newly created namespace:

kubectl config set-context --current --namespace=demo
Deploy the operator bundle. To deploy the default installation with kubectl, the following command will deploy a bundle of all the yaml declarations required for the operator:

kubectl apply -f bundle.yaml

Alternatively, to run each of the declarations of the bundle individually, run the following commands instead of applying the bundle:

kubectl apply -f role.yaml
kubectl apply -f role_binding.yaml
kubectl apply -f service_account.yaml
kubectl apply -f crds/v1/rec_crd.yaml
kubectl apply -f crds/v1alpha1/redb_crd.yaml
kubectl apply -f admission-service.yaml
kubectl apply -f operator.yaml

Run kubectl get deployment and verify that the redis-enterprise-operator deployment is running. A typical response may look like this:

NAME                        READY   UP-TO-DATE   AVAILABLE   AGE
redis-enterprise-operator   1/1     1            1           2m
Redis Enterprise Cluster custom resource - RedisEnterpriseCluster

Create a RedisEnterpriseCluster (REC) using the default configuration, which is suitable for development-type deployments and works in typical scenarios. The full list of attributes supported through the Redis Enterprise Cluster (REC) API can be found HERE. Some examples can be found in the examples folder.

kubectl apply -f examples/v1/rec.yaml

Note: The operator can only manage one Redis Enterprise Cluster custom resource per namespace. To deploy additional Enterprise Clusters in the same Kubernetes cluster, deploy an operator in an additional namespace for each additional Enterprise Cluster required. Note that each Enterprise Cluster can effectively host hundreds of Redis database instances. Deploying multiple clusters is typically used for scenarios where complete operational isolation is required at the cluster level.
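For orientation, a minimal REC manifest along the lines of examples/v1/rec.yaml might look like the sketch below; the node count is an assumption, and the REC API reference linked above is the authoritative schema:

apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec
spec:
  nodes: 3   # assumed node count for a small cluster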
Run kubectl get rec and verify that creation was successful (rec is a shortcut for RedisEnterpriseCluster). The cluster takes around 5-10 minutes to come up. A typical response may look like this:

NAME   AGE
rec    5m

Note: Once the cluster is up, the cluster GUI and API can be used to configure databases, but it is recommended to use the K8s REDB API that is configured through the following steps. To configure the cluster using the cluster GUI/API, use the ui service created by the operator and the default credentials set in a secret. The secret name is the same as the cluster name within the namespace.
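A hedged example of reading those default credentials, assuming the cluster (and therefore the secret) is named rec and that the secret holds username and password keys:

kubectl get secret rec -o jsonpath='{.data.username}' | base64 --decode
kubectl get secret rec -o jsonpath='{.data.password}' | base64 --decode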
Redis Enterprise Database (REDB) Admission Controller

The Admission Controller is recommended for use. It uses the Redis Enterprise Cluster to dynamically validate that REDB resources configured by the operator are valid. Note: Redis Labs' Redis Enterprise Operator can also be installed through the Gesher Admission Proxy. Steps to configure the Admission Controller:

Wait for the secret to be created:

kubectl get secret admission-tls
NAME            TYPE     DATA   AGE
admission-tls   Opaque   2      2m43s

Enable the Kubernetes webhook using the generated certificate. Save the certificate into a local environment variable:

CERT=`kubectl get secret admission-tls -o jsonpath='{.data.cert}'`

Create the webhook configuration, substituting the namespace of the service account:

sed 's/NAMESPACE_OF_SERVICE_ACCOUNT/demo/g' admission/webhook.yaml | kubectl create -f -

Create a patch file:

cat > modified-webhook.yaml <<EOF
webhooks:
- name: redb.admission.redislabs
  clientConfig:
    caBundle: $CERT
  admissionReviewVersions: ["v1beta1"]
EOF

Patch the validating webhook with the certificate (caBundle):

kubectl patch ValidatingWebhookConfiguration redb-admission --patch "$(cat modified-webhook.yaml)"

Note: If you're not using multiple namespaces, you may skip to the "Verify the installation" step.

Limiting the webhook to the relevant namespaces: Unless limited, webhooks will intercept requests from all namespaces. If you have several REC objects on your K8s cluster, you need to limit the webhook to the relevant namespace. This is done by adding a namespaceSelector to the webhook spec that targets a label found on the namespace. First, make sure you have such a label on the namespace and that it is unique to this namespace, e.g.:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    namespace-name: staging
  name: staging

Then, patch the webhook with a namespaceSelector. See this example:

cat > modified-webhook.yaml <<EOF
webhooks:
- name: redb.admission.redislabs
  namespaceSelector:
    matchLabels:
      namespace-name: staging
EOF

Apply the patch:

kubectl patch ValidatingWebhookConfiguration redb-admission --patch "$(cat modified-webhook.yaml)"

Verify the installation. To verify that all the components of the Admission Controller are installed correctly, we will try to apply an invalid resource that should force the admission controller to reject it. If it applies successfully, the admission controller has not been hooked up correctly.

$ kubectl apply -f - << EOF
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseDatabase
metadata:
  name: redis-enterprise-database
spec:
  evictionPolicy: illegal
EOF

This must fail with an error output by the admission webhook redb.admission.redislabs, denying the request because 'illegal' is not a valid eviction policy:

Error from server: error when creating "STDIN": admission webhook "redb.admission.redislabs" denied the request: eviction_policy: u'illegal' is not one of [u'volatile-lru', u'volatile-ttl', u'volatile-random', u'allkeys-lru', u'allkeys-random', u'noeviction', u'volatile-lfu', u'allkeys-lfu']

Note: The procedure to enable admission is documented in further detail here.
Redis Enterprise Database custom resource - RedisEnterpriseDatabase

Create a RedisEnterpriseDatabase (REDB) using a custom resource. The Redis Enterprise Operator can be instructed to manage databases on the Redis Enterprise Cluster using the REDB custom resource. Example:

cat << EOF > /tmp/redis-enterprise-database.yml
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseDatabase
metadata:
  name: redis-enterprise-database
spec:
  memorySize: 100MB
EOF
kubectl apply -f /tmp/redis-enterprise-database.yml

Replace the name of the cluster with the one used in the current namespace. All REDB configuration options are documented here.
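Regarding "replace the name of the cluster": the REDB spec can reference its target cluster explicitly. A hedged sketch follows; the redisEnterpriseCluster field and the cluster name rec are assumptions to check against the REDB reference linked above:

apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseDatabase
metadata:
  name: redis-enterprise-database
spec:
  redisEnterpriseCluster:
    name: rec        # assumed cluster name; use the REC in your namespace
  memorySize: 100MB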
The "OpenShift" installation deploys the operator from the current release with the RHEL image from DockerHub and default OpenShift settings. This is the fastest way to get up and running with a new cluster on OpenShift 3.x. For OpenShift 4.x, you may choose to use OLM deployment from within your OpenShift cluster or follow the steps below. Other custom configurations are referenced in this repository. If you are running on OpenShift 3.x, use the bundle.yaml file located under openshift_3_x folder (see comment in step 4). That folder also contains the custom resource definitions compatible with OpenShift 3.x. Note: you will need to replace <my-project> with your project name.
Create a new project:

oc new-project my-project
Perform the following commands (you need cluster admin permissions for your Kubernetes cluster):

oc apply -f openshift/scc.yaml

You should receive the following response:

securitycontextconstraints.security.openshift.io "redis-enterprise-scc" configured
Provide the operator permissions for pods (substitute your project for "my-project"):

oc adm policy add-scc-to-user redis-enterprise-scc system:serviceaccount:my-project:redis-enterprise-operator
oc adm policy add-scc-to-user redis-enterprise-scc system:serviceaccount:my-project:rec

Note: Replace rec at the end of the second command with the name of your RedisEnterpriseCluster, if different (see the "Redis Enterprise Cluster custom resource" step below).
Deploy the OpenShift operator bundle. NOTE: Update the storageClassName setting in openshift.bundle.yaml (by default it is set to gp2).

oc apply -f openshift.bundle.yaml

Note: If you are running on OpenShift 3.x, use the bundle.yaml file located under the openshift_3_x folder.
Redis Enterprise Cluster custom resource - RedisEnterpriseCluster

Apply the RedisEnterpriseCluster resource with RHEL7-based images:

oc apply -f openshift/rec_rhel.yaml
Redis Enterprise Database (REDB) Admission Controller

The Admission Controller is recommended for use. It uses the Redis Enterprise Cluster to dynamically validate that REDB resources configured by the operator are valid. Steps to configure the Admission Controller:

Wait for the secret to be created by the operator bundle deployment:

kubectl get secret admission-tls
NAME            TYPE     DATA   AGE
admission-tls   Opaque   2      2m43s

Enable the Kubernetes webhook using the generated certificate:

# save cert
CERT=`kubectl get secret admission-tls -o jsonpath='{.data.cert}'`
sed 's/NAMESPACE_OF_SERVICE_ACCOUNT/demo/g' admission/webhook.yaml | kubectl create -f -

# create patch file
cat > modified-webhook.yaml <<EOF
webhooks:
- name: redb.admission.redislabs
  clientConfig:
    caBundle: $CERT
  admissionReviewVersions: ["v1beta1"]
EOF

# patch webhook with caBundle
oc patch ValidatingWebhookConfiguration redb-admission --patch "$(cat modified-webhook.yaml)"

Note: If you're not using multiple namespaces, you may skip to the "Verify the installation" step.

Limiting the webhook to the relevant namespaces: Unless limited, webhooks will intercept requests from all namespaces. If you have several REC objects on your K8s cluster, you need to limit the webhook to the relevant namespace. This is done by adding a namespaceSelector to the webhook spec that targets a label found on the namespace. First, make sure you have such a label on the namespace and that it is unique to this namespace, e.g.:

apiVersion: v1
kind: Namespace
metadata:
  labels:
    namespace-name: staging
  name: staging

Then, patch the webhook with a namespaceSelector. See this example:

cat > modified-webhook.yaml <<EOF
webhooks:
- name: redb.admission.redislabs
  namespaceSelector:
    matchLabels:
      namespace-name: staging
EOF

Apply the patch:

oc patch ValidatingWebhookConfiguration redb-admission --patch "$(cat modified-webhook.yaml)"

Verify the installation. To verify that all the components of the Admission Controller are installed correctly, we will try to apply an invalid resource that should force the admission controller to reject it. If it applies successfully, the admission controller has not been hooked up correctly.

$ oc apply -f - << EOF
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseDatabase
metadata:
  name: redis-enterprise-database
spec:
  evictionPolicy: illegal
EOF

This must fail with an error output by the admission webhook redb.admission.redislabs, denying the request because 'illegal' is not a valid eviction policy:

Error from server: error when creating "STDIN": admission webhook "redb.admission.redislabs" denied the request: eviction_policy: u'illegal' is not one of [u'volatile-lru', u'volatile-ttl', u'volatile-random', u'allkeys-lru', u'allkeys-random', u'noeviction', u'volatile-lfu', u'allkeys-lfu']

Note: The procedure to enable admission is documented in further detail here (admission/README.md).
Redis Enterprise Database custom resource - RedisEnterpriseDatabase

Create a RedisEnterpriseDatabase (REDB) using a custom resource. The Redis Enterprise Operator can be instructed to manage databases on the Redis Enterprise Cluster using the REDB custom resource. Example:

cat << EOF > /tmp/redis-enterprise-database.yml
apiVersion: app.redislabs.com/v1alpha1
kind: RedisEnterpriseDatabase
metadata:
  name: redis-enterprise-database
spec:
  memorySize: 100MB
EOF
oc apply -f /tmp/redis-enterprise-database.yml

Replace the name of the cluster with the one used in the current namespace. All REDB configuration options are documented here.
Instructions on how to deploy the Operator on PKS can be found on the Redis Labs documentation website.
The Operator automates and simplifies the upgrade process. The Redis Enterprise Cluster software and the Redis Enterprise Operator for Kubernetes versions are tightly coupled and should be upgraded together. It is recommended to use the bundle.yaml to upgrade, as it loads all the relevant CRD documents for this version. If the updated CRDs are not loaded, the operator might fail. There are two ways to upgrade: either set 'autoUpgradeRedisEnterprise' within the Redis Enterprise Cluster spec to instruct the operator to automatically upgrade to the compatible version, or specify the correct Redis Enterprise image manually using the versionTag attribute. The Redis Enterprise version compatible with this release is 6.2.8-64.
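A hedged sketch of what those two options look like inside the REC spec; treat the exact field names as assumptions to verify against the CRD and the REC API reference:

apiVersion: app.redislabs.com/v1
kind: RedisEnterpriseCluster
metadata:
  name: rec
spec:
  # option 1: let the operator upgrade to the compatible version automatically
  autoUpgradeRedisEnterprise: true
  # option 2: pin the Redis Enterprise image explicitly instead
  # redisEnterpriseImageSpec:
  #   versionTag: 6.2.8-64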