kandi X-RAY | zero-downtime Summary
zero-downtime
Community Discussions
Trending Discussions on zero-downtime
QUESTION
I have a web service using WebSockets and need to implement zero-downtime deployment. Because I don't want to drop existing connections on deploy, I've decided to implement a blue/green deploy. My current solution looks like this:
- I've created two identical services in Portainer, listening on different ports. Each service has an identifier set in its Node environment, for example alfa and beta
- Both services are hidden behind a load balancer, which periodically checks the status of each service. If a service responds on a specific route (/balancer-keepalive-check) with the string "OK", it is active and the balancer can route to it. If a service responds with the string "STOP", the balancer marks it as inaccessible, but active connections are preserved
- Which service is active and which is stopped is synced over Redis. In Redis there are keys lb.service.alfa and lb.service.beta, which can contain the value 1 for active or 0 for inactive. Example implementation of the /balancer-keepalive-check route in NestJS:
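A hedged reconstruction of what such a route could look like, since the question's actual code is not shown. Assumptions: the instance identifier ("alfa" or "beta") is exposed via a SERVICE_ID environment variable, and the node-redis v4 client is used; the key names and response strings match the question.

```typescript
import { Controller, Get } from "@nestjs/common";
import { createClient } from "redis";

const redis = createClient({ url: "redis://localhost:6379" });
redis.connect().catch(console.error);

@Controller()
export class AppController {
  // The load balancer polls this route: "OK" means route traffic here,
  // "STOP" means mark the service inaccessible but keep open connections.
  @Get("balancer-keepalive-check")
  async keepAliveCheck(): Promise<string> {
    const value = await redis.get(`lb.service.${process.env.SERVICE_ID}`);
    return value === "1" ? "OK" : "STOP";
  }
}
```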
ANSWER
Answered 2021-Dec-02 at 07:39
I've modified my AppController. There are 2 new endpoints now: one to identify which service is running, and a second to switch the value in Redis:
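The answer's code itself was not included; here is a hedged sketch of what those two endpoints could look like. The route paths /whoami and /switch, the SERVICE_ID variable, and the node-redis client are assumptions.

```typescript
import { Controller, Get } from "@nestjs/common";
import { createClient } from "redis";

const redis = createClient({ url: "redis://localhost:6379" });
redis.connect().catch(console.error);

@Controller()
export class AppController {
  // Identifies which service instance is handling the request.
  @Get("whoami")
  whoAmI(): string {
    return process.env.SERVICE_ID ?? "unknown";
  }

  // Marks this instance as active (1) and the other as inactive (0) in
  // Redis; a MULTI/EXEC transaction would make the two writes atomic.
  @Get("switch")
  async switchActive(): Promise<string> {
    const me = process.env.SERVICE_ID;
    const other = me === "alfa" ? "beta" : "alfa";
    await redis.set(`lb.service.${me}`, "1");
    await redis.set(`lb.service.${other}`, "0");
    return `active: ${me}`;
  }
}
```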
QUESTION
As mentioned in this answer, a Deployment allows for easy updating of a ReplicaSet as well as the ability to roll back to a previous deployment.
So kind: Deployment scales ReplicaSets, which scale Pods, and supports zero-downtime updates by creating and destroying ReplicaSets.
What is the purpose of the HorizontalPodAutoscaler resource type?
ANSWER
Answered 2021-Oct-03 at 15:20
As you write, with a Deployment it is easy to manually scale an app horizontally by changing the number of replicas.
By using a HorizontalPodAutoscaler, you can automate the horizontal scaling by e.g. configuring some metric thresholds, hence the name autoscaler.
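As an illustration, here is a minimal sketch of an autoscaling/v2 HorizontalPodAutoscaler; the target Deployment name demo and the thresholds are hypothetical. It scales between 2 and 10 replicas to hold average CPU utilization around 70%.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa
spec:
  scaleTargetRef:          # the workload being autoscaled
    apiVersion: apps/v1
    kind: Deployment
    name: demo
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```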
QUESTION
We're operating Eclipse Hono and would like to perform zero-downtime updates on all components in our cluster.
For authentication between the different Eclipse Hono components we use the Hono Auth Service.
There we configured a shared secret (HONO_AUTH_SVC_SIGNING_SHARED_SECRET) to be used for signing the issued tokens.
Consuming services (e.g. Command Router / MongoDB Device Registry) are configured with the same secret.
When changing the shared secret, we simultaneously need to restart all instances of the mentioned microservices, which leads to a short downtime. If we performed a rolling update instead, the old instances would not validate the tokens issued by instances already running with the new shared secret.
Has anyone had the same issue, or does anyone know how to perform a zero-downtime update?
One option to solve our problem would be the ability to configure, next to the HONO_AUTH_VALIDATION_SHARED_SECRET, another secret (HONO_AUTH_VALIDATION_SHARED_SECRET_FALLBACK) which would be tried if the primary fails.
Like this we could perform a rolling update of all components without downtime.
Using a certificate instead of the shared secret has, as far as I can see, the same restriction.
Thanks, Chris
...ANSWER
Answered 2021-May-27 at 09:56
I also do not see any option to cycle the shared secret in the current implementation without incurring downtime. For this to work, Hono's components would need to support configuration of multiple shared secrets for validation of the tokens, as you correctly pointed out. Maybe you want to open an issue for this with Hono?
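To make the proposed fallback concrete, here is a hedged sketch, not Hono's actual implementation: validation tries the primary shared secret first and retries with a fallback, so old and new instances accept each other's tokens during a rolling update. The jsonwebtoken package and the fallback variable name (taken from the question's proposal) are assumptions.

```typescript
import * as jwt from "jsonwebtoken";

const primarySecret = process.env.HONO_AUTH_VALIDATION_SHARED_SECRET!;
const fallbackSecret = process.env.HONO_AUTH_VALIDATION_SHARED_SECRET_FALLBACK;

export function validateToken(token: string): string | jwt.JwtPayload {
  try {
    return jwt.verify(token, primarySecret);
  } catch (err) {
    if (fallbackSecret) {
      // Tokens signed by instances still running the previous secret.
      return jwt.verify(token, fallbackSecret);
    }
    throw err;
  }
}
```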
QUESTION
Trying to set up a zero-downtime deployment using docker stack deploy on a single-node Docker Swarm in a localhost environment.
After building the image demo:latest, the first deployment using the command docker stack deploy --compose-file docker-compose.yml demo works: I can see 4 replicas running and can access the nginx default home page on port 8080 on my local machine. Now, after updating index.html and building the image with the same name and tag, running the docker stack deploy command causes the error below and the changes are not reflected.
Deleting the deployment and recreating it works, but I am trying to see how updates can be rolled in without downtime. Please help here.
Error
...ANSWER
Answered 2021-Apr-06 at 22:33
TL;DR: push your image to a registry after you build it.
Docker Swarm doesn't really work without a public or private Docker registry. Basically, all the nodes need to get their images from the same place, and the registry is the mechanism by which that information is shared. There are other ways to get images loaded onto each node in the swarm, but they involve executing the same commands on every node, one at a time, which isn't great.
Alternatively, you could use Docker configs for your configuration data and not rebuild the image every time. That would work passably well without a registry, and you can swap out the config data with little to no downtime:
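A minimal sketch of that approach (service and config names are illustrative): nginx serves an index.html that comes from a swarm config instead of being baked into the image, so content changes need no image rebuild.

```yaml
version: "3.8"
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    deploy:
      replicas: 4
    configs:
      - source: index_html
        target: /usr/share/nginx/html/index.html
configs:
  index_html:
    file: ./index.html
```

Because swarm configs are immutable, updating the content means defining a new config name in the stack file (say index_html_v2), pointing the service at it, and re-running docker stack deploy; swarm then rolls the change across the replicas without rebuilding the image.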
QUESTION
I've been struggling to deploy my containers to Docker Swarm on Ubuntu Server 20.04. I'm trying to use Docker Swarm on a single VPS host for zero-downtime deployments.
Running the containers with docker-compose, everything works.
Now I'm trying to deploy the same docker-compose file to Docker Swarm.
...ANSWER
Answered 2021-Apr-01 at 12:25
The problem was with the hosting provider.
The provider told us that other customers had tried to configure Docker Swarm on their VPSes too, but no one had figured out how to get it to work.
The provider didn't allow any kernel modifications or other low-level changes.
We are now using another hosting provider and everything works fine.
QUESTION
Quarkus is great, but you can't do zero-downtime deployments. Or can you?
My experience with Quarkus is limited to a simple RESTful web app, running natively as its own container, with no Jetty and no Tomcat, so it runs on its own.
The issue is that, without being hosted inside an application server (like NGINX Unit, which provides zero-downtime deployments out of the box), deploying Quarkus web apps would be very painful, with almost 100% downtime, unless you do some clever tricks.
My question here is: Can you have Quarkus-based web app deployments that can be zero-downtime? If yes, how?
...ANSWER
Answered 2021-Mar-25 at 09:29
There are no "clever tricks" to zero-downtime deployment. There's a simple principle everyone uses (I'm pretty sure NGINX Unit is no different): you front your application with a load balancer. (I heard nginx is a good one...)
In order to update, you:
- keep the old version running and keep the load balancer pointed at it;
- start a new version;
- when it's fully started, redirect the traffic on the load balancer from old version to new version (there are multiple variants of this, you can redirect the traffic all at once or gradually, you can do session draining, etc.);
- when the old version is no longer used, stop it and remove it.
Quarkus is well suited to running in Kubernetes, which provides zero downtime deployments out of the box (using the same principle I described above).
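For the Kubernetes route, a minimal sketch of what "out of the box" means (the image name and port are hypothetical, and the readiness path assumes the quarkus-smallrye-health extension): a Deployment with a RollingUpdate strategy starts a new pod, waits until it reports ready, and only then stops an old one.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: quarkus-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: quarkus-app
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0   # never drop below the desired replica count
      maxSurge: 1         # start one new pod before stopping an old one
  template:
    metadata:
      labels:
        app: quarkus-app
    spec:
      containers:
        - name: app
          image: example/quarkus-app:1.1.0   # hypothetical image
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /q/health/ready   # Quarkus SmallRye Health endpoint
              port: 8080
```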
QUESTION
I'm currently trying to perform the first deploy of my app with Dokku. Unfortunately, I get an error:
...ANSWER
Answered 2021-Feb-25 at 12:27
OK, the problem was quite silly. The puma gem was in the development group and therefore was not available in production to launch the server.
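For anyone hitting the same thing, a minimal Gemfile sketch of the fix (the dev-only gem shown is just an example): puma has to sit at the top level so it is installed in every environment, including production.

```ruby
# Top level: installed in all environments, so production can boot the server.
gem "puma"

group :development do
  gem "web-console"   # dev-only gems stay inside the group
end
```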
QUESTION
Before redeploying the application WAR, I checked the xd.lck file from one of the environment paths:
...ANSWER
Answered 2020-Dec-23 at 13:17
The solution is for the Java application to look up the process locking the file and then send it a kill -15 signal, for example, so the Java process can handle the signal gracefully and close its environments:
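The answer's code was not included; assuming the environment is a JetBrains Xodus Environment (which the xd.lck file suggests; the path is illustrative), a minimal Java sketch of handling SIGTERM via a JVM shutdown hook could look like this:

```java
import jetbrains.exodus.env.Environment;
import jetbrains.exodus.env.Environments;

public class GracefulShutdown {
    public static void main(String[] args) {
        final Environment env = Environments.newInstance("/data/myapp-environment");
        Runtime.getRuntime().addShutdownHook(new Thread(() -> {
            // Invoked on SIGTERM (kill -15): close the environment so the
            // xd.lck lock is released and the redeployed app can open it.
            env.close();
        }));
        // ... application work ...
    }
}
```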
QUESTION
I have a containerized Node app which runs on a DigitalOcean server. When I update the app on the server, it has to go down for a short time. In order to update the app and avoid this downtime, I am currently reading about zero-downtime / blue-green deployment, with the intention of integrating Docker Swarm and Kubernetes as soon as I am more confident in my ability to use them.
But there is something that really confuses me when I imagine my app being replicated across several nodes. In my application, a User can define some Rules. So, for example, every day at 11AM, I want an email to be sent to Bob.
When my application starts, it fetches all CronTriggers from the database and builds CronTrigger objects that live in the app.
...ANSWER
Answered 2020-Apr-27 at 22:15
Yes, you fix this by either moving the actual cron execution to a separate daemon that you run only one copy of, or by using some kind of leader-election system so that only one of the copies runs the jobs at any given time.
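A hedged sketch of the leader-election variant, not taken from the answer. Assumptions: the node-cron and redis (v4) npm packages, the cron:leader key name, and the 60-second lock TTL. Each replica tries to take a Redis lock before running the job, so at most one copy executes it.

```typescript
import { createClient } from "redis";
import * as cron from "node-cron";

const redis = createClient({ url: "redis://localhost:6379" });
const instanceId = `${process.pid}-${Math.random()}`;

// Runs the job only if this replica wins the lock. NX sets the key only
// if it does not exist; PX expires it so a crashed leader gets replaced.
async function runIfLeader(job: () => Promise<void>): Promise<void> {
  const acquired = await redis.set("cron:leader", instanceId, {
    NX: true,
    PX: 60_000,
  });
  if (acquired === "OK") {
    await job();
  }
}

async function main() {
  await redis.connect();
  // Every day at 11:00, at most one replica sends the email to Bob.
  cron.schedule("0 11 * * *", () => {
    void runIfLeader(async () => {
      // send the email here
    });
  });
}

main().catch(console.error);
```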
QUESTION
I am trying to configure IIS for zero-downtime deployments per this blog (green/blue app deployments). I have set up Application Request Routing (3.0) and URL Rewrite, but after setting up the websites and the server farm, I see no "Route to Server Farm" option in the rewrite rules.
This is what I was expecting to find based on the instructions.
I have completed the following steps on IIS 10 (Windows 10) and IIS 8.5 (Server 2012 R2):
- Installed Application Request Routing 3.0 (I have also tried 2.5, unsuccessfully)
- Set up 2 different IIS sites for my prod (green) and stage (blue) deployments, and confirmed they work when accessed directly
- Created a web farm in IIS and added the 2 servers
- When trying to set up the URL Rewrite rule, I expected to see the "Route to Server Farm" action type, but I see only these options:
ANSWER
Answered 2020-Mar-10 at 19:28
The issue was that I was trying to add the URL Rewrite rule at the site level instead of the server level.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported