docker-swarm | blog series on Docker Swarm example using VirtualBox | Continuous Deployment library
kandi X-RAY | docker-swarm Summary
This repository is part of a blog series on Docker Swarm examples using VirtualBox, OVH OpenStack, Microsoft Azure and Amazon Web Services (AWS). Feel free to fork my code and have a look at my blog series.
Community Discussions
Trending Discussions on docker-swarm
QUESTION
I built one manager node and one worker node with docker-swarm.
A project was created through docker-compose.
I wanted to run a command in container "A" on the worker node from the manager node.
So I entered the command:
...ANSWER
Answered 2022-Mar-23 at 09:00
A lot of Docker resources, such as containers and volumes, are not global and can't be accessed from a swarm manager, only from the node that hosts the resource.
This means that accessing the resource is a two-step process: typically you interrogate the swarm managers for some swarm resource (such as a service) that gives you information about the local resource of interest (such as a node and container ID).
To actually access each node, ssh is the correct way, but docker can be used directly: the DOCKER_HOST environment variable, or the -H parameter, can take an ssh URI.
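A sketch of that two-step flow (the service, node, and container names below are placeholder examples):

```shell
# Step 1: on a manager, find which node runs the task and the task ID
docker service ps --no-trunc --format '{{.Node}} {{.Name}}.{{.ID}}' my_service

# Step 2: point the docker CLI at that node over ssh and exec into the container
DOCKER_HOST=ssh://user@worker-1 docker ps --filter name=my_service
DOCKER_HOST=ssh://user@worker-1 docker exec -it my_service.1.abc123 sh

# Equivalent form using -H instead of DOCKER_HOST
docker -H ssh://user@worker-1 exec -it my_service.1.abc123 sh
```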
QUESTION
I have an async FastAPI application with async SQLAlchemy; source code below (I will not provide schemas.py because it is not necessary):
database.py
...ANSWER
Answered 2021-Dec-24 at 08:07
I will quote the answer from here; I think it might be useful. All credit to q210.
In our case, the root cause was that ipvs, used by swarm to route packets, has a default expiration time of 900 seconds for idle connections. So if a connection had no activity for more than 15 minutes, ipvs broke it. 900 seconds is significantly less than the default Linux TCP keepalive setting (7200 seconds) used by most of the services that can send keepalive TCP packets to keep connections from going idle.
The same problem is described here moby/moby#31208
To fix this we had to set the following in postgresql.conf:
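The concrete values are not included above; based on the linked moby issue, the usual fix is to make Postgres send TCP keepalives well within the 900-second ipvs window. A sketch (the values here are illustrative, not the poster's exact settings):

```ini
# postgresql.conf -- send keepalives before ipvs's 900 s idle timeout
tcp_keepalives_idle = 600      # start probing after 10 min of idle
tcp_keepalives_interval = 30   # probe every 30 s
tcp_keepalives_count = 3       # drop the connection after 3 failed probes
```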
QUESTION
I'm creating worker_threads in node and collecting them in a custom WorkerPool. All workers are unique because they have a unique worker.threadId. My app has the ability to terminate a specific worker -- I have a terminateById() method in WorkerPool.
So if you have one node.js instance, everything is all right. But if you're trying to use docker-swarm or Kubernetes, you will have n different WorkerPool instances. So, for example, you have created some workers in one node instance and now you're trying to terminate one -- that means you have a request carrying a threadId (or other unique data to identify the worker). But suppose your load balancer has chosen another node instance for this request; in that instance you have no workers.
At first I thought I could change the unique index for a worker to something like userId+threadId and then store it in redis, for example. But then I couldn't find any info about something like Worker.findByThreadID(). So what can I do when there are multiple node instances?
UPDATE: I have found some info about sticky sessions in load balancers. That means that, using cookies, we can stick a specific user to a specific node instance, but in my case this stickiness has to stay active until the worker is terminated. That can last for days.
...ANSWER
Answered 2021-Nov-10 at 16:44
So, I have two answers.
- You can use sticky sessions in your load balancer in order to route a specific user's requests to a specific node.js instance.
- You can store worker statuses plus the node.js instance id in redis or any other store. When you get a stopWorker request, you fetch from redis the node instance where the worker was initialised. Then you use any message broker to notify all node instances; the message consists of nodeInstanceId and workerId, every instance checks whether it is the target, and if so it goes to its WorkerPool and terminates the worker by id.
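A minimal sketch of the second approach using redis-cli (the key names, channel name, and IDs are made up for illustration):

```shell
# When a worker is created, record which node.js instance owns it
redis-cli SET "worker:12345" "instance-2"

# On a stopWorker request, look up the owning instance...
OWNER=$(redis-cli GET "worker:12345")

# ...then broadcast; every instance subscribes, checks the instanceId,
# and only the owner calls terminateById() on its local WorkerPool
redis-cli PUBLISH workers:terminate "{\"instanceId\":\"$OWNER\",\"workerId\":12345}"
```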
QUESTION
Section 1
I am trying to execute "exec" on one of the containers in a stack or service. I have followed the answers provided here (execute a command within docker swarm service) as well as the official Docker documentation here https://docs.docker.com/engine/swarm/secrets/, but "docker container list" and its variants ("docker ps") do not find any container listed in the "docker service list".
See the example below from https://docs.docker.com/engine/swarm/secrets/, which demonstrates the use of "exec" on a container of a service.
1)
...ANSWER
Answered 2021-Sep-13 at 08:54
Solved (credit Chris Becke).
I needed to access the container of the service from the node the container is running on.
The swarm setup in Section 1 was a multi-node setup, while the swarm setup in Section 2 is a single-node (manager/worker) setup.
The container in Section 1 was dispatched to a worker node other than the one from which I was issuing exec commands.
I was able to successfully run the "exec" command on the service container once I logged into the worker node.
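The resulting workflow, sketched (stack, service, node, and task names are placeholders):

```shell
# From a manager: find which node is running the task
docker service ps --filter desired-state=running --format '{{.Node}}' my_stack_my_service

# SSH to that node; only there will "docker ps" and "docker exec" see the container
ssh user@worker-2
docker ps --filter name=my_stack_my_service
docker exec -it my_stack_my_service.1.xyz789 sh
```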
QUESTION
I'm running gitlab-ce on-prem with MinIO as a local S3 service. CI/CD caching is working, and basic connectivity with the S3-compatible MinIO is good. (Versions: gitlab-ce:13.9.2-ce.0
, gitlab-runner:v13.9.0
, and minio/minio:latest
currently c253244b6fb0
.)
Is there additional configuration to differentiate between job-artifacts and pipeline-artifacts and storing them in on-prem S3-compatible object storage?
In my test repo, the "build" stage builds a sparse R package. When I was using local in-gitlab job artifacts, it succeeds and moves on to the "test" and "deploy" stages, no problems. (And that works with S3-stored cache, though that configuration is solely within gitlab-runner
.) Now that I've configured minio as a local S3-compatible object storage for artifacts, though, it fails.
ANSWER
Answered 2021-Jun-14 at 18:30
The answer is to bypass the empty-string test; the underlying protocol does not support region-less configuration, nor is there a configuration option to support it.
The trick works because the use of 'endpoint' causes the 'region' to be ignored. So setting the region to an arbitrary value and forcing the endpoint allows it to work:
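In Omnibus GitLab, that configuration looks roughly like this in gitlab.rb (the endpoint, bucket, and credentials are placeholders; the region value is arbitrary but must be non-empty):

```ruby
gitlab_rails['artifacts_object_store_enabled'] = true
gitlab_rails['artifacts_object_store_remote_directory'] = 'artifacts'
gitlab_rails['artifacts_object_store_connection'] = {
  'provider'              => 'AWS',
  'region'                => 'us-east-1',          # arbitrary, but not empty
  'aws_access_key_id'     => 'MINIO_ACCESS_KEY',
  'aws_secret_access_key' => 'MINIO_SECRET_KEY',
  'endpoint'              => 'http://minio:9000',  # forces MinIO; region is then ignored
  'path_style'            => true
}
```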
QUESTION
I am trying to get a very simple docker-swarm going. If I start the container using docker-compose up -d
I am able to go to localhost and see the 'Hello message'.
Running docker stack deploy -c docker-compose.yml swar
starts the swarm fine.
docker ps
docker service ls
However, navigating to localhost, 0.0.0.0, 127.0.0.1 or the IP of the machine doesn't work; it just times out with 'This site can't be reached. took too long to respond.' I've also tried it with another small tutorial from GitHub, which has the same issue. Any ideas what is wrong?
server.js:
...ANSWER
Answered 2021-May-31 at 11:47
For anyone ever getting this sort of problem: the problem was the version of Docker. I removed 20.10.15 and reinstalled Docker 19.03.10. To install a specific Docker version instead of the latest, follow the steps at https://docs.docker.com/engine/install/ubuntu/.
localhost doesn't work, but run hostname -I to get the IP of your machine and paste that in. It will work then.
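Pinning a specific Docker version on Ubuntu looks roughly like this (the version string is an example; check the apt-cache madison output for what your repo actually offers):

```shell
# Remove the problematic version first
sudo apt-get remove -y docker-ce docker-ce-cli containerd.io

# List the versions available in the Docker apt repo
apt-cache madison docker-ce

# Install a pinned version (the string format comes from the madison output)
VERSION_STRING=5:19.03.10~3-0~ubuntu-focal
sudo apt-get install -y docker-ce=$VERSION_STRING docker-ce-cli=$VERSION_STRING containerd.io
```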
QUESTION
Set up two servers, one as a manager node and one as a worker node.
The worker node is labeled role_1=true.
The manager node is labeled role_2 and role_3.
...ANSWER
Answered 2021-May-30 at 15:10From placement-constraints: "If you specify multiple placement constraints, the service only deploys onto nodes where they are all met."
Your second deployment lists a constraint combination of node_1 and node_3, which is impossible, as you have no node that satisfies both.
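For example, assuming the labels from the question, this constraint pair can only be satisfied by the manager node, while mixing role_1 with role_3 would match no node at all (compose file excerpt; the service name is illustrative):

```yaml
services:
  app:
    deploy:
      placement:
        constraints:
          - node.labels.role_2 == true   # the manager has this label
          - node.labels.role_3 == true   # ...and this one: deployable
          # - node.labels.role_1 == true # adding this would match no node
```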
QUESTION
I have a Jenkins Job DSL job that worked well until about January (it is not used that often). Last week, the job failed with the error message ERROR: java.io.IOException: Failed to persist config.xml
(no stack trace, just that message). There were no changes to the job since the last successful execution in January.
ANSWER
Answered 2021-May-11 at 10:22
The problem was solved by updating Jenkins to 2.289.
It seems like there was some problem with the combination of the previous versions. I will keep you updated if any of the next updates changes anything.
QUESTION
I am using Fabric version 2.2 and working on docker-machine. When I try to create a channel using the peer channel create command through the CLI, I get this error.
Error: got unexpected status: BAD_REQUEST -- error validating channel creation transaction for new channel 'mychannel', could not successfully apply update to template configuration: error authorizing update: error validating DeltaSet: policy for [Group] /Channel/Application not satisfied: implicit policy evaluation failed 0 sub-policies were satisfied, but this policy requires 1 of the 'Admins' sub-policies to be satisfied
code:
...ANSWER
Answered 2021-Feb-25 at 04:38
You might be using incorrect certificates to sign the transaction; your certificates and the artifacts do not match. My suggestion is to delete the Docker volume and regenerate the certs and artifacts (genesis block and channel transaction).
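A rough sketch of that reset, assuming a typical Fabric network layout (the config file, profile, and channel names are placeholders):

```shell
# Tear down the network and remove stale volumes so old certs can't be reused
docker-compose down --volumes

# Regenerate crypto material and channel artifacts so they match each other
cryptogen generate --config=./crypto-config.yaml --output=./crypto-config
configtxgen -profile MyOrdererGenesis -channelID system-channel -outputBlock ./channel-artifacts/genesis.block
configtxgen -profile MyChannel -channelID mychannel -outputCreateChannelTx ./channel-artifacts/mychannel.tx
```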
QUESTION
I'm getting the following error when trying to log in using root
and the initial password set using the Install GitLab using Docker swarm mode method. Any suggestions on how to resolve this? The error is a 401 Unauthorized
, but as you can see below the root
does get created with the supplied password file.
ANSWER
Answered 2021-Feb-20 at 12:19
I had the same problem; to work around it I had to unlock the user (it was locked because the password was not working):
https://docs.gitlab.com/ee/security/unlock_user.html
Then I reset the root password:
https://docs.gitlab.com/ee/security/reset_user_password.html
And I was able to access the portal.
I didn't understand why this happened, but at least I was able to use GitLab by following these steps.
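Sketched as commands (the container name is a placeholder; unlock_access! and the console-based password reset come from the linked GitLab docs):

```shell
# Unlock the root account (it gets locked after repeated failed password attempts)
docker exec -it gitlab gitlab-rails runner "User.find_by_username('root').unlock_access!"

# Reset the root password interactively
docker exec -it gitlab gitlab-rails console
# then in the console:
#   user = User.find_by_username('root')
#   user.password = user.password_confirmation = 'new-secure-password'
#   user.save!
```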
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported