docker-exec | Run multiple commands in a docker container | Continuous Deployment library
kandi X-RAY | docker-exec Summary
Run multiple commands in a docker container.
Community Discussions
Trending Discussions on docker-exec
QUESTION
I'm using Firebase to host my personal website and wanted to integrate CircleCI for faster deployments. However, I receive an error on the deployment step.
Note: adding sudo before the deploy command also causes the build to fail.
ANSWER
Answered 2020-Oct-22 at 11:26: I think the problem is that you run all your npm commands with sudo except the firebase deploy command.
You should definitely run everything as the current user and not as the superuser. You will see in official tutorials that nothing is run with sudo except in very specific cases.
Also, instead of doing ./node_modules/.bin/firebase deploy you could use npx firebase deploy, which first looks in the local node_modules and then in the global ones.
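As a minimal sketch of what the deploy step then looks like (the FIREBASE_TOKEN variable name and the --only hosting target are assumptions, not taken from the original question):

```bash
# Run everything as the normal CI user, without sudo.
# FIREBASE_TOKEN is assumed to be configured in the CI project's environment.
npm ci
npm run build
npx firebase deploy --only hosting --token "$FIREBASE_TOKEN"
```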
QUESTION
I have a self-hosted GitLab for projects that are still private, and a dedicated physical node with an AMD GPU for testing. On this node there is already a gitlab-ci runner with the docker executor.
Is there a way to run programs that use OpenCL and access the AMD GPU within the docker containers created by the gitlab-ci runner?
All I have found so far is Nvidia- and CUDA-related information (for example, "How can I get use cuda inside a gitlab-ci docker executor"), but nothing useful for the OpenCL and AMD case.
ANSWER
Answered 2020-Aug-26 at 17:27: Found the solution myself in the meantime. It was easier than expected.
The Docker image for the gitlab-ci pipeline only needs the AMD GPU driver from the AMD website (https://www.amd.com/en/support).
Example Dockerfile to build the Docker image:
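The original Dockerfile is not reproduced here; the sketch below only illustrates the idea (the base image, package file name and install command are placeholders for whatever driver package you download from the AMD website):

```bash
# Sketch only: bake the AMD userspace/OpenCL driver into the CI image.
cat > Dockerfile <<'EOF'
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y clinfo ocl-icd-libopencl1
# Install the AMD GPU driver package downloaded from https://www.amd.com/en/support
# (the file name below is a placeholder).
COPY amdgpu-driver-package.deb /tmp/
RUN apt-get install -y /tmp/amdgpu-driver-package.deb
EOF
docker build -t opencl-ci-image .
```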
QUESTION
I have a dotnet solution which consists of a console project, a webapi project and a mysql db. I put them in separate docker images, wrote a docker-compose file to start them, and tested it on my machine. Next, I wrote a test using FluentDocker, which allows me to start docker-compose programmatically and verify that the containers are up and running.
Now I want to do this on GitLab CI. Previously I used image: mcr.microsoft.com/dotnet/core/sdk:3.1 and ran the test stage against the test project. That worked fine because there was no docker integration. I can't run the FluentDocker test on GitLab CI because the image does not contain docker. So I started researching.
One solution to incorporate the db in a CI job is here: https://docs.gitlab.com/ee/ci/docker/using_docker_images.html#what-is-a-service, but I doubt that I can use docker as a service.
Next is using docker integration for the gitlab runner: https://docs.gitlab.com/ee/ci/docker/using_docker_build.html#use-docker-in-docker-workflow-with-docker-executor or https://tomgregory.com/running-docker-in-docker-on-windows/. I can't use that because I use the free shared runners from GitLab itself and can't configure them. I tried to run the docker info command; it fails in my script.
I thought of building my own image based on the dotnet sdk with docker included, but it seemed like a bad idea, and I did not get it working in the end.
A solution that seems to work is to use dind and start docker-compose: "How to run docker-compose inside docker in docker which runs inside gitlab-runner container?" or https://gitlab.com/gitlab-org/gitlab-foss/-/issues/30426. But to use it I would need to install the dotnet sdk to build and test my app in the before_script section, and the sdk is not a small package to download each time.
I can try to build my own image based on docker:dind, include the dotnet sdk there, publish it on Docker Hub and then use it in the gitlab runner. Right now that seems like the last option.
So, what's the correct approach here?
-----------edit--------------
I got this working! See the very thorough answer from Konrad Botor with a Dockerfile and yml file. I built my own image with the sdk and docker and used it for the test stage with the dind service linked. My image is hosted on Docker Hub, so GitLab downloads it for use.
Also some notes:
1 - how to use dind as service https://gitlab.com/gitlab-org/gitlab-runner/-/issues/25344
2 - where to get modprobe.sh and docker-entrypoint.sh: https://github.com/docker-library/docker (go into the latest release). It is very important to clone the repo and copy the files from there; I tried to copy-paste the contents and it did not work.
3 - docker-compose repo https://github.com/tmaier/docker-compose/blob/master/Dockerfile
4 - dind example https://gitlab.com/gitlab-examples/docker/-/blob/master/.gitlab-ci.yml
ANSWER
Answered 2020-Aug-08 at 11:40: We have the same situation in our company, but with self-hosted docker / gitlab etc.
Currently we are running docker integration in the gitlab runner (your option 2) and had a bunch of bad side effects, e.g. containers created by the gitlab runner aren't cleaned up properly, because gitlab-runner doesn't take care of the cleanup; normally, closing the docker runner itself would destroy everything, but with dind it does not.
Option 3 is not a bad thing; it's a perfectly normal, standard thing to do: we have a bunch of docker images that include libs that are not provided within the loaded image. You get used to it, so I would prefer that. It is a clean solution which doesn't feel "hacky".
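As a rough sketch of that kind of custom image (not the answer's actual Dockerfile; the base image tag, the Debian package names and the Docker Hub repository are assumptions):

```bash
# Build an image containing both the .NET SDK and a docker client, so the CI job
# can drive docker-compose/FluentDocker against a linked dind service.
cat > Dockerfile <<'EOF'
FROM mcr.microsoft.com/dotnet/core/sdk:3.1
# Client-side tools only; the daemon is provided by the docker:dind service.
RUN apt-get update && apt-get install -y docker.io docker-compose \
 && rm -rf /var/lib/apt/lists/*
EOF
docker build -t myuser/dotnet-sdk-docker:3.1 .
docker push myuser/dotnet-sdk-docker:3.1
```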
QUESTION
This did work previously!
My deployment step in my pipeline SSHs onto a DO box and pulls the code from a docker registry. As mentioned, this worked previously, and this was the deploy step in my .gitlab-ci.yml back then, which worked fine (inspiration from here, under "Using SSH"):
ANSWER
Answered 2020-Jun-23 at 05:06: Ideally, if you can log on to the DO box, you would stop the ssh service and launch /usr/bin/sshd -de, in order to establish a debug session on the SSH daemon side, with logs written to stderr (instead of system messages).
But if you cannot, at least try to generate an RSA key without a passphrase, for testing. That means you don't need the ssh-agent.
And try ssh -Tv gitlab@${DEPLOYMENT_SERVER_IP} ls to see what log is produced there.
Try with a classic PEM format.
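A small sketch of those debugging steps (the key file name is a placeholder; DEPLOYMENT_SERVER_IP is the variable already used in the answer):

```bash
# Generate a passphrase-less RSA key in classic PEM format, just for testing.
ssh-keygen -t rsa -m PEM -N "" -f ./deploy_key
# Verbose client-side log of the connection attempt.
ssh -i ./deploy_key -Tv gitlab@"${DEPLOYMENT_SERVER_IP}" ls
# If you can log on to the DO box itself, run the daemon in debug mode instead:
#   sudo systemctl stop ssh && sudo /usr/bin/sshd -de
```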
QUESTION
I'm using gitlab 9.3.3-ce.0 and a gitlab runner with the docker executor. I want to build images inside this docker executor. How do I do that?
I'm trying to connect to the outer docker using this section inside /etc/gitlab-runner/config.toml:
ANSWER
Answered 2017-Jul-05 at 10:07: You need to use the docker-in-docker service:
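The answer's original config snippet is not shown here; as a hedged sketch, the equivalent when registering a runner from the command line (the URL and registration token are placeholders) is to enable privileged mode so the docker:dind service can run:

```bash
# Register a docker-executor runner able to run the docker:dind service.
# Privileged mode is required for dind; URL and token below are placeholders.
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.example.com/" \
  --registration-token "REGISTRATION_TOKEN" \
  --executor docker \
  --docker-image docker:latest \
  --docker-privileged
```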
QUESTION
Good afternoon. I am trying to install Docker on Red Hat 8, following the tutorial on this page:
https://www.linuxtechi.com/install-docker-ce-centos-8-rhel-8/
I am getting an error that I can't find a solution for, and it doesn't let me move forward.
ANSWER
Answered 2019-Dec-16 at 20:34: The error is caused by a conflict between docker-cli and a package named Podman.
As the OP comments, uninstalling this particular package resolves the issue, via sudo yum remove podman.
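A sketch of the resulting steps (the repo setup line follows the linked tutorial's approach; removing buildah alongside podman is an assumption, not part of the original answer):

```bash
# Remove the conflicting container tools shipped with RHEL 8, then install docker-ce.
sudo yum remove -y podman buildah
sudo dnf config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
sudo yum install -y docker-ce docker-ce-cli containerd.io
sudo systemctl enable --now docker
```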
QUESTION
I am trying to configure a Bitbucket CI pipeline to run tests. Stripping out the details, I have a makefile which looks as follows to run some form of integration tests.
ANSWER
Answered 2019-Dec-24 at 14:41: If I understood your question correctly, you want to wait for the server to start before running tests.
Instead of manually sleeping, you should use wait-for-it.sh (or an alternative). See the relevant Docker docs for more information.
For example:
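The answer's original example is not included above; a minimal sketch, assuming the server is reachable as db:3306 and the tests are started via make:

```bash
# Block until the server accepts TCP connections (or 60 s elapse), then run the tests.
./wait-for-it.sh db:3306 --timeout=60 --strict -- make integration-test
```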
QUESTION
I have a Docker image and a CLI tool. I want to create a Snap package that pulls the Docker image and runs it on the local Docker. I already have a snapcraft.yaml that installs the CLI tool. I'm trying to understand what I should add/edit so that I can call Docker actions.
Additionally, I'm trying to understand whether, in such a case, Docker must be installed via Snap, or whether everything is fine as long as Docker is somehow installed on the machine. What happens when no Docker is installed?
From what I've found in the Snap docs, I need to add the docker interface to my snapcraft.yaml so it will provide access to the Docker daemon socket, but I can't find any resources on how to call Docker commands...
This is my snapcraft.yaml:
ANSWER
Answered 2019-Aug-01 at 06:55: I found the following dockerized-app-snap repository on GitHub, which really helped me create a Snap that runs a dockerized app through the content interface shared by the docker snap.
Attached is my snapcraft.yaml for anyone who is trying to do something similar:
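The snapcraft.yaml itself is not reproduced above; as a rough sketch of the runtime side, the snapped CLI ultimately just shells out to the docker client it is granted access to (the image name and the fallback message are illustrative assumptions):

```bash
#!/bin/sh
# Wrapper the snapped CLI tool might run once the docker interface is connected.
if ! command -v docker >/dev/null 2>&1; then
  echo "docker client not found - install the docker snap and connect the interface" >&2
  exit 1
fi
docker pull example/dockerized-app:latest
exec docker run --rm example/dockerized-app:latest "$@"
```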
QUESTION
What does docker exec do with the -ef flag? I've tried to find it in the docs, but no luck.
This topic isn't helpful: docker exec or docker container exec
ANSWER
Answered 2019-Jul-25 at 23:07: Given that docker exec does not have the -f option, f here is an argument for -e (--env), which, in turn, usually accepts a list of KEY=VALUE pairs - the environment variables to be set/modified for the exec'd process.
The interesting point is that the documentation neither describes the exact format of the --env option arguments, nor explains what would happen if the argument is not in the usual KEY=VALUE form.
Considering that -ef is a valid argument, it would be logical to assume that it treats f as the name of an environment variable. Further, as we do not supply a value (no = after the name), it would be rational to assume that the variable would be either unchanged or unset. We may check this empirically. Create a container with an additional environment variable set:
QUESTION
According to the official gitlab documentation, one way to enable docker build within ci pipelines is to make use of the dind service (in terms of gitlab-ci services).
However, as is always the case with ci jobs running on docker executors, the docker:latest image is also needed.
Could someone explain:
- what is the difference between the docker:dind and the docker:latest images?
- (most importantly): why are both the service and the docker image needed (e.g. as indicated in this example, linked to from the gitlab documentation) to perform e.g. a docker build within a ci job? Doesn't the docker:latest image (within which the job will be executed!) incorporate the docker daemon (and I think docker-compose as well), which are the tools necessary for the commands we need (e.g. docker build, docker push etc.)?
Unless I am wrong, the question more or less becomes: why can't a docker client and a docker daemon reside in the same docker(-enabled) container?
ANSWER
Answered 2017-Nov-17 at 14:27: A container will contain only the things defined in its docker image. You know you can install anything, starting from a base image. But you can also install Docker (daemon and client) in a container, that is to say Docker IN Docker (dind). So that container will be able to run other containers. That's why gitlab needs this.
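As a hedged illustration of how the two pieces interact (the hostname docker and port 2375 are the conventional defaults for the dind service, assumed here rather than taken from the answer), the job container only carries the docker client and points it at the daemon in the service container:

```bash
# Inside the docker:latest job container (client only).
export DOCKER_HOST=tcp://docker:2375
docker info                                   # answered by the dind daemon, not a local one
docker build -t registry.example.com/group/app:latest .   # placeholder registry/image
```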
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported