le-docker | Logentries/Docker integration examples | Continuous Deployment library
kandi X-RAY | le-docker Summary
Community Discussions
Trending Discussions on le-docker
QUESTION
Dockerfile:
...ANSWER
Answered 2021-Apr-28 at 20:57
If the prompt is anything to go by, we are logged in as root in the minimal reproduction. Thus, we have root privileges and can read and write all files.
QUESTION
I'm trying to set up a docker-compose definition where I have a MongoDB container and a Node.js container that connects to it.
...ANSWER
Answered 2021-Apr-07 at 17:48
After some more digging, I managed to figure it out. The issue is that the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD variables simply set the root user's credentials, and MONGO_INITDB_DATABASE simply sets the initial database for the scripts in /docker-entrypoint-initdb.d.
By default, the root user is added to the admin database, so by removing the /sandboxdb part of the connection string, I was able to have my Node app authenticate against the admin DB as the root user.
While this doesn't quite accomplish what I wanted initially (to create a separate, non-root user for my database, and use that to authenticate), I think this puts me on the right path to using an init script to set up the user accounts I want to have.
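To illustrate the fix described above, here is a minimal compose sketch; the service names, image tag, credentials, and the MONGO_URL variable name are placeholders, not values from the question:

    services:
      mongo:
        image: mongo:4.4
        environment:
          MONGO_INITDB_ROOT_USERNAME: root
          MONGO_INITDB_ROOT_PASSWORD: example
      app:
        build: .
        environment:
          # no /sandboxdb suffix, so the driver authenticates against admin
          MONGO_URL: mongodb://root:example@mongo:27017/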
QUESTION
Disclaimer: I'm aware the /var/run/docker.sock issue is very common and there are lots of posts out there on it (although most, if not all, can be summed up as adding the running user to the docker permissions group). I tried all of those instructions and it still does not help me, on Red Hat.
I have two containers, one running Ubuntu and one running Red Hat 7.9.
My problem is, specifically, not being able to run - in the Red Hat container only - a call to Docker.DotNet's ListImages (it fails with permission denied on /var/run/docker.sock). In the beginning, I was not able to issue any docker command without prefixing it with sudo. I then added the running user to the docker permissions group, and can now issue docker commands without sudo.
But Docker.DotNet's ListImages (which is simply a wrapper around the docker API's images/json endpoint) still fails with the permission denied error on docker.sock. I tried everything recommended here, to no avail.
I thought perhaps I should add User=root (although this is not present in my Ubuntu service file, and therefore does not make much sense). I then realized that the Ubuntu and Red Hat docker service files differ considerably.
Ubuntu:
...ANSWER
Answered 2021-Apr-04 at 08:16
In the end, my problem was that my Red Hat installation, as opposed to my Ubuntu one, had SELinux enabled. Disabling it finally had curl --unix-socket /run/docker.sock http://docker/images/json working from within my compose containers.
To disable SELinux: edit the file /etc/selinux/config (you may need to impersonate root using sudo su root) and replace SELINUX=enforcing with SELINUX=disabled.
Restart the Linux server and that's it.
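The same edit can be scripted; a minimal sketch, assuming the stock layout of the config file:

    # switch SELinux from enforcing to disabled, then reboot for it to take effect
    sudo sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config
    sudo reboot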
Remark: This may obviously not be an acceptable solution in a production environment. If that is your case, you will need to properly configure the SELinux permission settings. I was simply assigned a task to identify why this problem was happening on one of our dev machines, so disabling it suffices for my needs for now.
QUESTION
I have created a Node application for which I want to automate the deployment. Up till now I only automated the building process, i.e. install dependencies, create the docker image and push to the Azure Container Registry. This works perfectly fine and corresponds to the code below which is not commented away. In the current process I still need to
- manually change the image tag in my helmfile configuration and
- manually perform a helmfile -environment= sync.
My struggle is in the second part, for I believe the first part is easily implemented when I have a setup for the second.
In the source directory of the repository I have the helmfile.yaml, which could be called immediately after the build. This is what I tried to achieve with the setup below, which is commented away. My thoughts were to have a container on which helmfile is already installed, e.g. cablespaghetti/helmfile-docker, then connect to the K8s cluster using the Azure kubectl task, followed by executing the helmfile sync command. This approach failed as I got a Docker exec fail with exit code 1, probably because the specified container uses the ENTRYPOINT approach, which is not allowed in the Azure Pipeline.
The approach however feels somewhat cumbersome, as if I am missing a much simpler way to 'simply' perform a helmfile sync command. How can I get this to work?
ANSWER
Answered 2020-Dec-22 at 07:58
Since helmfile is not preinstalled on the pipeline agent, you can manually install it with a script task. Check the example below:
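The original snippet is not preserved here; as a sketch of the idea, an Azure Pipelines script step along these lines would do it (the release URL and pinned version are assumptions, and the environment name is a placeholder):

    steps:
      - script: |
          # download a pinned helmfile release and make it executable
          curl -fsSL -o /usr/local/bin/helmfile \
            https://github.com/roboll/helmfile/releases/download/v0.135.0/helmfile_linux_amd64
          chmod +x /usr/local/bin/helmfile
          # run the sync against the desired environment
          helmfile --environment <env> sync
        displayName: Install helmfile and sync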
QUESTION
I am quite new to Docker technology and still learning and reading through the docs. I have an Oracle base image which I would like to use as a parent image to build my own image and then push it to a custom Docker registry/repository.
The base image already provides a full setup of an Oracle DB. But as next steps, I would like to
- download a dump file (e.g. from a dump URL) directly into the Docker image (without downloading it to the local workspace),
- run some SQL scripts,
- lastly, import the dump using Data Pump (impdp).
I tried to follow https://github.com/mpern/oracle-docker, but there you always need to store the dump file locally and mount it as a volume.
Is it possible to use a curl command to download the dump and store it directly in the Oracle Docker container workspace, and then import it from there?
...ANSWER
Answered 2020-Nov-20 at 11:02
You can run an interactive bash session inside your container and check whether curl is installed; if it is not, install it. Using the interactive bash session, you can then download your dump file.
The ports you require will also need to be published with the -p parameter if the container connects outside of Docker and the host machine.
An example is below:
docker run -p 80:80 -it (Your image) /bin/bash
More information can be found in the docs for the docker run command and the Dockerfile:
https://docs.docker.com/engine/reference/commandline/run/ https://docs.docker.com/engine/reference/builder/
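To make the download step concrete, a hypothetical one-liner against an already-running container; the container name, URL, and target path are placeholders, and a yum-based Oracle base image is assumed:

    # fetch the dump directly inside the running Oracle container
    docker exec oracle-db bash -c "yum install -y curl && \
        curl -fSL -o /opt/oracle/dumps/export.dmp https://example.com/export.dmp"
    # the dump can then be imported from that path with impdp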
QUESTION
I'm trying to optimize build time in my Azure DevOps pipeline, but the npm install stage in my Dockerfile just will not cache. Why?
This is my Dockerfile. I've separated copying the package*.json files and npm install into their own layer before copying the rest of my sources, as this is best practice and should make the npm install layer cacheable between builds.
...ANSWER
Answered 2020-Nov-02 at 10:02
Solved it!
The problem here is the ARG keyword in the Dockerfile. Its value will always change, thus creating a layer which cannot be cached, and therefore changing the hash for the other layers below it.
From the Docker docs: https://docs.docker.com/engine/reference/builder/#understand-how-arg-and-from-interact
ARG is the only instruction that may precede FROM in the Dockerfile
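The fix that follows from this is to move any per-build ARG below the layers you want cached; a minimal sketch, where the image tag and ARG name are placeholders:

    FROM node:14
    WORKDIR /app
    # these layers stay cacheable because nothing above them changes per build
    COPY package*.json ./
    RUN npm ci
    COPY . .
    # the per-build ARG comes last, so only layers after it are invalidated
    ARG BUILD_NUMBER
    ENV BUILD_NUMBER=$BUILD_NUMBER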
QUESTION
I would like to access the Docker API (running on Windows Server). Sadly a TCP connection is not possible in our network (at least for this case).
Here I found a solution to change the port, but I am not sure whether changing the protocol is possible.
...ANSWER
Answered 2020-Oct-07 at 08:55
From the docs:
The Docker daemon can listen for Docker Engine API requests via three different types of Socket: unix, tcp, and fd.
...udp is not an option.
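For reference, a daemon's listening endpoints are selected with the -H flag; a sketch for a Linux daemon, where the addresses are placeholders:

    # listen on the default unix socket and, additionally, a local TCP port
    dockerd -H unix:///var/run/docker.sock -H tcp://127.0.0.1:2375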
QUESTION
Is there any other ansible-docker module to capture all the containers on the VM, even the ones in exited status?
...ANSWER
Answered 2020-Aug-07 at 05:02
You need to make use of the containers_filter option with the filter status=exited.
Check this ansible playbook:
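The playbook itself is not preserved here; a sketch of the idea, assuming the community.docker.docker_host_info module and its containers_filters parameter (the module and parameter names are assumptions based on the answer's mention of a containers filter):

    - hosts: all
      tasks:
        # gather container info, filtered down to exited containers only
        - community.docker.docker_host_info:
            containers: yes
            containers_filters:
              status: exited
          register: docker_info
        # print the matching containers
        - debug:
            var: docker_info.containers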
QUESTION
I have added a .gitlab-ci.yml to my private project. One of the steps is to get a role from a private GitLab repo. However, this fails with
ANSWER
Answered 2020-Jul-21 at 10:05
Running the same command ansible-galaxy install -r requirements.yml on my machine runs fine.
That means you have the right public/private key in ~/.ssh/id_rsa on your machine, and you are executing it locally with your account.
If you copy it in your GitLab step, make sure to check the permissions, and possibly the passphrase and known_hosts, as in the documentation:
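In practice that usually means loading the key into an agent inside the job; a sketch of a .gitlab-ci.yml fragment, where the SSH_PRIVATE_KEY variable name and the gitlab.com host are assumptions:

    before_script:
      # start an agent and load the deploy key from a masked CI variable
      - eval $(ssh-agent -s)
      - echo "$SSH_PRIVATE_KEY" | tr -d '\r' | ssh-add -
      # trust the GitLab host so the clone does not prompt
      - mkdir -p ~/.ssh && chmod 700 ~/.ssh
      - ssh-keyscan gitlab.com >> ~/.ssh/known_hosts
    script:
      - ansible-galaxy install -r requirements.yml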
QUESTION
We use a Docker image to run CI builds. The Docker image has a system-installed Ruby. The Docker container has the content of gem env and bundle env as indicated in the gist-linked files:
ANSWER
Answered 2020-Jun-26 at 16:02
The Dockerfile with which we are setting up Ruby & Bundler is similar to this one.
The machines where we deploy the bundled gems are RHEL machines and install Ruby from the Software Collections repositories.
It seems the deployment machines' Ruby is built with the --enable-shared=yes flag.
We changed our Dockerfile to configure the Ruby build the same way, ./configure --enable-shared=yes. That solved our issue.
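For context, the relevant Dockerfile step would look something like this sketch; the source path and install prefix are placeholders:

    # build Ruby from source with shared-library support, matching the RHEL hosts
    RUN cd /usr/src/ruby && \
        ./configure --prefix=/usr/local --enable-shared=yes && \
        make && make install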
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported