dumb-init | A minimal init system for Linux containers | Continuous Deployment library
kandi X-RAY | dumb-init Summary
dumb-init is a simple process supervisor and init system designed to run as PID 1 inside minimal container environments (such as Docker). It is deployed as a small, statically-linked binary written in C. Lightweight containers have popularized the idea of running a single process or service without normal init systems like systemd or sysvinit. However, omitting an init system often leads to incorrect handling of processes and signals, and can result in problems such as containers which can't be gracefully stopped, or leaking containers which should have been destroyed. dumb-init enables you to simply prefix your command with dumb-init. It acts as PID 1 and immediately spawns your command as a child process, taking care to properly handle and forward signals as they are received.
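A typical use is as a Dockerfile entrypoint; a minimal sketch (the pinned release version and the final command are placeholders, not part of this page):

ADD https://github.com/Yelp/dumb-init/releases/download/v1.2.5/dumb-init_1.2.5_x86_64 /usr/local/bin/dumb-init
RUN chmod +x /usr/local/bin/dumb-init
# dumb-init runs as PID 1 and forwards signals to the real command
ENTRYPOINT ["/usr/local/bin/dumb-init", "--"]
CMD ["my-server"]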
Top functions reviewed by kandi - BETA
- Runs the compiler.
- Initializes options.
- Returns a list of the output paths for the build scripts.
dumb-init Key Features
dumb-init Examples and Code Snippets
# ...
try:
    from wheel.bdist_wheel import bdist_wheel as _bdist_wheel

    class bdist_wheel(_bdist_wheel):
        def finalize_options(self):
            _bdist_wheel.finalize_options(self)
            # Mark the wheel as platform-specific: it ships a compiled binary.
            self.root_is_pure = False
except ImportError:
    bdist_wheel = None
Collecting matplotlib
Downloading https://files.pythonhosted.org/packages/26/04/8b381d5b166508cc258632b225adbafec49bbe69aa9a4fa1f1b461428313/matplotlib-3.0.3.tar.gz (36.6MB)
Complete output from command python setup.py egg_info:
pip install 'apache-airflow[postgres]==1.10.3'
&& pip install --no-cache-dir $PYTHON_PACKAGES \
# <-------------------- new line
&& pip3 install 'pandas<0.21.0' \
&& apk del build-runtime \
&& apk add --no-cache --virtual build-dependencies $PACKAG
- name: Install nginx
  hosts: web
  roles:
    - j00bar.nginx-container
version: '2'
services:
  nginx:
    image: centos:7
    ports:
      - "8000:80"
    user: 'nginx'
    command: ['/usr/bin/dumb-init', 'nginx', '-c
Community Discussions
Trending Discussions on dumb-init
QUESTION
I am using Airflow and PostgreSQL in Docker.
So I set up a PostgreSQL database on port 5433, in container 384eaa7b6efb. This is where I have the data I want to fetch with my DAG in Airflow.
...docker ps
ANSWER
Answered 2021-Oct-17 at 15:37
Change the host to host.docker.internal.
This depends on the OS you are using. In order to access the host's network from within a container, you need to use the host's IP address. Conveniently, on Windows and Mac this is resolved using the domain host.docker.internal from within the container. As specified in Docker's documentation:
I want to connect from a container to a service on the host: The host has a changing IP address (or none if you have no network access). We recommend that you connect to the special DNS name host.docker.internal which resolves to the internal IP address used by the host. This is for development purposes and will not work in a production environment outside of Docker Desktop for Mac.
There is also a workaround for this on Linux, which has been answered in What is the Linux equivalent of "host.docker.internal".
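For example, a connection string pointing at the host's PostgreSQL on port 5433 might then look like this (credentials and database name are placeholders):

postgresql://airflow_user:airflow_pass@host.docker.internal:5433/mydb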
QUESTION
Recently I tried to install GitLab on an Ubuntu machine using docker and docker-compose. This was only done for testing, so I could later install it on another machine.
However, I have a problem with removing/deleting the GitLab containers.
I tried docker-compose down and killing all processes related to the GitLab containers, but they keep restarting, even if I somehow manage to delete the images.
This is my docker-compose.yml file
...ANSWER
Answered 2022-Feb-08 at 15:07
I found the solution. Problem was that I didn't use
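Purely for illustration, a common way to remove containers that keep restarting is to disable their restart policy before removing them (the name filter is an assumption about the setup, not taken from this answer):

# Stop the containers from auto-restarting, then remove them
docker update --restart=no $(docker ps -aq --filter "name=gitlab")
docker rm -f $(docker ps -aq --filter "name=gitlab")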
QUESTION
Is it possible to have a mount and a volume in the same container? I have been trying to set up a mount and a volume using different paths, but I am having trouble getting the correct permission sets.
My Dockerfile:
...ANSWER
Answered 2021-Nov-17 at 14:38
~/logs:/app/logs/:rw
The directory ~/logs must be granted rw to 1000:1000 (appuser:appgroup), because this is an existing directory on the host.
other:/app/other/:rw
A named volume is created by Docker on the host and is owned by root (except in rootless mode). Use VOLUME to retain the permission set in the Dockerfile:
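A sketch of that pattern (image and user names follow the answer's appuser:appgroup example, but are assumptions): chown the directory before declaring it a VOLUME, so the named volume inherits the ownership when Docker first populates it:

FROM alpine:3.15
RUN addgroup -g 1000 appgroup \
    && adduser -u 1000 -G appgroup -D appuser \
    && mkdir -p /app/other \
    && chown appuser:appgroup /app/other
# Declared after chown: the named volume copies this ownership on first use
VOLUME /app/other
USER appuser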
QUESTION
The ingress-nginx pod I have helm-installed into my EKS cluster is perpetually failing, its logs indicating the application cannot bind to 0.0.0.0:8443 (INADDR_ANY:8443). I have confirmed that 0.0.0.0:8443 is indeed already bound in the container, but because I don't yet have root access to the container, I've been unable to glean the culprit process/user.
I have created this issue on the kubernetes ingress-nginx project that I'm using, but also wanted to reach out to a wider SO community that might lend insights, solutions and troubleshooting suggestions for how to get past this hurdle.
Being a newcomer to both AWS/EKS and Kubernetes, it is likely that there is some environment configuration error causing this issue. For example, is it possible that this could be caused by a misconfigured AWS-ism such as the VPC (its Subnets or Security Groups)? Thank you in advance for your help!
The linked GitHub issue provides copious details about the Terraform-provisioned EKS environment as well as the Helm-installed deployment of ingress-nginx. Here are some key details:
- The EKS cluster is configured to only use Fargate workers, and has 3 public and 3 private subnets, all 6 of which are made available to the cluster and each of its Fargate profiles.
- It should also be noted that the cluster is new, and the ingress-nginx pod is the first attempt to deploy anything to the cluster, aside from kube-system items like coredns, which has been configured to run in Fargate. (which required manually removing the default ec2 annotation as described here)
- There are 6 Fargate profiles, but only 2 that are currently in use: coredns and ingress. These are dedicated to kube-system/kube-dns and ingress-nginx, respectively. Other than the selectors' namespaces and labels, there is nothing "custom" about the profile specification. It has been confirmed that the selectors are working, both for coredns and ingress. I.e. the ingress pods are scheduled to run, but failing.
- The reason why ingress-nginx is using port 8443 is that I first ran into this Privilege Escalation issue, whose workaround requires one to disable allowPrivilegeEscalation and change ports from privileged to unprivileged ones. I'm invoking helm install with the following values:
ANSWER
Answered 2021-Nov-16 at 14:26
Posted community wiki answer based on the same topic and this similar issue (both on GitHub). Feel free to expand it.
The problem is that 8443 is already bound for the webhook. That's why I used 8081 in my suggestion, not 8443. The examples using 8443 here had to also move the webhook, which introduces more complexity to the changes, and can lead to weird issues if you get it wrong.
An example using port 8081:
As well as those settings, you'll also need to use the appropriate annotations to run using an NLB rather than an ELB, so all up it ends up looking something like:
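A sketch of such a values file, using the ingress-nginx chart's standard keys (the port numbers and annotation are illustrative, not the answerer's exact values):

controller:
  extraArgs:
    http-port: 8080
    https-port: 8081
  containerPort:
    http: 8080
    https: 8081
  image:
    allowPrivilegeEscalation: false
  service:
    targetPorts:
      http: 8080
      https: 8081
    annotations:
      # Use an NLB rather than the default classic ELB
      service.beta.kubernetes.io/aws-load-balancer-type: nlb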
QUESTION
To stop the default DAGs from loading, I edited the airflow.cfg files of the containers airflow-scheduler_1, airflow-webserver_1, and airflow-worker_1. After editing each of them, I ran db reset. Unfortunately, the default DAGs are still there.
Do you know how to do that?
docker-compose ps
ANSWER
Answered 2021-Nov-14 at 12:23
In the docker-compose.yaml file, there's a line AIRFLOW__CORE__LOAD_EXAMPLES: 'true' where you should change 'true' to 'false'. After that, the default example DAGs will not be loaded.
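In the official docker-compose.yaml this lives in the shared environment block, roughly as in the sketch below; note that examples already loaded into the metadata database typically persist until it is reset:

x-airflow-common:
  environment:
    AIRFLOW__CORE__LOAD_EXAMPLES: 'false'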
QUESTION
I am perplexed about the error in my docker-compose config. I am adding redis alongside sidekiq to the existing rails app that uses docker-compose but I am failing hard on attempts to make the containers communicate with each other. I have tried several different options and looked at pretty much every reasonable topic on SO that touches this subject but I keep failing on basically the same thing. Here is my current config (or at least the relevant parts of it) and error:
docker-compose.yml
...ANSWER
Answered 2021-Oct-13 at 20:17
You are probably not exposing the redis port to the sidekiq service; try exposing it in docker-compose. This may work.
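A minimal sketch of what that can look like (service names and image tags are assumptions): sidekiq reaches redis over the compose network by service name, so no host port mapping is needed:

services:
  redis:
    image: redis:6-alpine
  sidekiq:
    build: .
    command: bundle exec sidekiq
    environment:
      # Sidekiq's client reads REDIS_URL by default
      REDIS_URL: redis://redis:6379/0
    depends_on:
      - redis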
QUESTION
I'm trying to set up Airflow on my machine using Docker and the docker-compose file provided by Airflow here: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html#docker-compose-yaml
...ANSWER
Answered 2021-Oct-12 at 09:35
Try to follow all the steps on their website, including:
mkdir ./dags ./logs ./plugins
echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env
I don't know why, but then it works, though still unhealthy:
airflow.apache.org/docs/apache-airflow/stable/start/docker.html
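For reference, the quick-start's documented sequence also runs a one-off init service before starting the stack; a sketch:

mkdir -p ./dags ./logs ./plugins
echo -e "AIRFLOW_UID=$(id -u)\nAIRFLOW_GID=0" > .env
docker-compose up airflow-init   # initializes the metadata database and creates the default user
docker-compose up -d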
QUESTION
As you can see in this Dockerfile, I do pass the PORT number as --build-arg at build time. Now I need to run npx next start -p ${PORT}:
ANSWER
Answered 2021-Oct-04 at 08:52
The format you are using (exec) won't work. From the docs:
the exec form does not invoke a command shell. This means that normal shell processing does not happen.
Instead, you can execute a shell directly:
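A sketch of that in a Dockerfile; the default port value is an assumption:

ARG PORT=3000
# Persist the build-arg as a runtime environment variable
ENV PORT=${PORT}
# Exec form with an explicit shell, so ${PORT} is expanded at run time
CMD ["/bin/sh", "-c", "npx next start -p ${PORT}"]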
QUESTION
I am running Airflow 2.1.2 on localhost using:
docker-compose
...ANSWER
Answered 2021-Sep-08 at 12:44
Your problem is not that the scheduler is not communicating with the webserver directly, but that you are using SQLite and the SequentialExecutor. Basically, each of your containers has a separate SQLite database, and the scheduler and webserver actually communicate via the DB.
The warning you see is a consequence of that.
In Airflow 2.1.3 you will see additional warnings about using SQLite and the SequentialExecutor in the UI (though you already have warnings about it in the logs).
If you want to use Airflow for anything really serious with separate containers and fast execution (processing tasks in parallel), you should use a Postgres or MySQL database and at least the LocalExecutor. Then you will not see the warnings, and Airflow will work much faster (basically everything should start working in parallel).
If you want some inspiration on how you can do it (including how it works when everything is separated out into separate containers per service) with Docker Compose, you can take a look at our quick-start: https://airflow.apache.org/docs/apache-airflow/stable/start/docker.html (but this is also only for development; it uses the CeleryExecutor, so if you want to use it for anything more serious than a quick start or first look at Airflow, you have to make your own Docker Compose setup based on it).
For anything really serious, I recommend https://airflow.apache.org/docs/helm-chart/stable/index.html instead.
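For Airflow 2.1.x, the relevant settings live in the [core] section of airflow.cfg (or the matching AIRFLOW__CORE__* environment variables); the connection string below is a placeholder:

[core]
executor = LocalExecutor
sql_alchemy_conn = postgresql+psycopg2://airflow:airflow@postgres/airflow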
QUESTION
I have a react/django app that's dockerized. There are 2 stages to the GitLab CI process: Build_Node and Build_Image. Build_Node just builds the react app and stores it as an artifact. Build_Image runs docker build to build the actual docker image, and relies on the node step because it copies the built files into the image.
However, the build process on the image takes a long time if package dependencies have changed (apt or pip), because it has to reinstall everything.
Is there a way to split the docker build job into multiple parts, so that I can, say, install the apt and pip packages in the dockerfile while build_node is still running, then finish the docker build once that stage is done?
gitlab-ci.yml: ...ANSWER
Answered 2021-Aug-31 at 18:26
Sure! Check out the GitLab docs on stages and on building docker images with gitlab-ci.
If you have multiple pipeline steps defined within a stage they will run in parallel. For example, the following pipeline would build the node and image artifacts in parallel and then build the final image using both artifacts.
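A sketch of that layout (job names, images, and the Dockerfile split are assumptions): the two jobs in the build stage run in parallel, and the package stage combines their outputs:

stages:
  - build
  - package

build_node:
  stage: build
  image: node:16
  script:
    - npm ci
    - npm run build
  artifacts:
    paths:
      - build/

build_base_image:
  stage: build
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # Pre-install apt/pip dependencies into a base image while build_node runs
    - docker build -t $CI_REGISTRY_IMAGE/base:latest -f Dockerfile.base .
    - docker push $CI_REGISTRY_IMAGE/base:latest

build_image:
  stage: package
  image: docker:20.10
  services:
    - docker:20.10-dind
  script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    # The final Dockerfile starts FROM $CI_REGISTRY_IMAGE/base and copies
    # the build/ artifact produced by build_node
    - docker build -t $CI_REGISTRY_IMAGE:latest .
    - docker push $CI_REGISTRY_IMAGE:latest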
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install dumb-init
You can use dumb-init like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
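A minimal sketch (dumb-init also publishes pre-built wheels on PyPI, so a compiler is only needed when no wheel matches your platform):

python -m venv venv
. venv/bin/activate
pip install --upgrade pip setuptools wheel
pip install dumb-init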