dockerfiles | OpenIO's Dockerfiles repository | Continuous Deployment library
kandi X-RAY | dockerfiles Summary
This repository stores OpenIO's Docker Images.
Community Discussions
Trending Discussions on dockerfiles
QUESTION
In my Laravel project I upgraded to the currently latest Laravel 9.3.0 and PHP 8.0.16.
The original version was Laravel 8.64 with PHP 7.4.
I run the project in Docker containers with the php:8.0.16-fpm-alpine image; previously it was php:7.4-fpm-alpine.
This is my Docker container config in the docker-compose.yml file:
ANSWER
Answered 2022-Mar-03 at 08:28: You can work around the memory limitation in the Dockerfile.
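The answer's own snippet is not reproduced on this page. As a minimal sketch of one common approach (the limit value and ini file name below are assumptions, not the original answer), PHP's memory_limit can be raised via an extra ini fragment baked into the image:

    FROM php:8.0.16-fpm-alpine
    # Assumed approach: add an ini fragment that raises PHP's memory limit;
    # the official php images load everything under /usr/local/etc/php/conf.d/
    RUN echo "memory_limit=512M" > /usr/local/etc/php/conf.d/zz-memory-limit.ini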
QUESTION
I have been working with Dockerfiles for a while now, but today, while working on a small project in VS Code, I typed # then CTRL+SPACE on the first line and got this: syntax=docker/dockerfile:experimental.
I don't understand what this does and can't find documentation about it. Can somebody explain what's going on with that weird comment?
...PS: I found some people using this, so it's not just some random comment generated by VS Code, I guess.
ANSWER
Answered 2022-Mar-11 at 08:44: It's a way to enable new syntax in Dockerfiles when building with BuildKit. It's mentioned in the documentation:
Overriding default frontends: The new syntax features in Dockerfile are available if you override the default frontend. To override the default frontend, set the first line of the Dockerfile as a comment with a specific frontend image:
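For illustration, a minimal sketch of such a Dockerfile, assuming BuildKit is enabled (DOCKER_BUILDKIT=1); the base image and cache mount are just examples, and current docs recommend a stable frontend tag such as docker/dockerfile:1 rather than the experimental one:

    # syntax=docker/dockerfile:1
    FROM alpine:3.16
    # BuildKit-only syntax: cache the apk package index between builds
    RUN --mount=type=cache,target=/var/cache/apk apk add --update git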
QUESTION
I have a few Dockerfiles that are dependent on a "pat" (Personal Access Token) file to be able to access a private NuGet feed. I have taken some inspiration from somakdas to get this working.
To run my single Dockerfile I first create a "pat" file containing my token and build with docker build -f Services/User.API/Dockerfile -t userapi:dev --secret id=pat,src=pat .
This works as intended, but my issue is getting this to work using a docker-compose.yml file.
First I took a look at using docker-compose secrets, but it came to my attention that docker-compose secrets are accessed at run time, not build time. https://github.com/docker/compose/issues/6358
So now I'm trying to create a volume containing my pat file, but I get cat: /pat: No such file or directory when the RUN --mount=type=secret... command runs. This may not be secure, but it will only be running locally.
My Dockerfile
...ANSWER
Answered 2022-Feb-10 at 16:39: I solved this by attacking the problem from a different angle. As the main goal was to get this working locally, I created Dockerfile.Local and docker-compose.local.yml. Together with these I created an .env file containing the "pat".
The docker-compose.local.yml passes the "pat" as a build argument to Dockerfile.Local, where it's used. I also discarded --mount=type=secret and set the value of VSS_NUGET_EXTERNAL_FEED_ENDPOINTS directly.
.env file:
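The original files are not reproduced on this page. The following is a hedged sketch of how the pieces could fit together; the variable name, service name, and feed URL placeholder are assumptions:

    # .env (assumed variable name)
    PAT=<personal-access-token>

    # docker-compose.local.yml (excerpt, assumed service layout)
    services:
      user-api:
        build:
          context: .
          dockerfile: Services/User.API/Dockerfile.Local
          args:
            PAT: ${PAT}

    # Dockerfile.Local (excerpt) - the build arg is baked into the credential provider variable
    ARG PAT
    ENV VSS_NUGET_EXTERNAL_FEED_ENDPOINTS="{\"endpointCredentials\":[{\"endpoint\":\"<private-feed-url>\",\"password\":\"${PAT}\"}]}"

Note that baking a token into an image this way leaves it readable in the image history, which, as the answer says, is only acceptable because it runs locally.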
QUESTION
So I've got an application that consists of, let's say, 4 APIs and a frontend, all kept in a monorepo. Everything is set up with Docker, and every service has its own Dockerfile. The file structure looks something like this:
...ANSWER
Answered 2021-Dec-21 at 17:52: "Is there some way to share this package and/or this whole RUN operation across my services so that I can modify it in one place instead?"
Yes, you can achieve this by structuring your project as follows:
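The answer's project layout is not shown on this page. One common way to share a RUN step across services (a sketch with assumed image names, not necessarily the layout the answer proposed) is a shared base image that every service Dockerfile builds FROM:

    # base.Dockerfile (hypothetical name) - holds the shared RUN step
    FROM node:16-alpine
    RUN apk add --no-cache git   # the shared package is an assumption

    # build it once:  docker build -f base.Dockerfile -t myorg/base:latest .

    # each service Dockerfile then starts from the shared image
    FROM myorg/base:latest
    WORKDIR /app
    COPY . .

With this structure, the shared dependency is changed in one place (base.Dockerfile) and the services pick it up on their next build.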
QUESTION
Using a Dockerfile, I have created a Docker image on which Ubuntu 18.04 and ROS Melodic are properly installed. Please note that I have not used the official ROS Docker image for creating my Docker image.
The Docker container derived from this Docker image is working fine. However, every time I want to work with the container, I need to execute the following commands:
- First terminal window:
docker run -d -it --name container_name docker_image; docker exec -it container_name bash
Then, after I am within the Docker container:
roscore
- Second terminal window:
docker exec -it container_name bash
Then, after I am within the Docker container:
rosrun ROS_PackageName PythonScript.py
Please note that through the above-mentioned Docker commands, both terminals operate in the same Docker container.
I find my current way of starting PythonScript.py inefficient. Therefore, I would like to ask for the best practice to start "roscore" followed by "rosrun ..." on Docker container startup.
In some Dockerfiles I see the instructions ENTRYPOINT and CMD at the end of the file.
However, I do not know whether these instructions can help me make my Docker container execute "roscore" followed by "rosrun" on container startup.
ANSWER
Answered 2021-Dec-12 at 22:32: This is actually, in part, what roslaunch is for. It makes it easier to launch multiple nodes and provides nicer parameter input, but it will also start a roscore if one is not already running. In your example it would look something like this:
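The answer's own example is not shown here. A minimal sketch could look like the following; the launch file name is hypothetical, while the package and script names are taken from the question:

    <!-- start.launch (hypothetical file, e.g. in ROS_PackageName/launch/) -->
    <launch>
      <!-- roslaunch starts a roscore automatically if none is running -->
      <node pkg="ROS_PackageName" type="PythonScript.py" name="python_script" output="screen"/>
    </launch>

    # At the end of the Dockerfile, launch everything on container startup
    CMD ["roslaunch", "ROS_PackageName", "start.launch"]

With this, a single docker run starts both roscore and the node, instead of two docker exec sessions.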
QUESTION
I've been creating a micro-frontend project and the glue (nginx) isn't working as expected.
My projects are structured as such:
...ANSWER
Answered 2021-Dec-08 at 21:09: The primary issue is that localhost:<port> is not accessible between containers. The containers should reference the service names defined in docker-compose.
nginx.conf becomes:
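The answer's actual config is not reproduced here. As a hedged sketch of the idea (the service names app1-api and frontend and the ports are hypothetical), the proxy targets use docker-compose service names instead of localhost:

    # nginx.conf (sketch, conf.d-style server block)
    server {
        listen 80;

        location /api/ {
            proxy_pass http://app1-api:3000/;   # hypothetical compose service name and port
        }

        location / {
            proxy_pass http://frontend:80/;     # hypothetical compose service name and port
        }
    }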
QUESTION
Hi. I'm trying to come up with a solution to the following problem. I have one host machine running docker which should serve multiple independent web apps. Each app has a frontend (in angular) and backend (in NodeJS). I have nginx running on the same host machine (it has access to the SSL certificates, etc.).
See the diagram below describing what I'm trying to achieve. When a user requests app1.example.com, the frontend files should be served. When app1.example.com/api/* is called, nginx passes the request to the NodeJS backend.
Most of the articles I found online use containerized nginx and a single compose file. I want to have separate compose files for easy and independent updates - I just bump the version numbers of the web and API images in the .yml file and restart both containers with docker-compose. It won't affect other running containers or nginx itself.
Core of my problem: I'm having issues with how to serve the static sites. One obvious solution would be to include another nginx in each of the web containers (Dockerfile.web in the diagram) and serve the static files from that container. I find it a bit wasteful to have more instances of nginx running when there is already one on the host.
Another way I thought about is to mount or copy the static files from the container to the host (for example to /var/www) when the container is started.
Ideal solution would be that if the container is running the static files are accessible to the host nginx. When the container is stopped, static files become inaccessible and nginx can return 404 or a "maintenance page".
Do you think I'm approaching this the right way? I don't want to end up devising some non-standard niche solution but I would still like to have host-running nginx and independent updates with docker-compose for reasons described above.
Dockerfiles
Dockerfile.api
...ANSWER
Answered 2021-Nov-24 at 09:21: Thank you, David, for your suggestion. I settled on a solution using thttpd per application instead of nginx (inspired by this article).
The resulting image serving my static site is around 5 MB, which is great. Everything else is still according to my original diagram: APIs are served by NodeJS, static sites by thttpd, and on top is one nginx instance doing all the request routing.
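The answer's Dockerfile is not shown on this page. A hedged sketch of a per-app static image built this way (the base images, build output path, and port are assumptions) might look like:

    # Dockerfile.web (sketch) - build the Angular app, then serve it with thttpd
    FROM node:16-alpine AS build
    WORKDIR /app
    COPY . .
    RUN npm ci && npm run build

    FROM alpine:3.16
    RUN apk add --no-cache thttpd
    COPY --from=build /app/dist/app1 /var/www   # dist/app1 is an assumed output path
    EXPOSE 80
    # -D keeps thttpd in the foreground so the container stays up
    CMD ["thttpd", "-D", "-h", "0.0.0.0", "-p", "80", "-d", "/var/www"]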
QUESTION
The ingress-nginx pod I have helm-installed into my EKS cluster is perpetually failing, its logs indicating that the application cannot bind to 0.0.0.0:8443 (INADDR_ANY:8443). I have confirmed that 0.0.0.0:8443 is indeed already bound in the container, but because I don't yet have root access to the container I've been unable to identify the culprit process/user.
I have created this issue on the kubernetes ingress-nginx project that I'm using, but also wanted to reach out to a wider SO community that might lend insights, solutions and troubleshooting suggestions for how to get past this hurdle.
Being a newcomer to both AWS/EKS and Kubernetes, it is likely that there is some environment configuration error causing this issue. For example, is it possible that this could be caused by a misconfigured AWS-ism such as the VPC (its Subnets or Security Groups)? Thank you in advance for your help!
The linked GitHub issue provides copious details about the Terraform-provisioned EKS environment as well as the Helm-installed deployment of ingress-nginx. Here are some key details:
- The EKS cluster is configured to only use Fargate workers, and has 3 public and 3 private subnets, all 6 of which are made available to the cluster and each of its Fargate profiles.
- It should also be noted that the cluster is new, and the ingress-nginx pod is the first attempt to deploy anything to the cluster, aside from kube-system items like coredns, which has been configured to run in Fargate (this required manually removing the default ec2 annotation, as described here).
- There are 6 Fargate profiles, but only 2 are currently in use: coredns and ingress. These are dedicated to kube-system/kube-dns and ingress-nginx, respectively. Other than the selectors' namespaces and labels, there is nothing "custom" about the profile specification. It has been confirmed that the selectors are working, both for coredns and ingress, i.e. the ingress pods are scheduled to run, but failing.
- The reason why ingress-nginx is using port 8443 is that I first ran into this Privilege Escalation issue, whose workaround requires one to disable allowPrivilegeEscalation and change ports from privileged to unprivileged ones. I'm invoking helm install with the following values:
ANSWER
Answered 2021-Nov-16 at 14:26: Posted a community wiki answer based on the same topic and this similar issue (both on GitHub). Feel free to expand it.
The problem is that 8443 is already bound for the webhook. That's why I used 8081 in my suggestion, not 8443. The examples using 8443 here had to also move the webhook, which introduces more complexity to the changes, and can lead to weird issues if you get it wrong.
An example using port 8081:
As well as those settings, you'll also need to use the appropriate annotations to run using NLB rather than ELB, so all-up it ends up looking something like
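The actual values block is not reproduced on this page. As a hedged sketch, assuming the standard ingress-nginx chart layout, the combination described above could look roughly like this; the ports, annotation, and key structure should be checked against the chart version in use:

    controller:
      extraArgs:
        http-port: 8080
        https-port: 8081
      containerPort:
        http: 8080
        https: 8081
      image:
        allowPrivilegeEscalation: false
      service:
        targetPorts:
          http: 8080
          https: 8081
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-type: nlb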
QUESTION
I've been trying to dockerize an Angular application with a Python backend, following this. I did everything, but for the docker-compose file it says that my app.py is not present in the folder.
My Python app is in 'C:\Users\Fra\Desktop\Fra\uni\Tesi\Progetto\backend' and my Angular app is in 'C:\Users\Fra\Desktop\Fra\uni\Tesi\Progetto\frontend'.
I'm leaving you the Dockerfiles; that should help.
('Dockerfile' placed in the backend folder)
...ANSWER
Answered 2021-Nov-09 at 15:31: Your WORKDIR is /Users/Fra/Desktop/Fra/uni/Tesi/Progetto, but your app.py file - which is required for container startup in CMD - is located in the /Users/Fra/Desktop/Fra/uni/Tesi/Progetto/backend/ folder.
Rewrite your backend Dockerfile to:
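The answer's corrected Dockerfile is not shown on this page. A hedged sketch of the idea (the base image, requirements file, and build context are assumptions) is to make the working directory and CMD line up with where app.py is actually copied:

    # Dockerfile in the backend folder (sketch) - build with the backend folder as the context
    FROM python:3.9-slim
    WORKDIR /usr/src/app
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    # app.py is now in the working directory, so CMD can find it
    CMD ["python", "app.py"]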
QUESTION
I'm trying to get to grips with both vite and docker so I apologise if I've made stupid mistakes.
I'm running into an issue with esbuild inside docker. I'm trying to get a dev setup going, so I want to mount my code in my containers so that changes should be reflected in real time.
Previously I used Dockerfiles which copied /frontend and /backend into their respective containers, and that worked: I had my web and api containers running and happily talking to each other. However, it meant changes to the code weren't picked up, so it wasn't suitable for development.
So I've switched to volume mounts in the hope that I can get my dockerized apps to hot reload, but I hit this error instead.
Here's my docker-compose.yml
ANSWER
Answered 2021-Oct-17 at 20:19: Finally managed to get this working after reading and better understanding this discussion: https://github.com/vitejs/vite/issues/2671#issuecomment-829535806.
I'm on MacOS but the container is running Linux and the architecture is mismatched when it attempts to use the version of esbuild from my mounted volume. So, I need to rebuild esbuild inside the container. I tried to use the entrypoint script as that thread suggests but that didn't work for me.
What did work was to change the command in my docker-compose.yml to command: sh -c "npm rebuild esbuild && yarn dev".
It's now hot reloading like a dream.
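The answer's compose file is not reproduced on this page. A hedged sketch of the relevant service (the service name, paths, ports, and the anonymous node_modules volume are assumptions; only the command line is quoted from the answer) could look like:

    # docker-compose.yml (excerpt, sketch)
    services:
      web:
        build: ./frontend
        # rebuild esbuild inside the Linux container so its binary matches the container's architecture
        command: sh -c "npm rebuild esbuild && yarn dev"
        volumes:
          - ./frontend:/app        # mount the source for hot reload
          - /app/node_modules      # keep the container's node_modules instead of the host's
        ports:
          - "3000:3000"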
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported