Popular New Releases in Nginx
caddy
v2.5.0-rc.1
nginx-proxy
1.0.1
ingress-nginx
NGINX Ingress Controller - v1.2.0
docker-slim
Improved container analysis and compose support
tengine
Tengine-2.3.3
Popular Libraries in Nginx
by caddyserver go
39075 Apache-2.0
Fast, multi-platform web server with automatic HTTPS
by nginx-proxy python
15905 MIT
Automated nginx proxy for Docker containers using docker-gen
by nginx c
15574
An official read-only mirror of http://hg.nginx.org/nginx/ which is updated hourly. Pull requests on GitHub cannot be accepted and will be automatically closed. The proper way to submit changes to nginx is via the nginx development mailing list, see http://nginx.org/en/docs/contributing_changes.html
by digitalocean javascript
15473 MIT
⚙️ NGINX config generator on steroids 💉
by kubernetes go
12540 Apache-2.0
NGINX Ingress Controller for Kubernetes
by docker-slim go
12402 NOASSERTION
DockerSlim (docker-slim): Don't change anything in your Docker container image and minify it by up to 30x (and for compiled languages even more) making it secure too! (free and open source)
by trimstray shell
12183 MIT
How to improve NGINX performance, security, and other important things.
by alibaba c
10740 BSD-2-Clause
A distribution of Nginx with some advanced features
by openresty c
10342 NOASSERTION
High Performance Web Platform Based on Nginx and LuaJIT
Trending New libraries in Nginx
by schenkd python
3880 MIT
Nginx UI allows you to access and modify the nginx configuration files without the CLI.
by jeessy2 go
1517 MIT
Simple and easy-to-use DDNS. Automatically updates domain records to point at your public IP (supports Alibaba Cloud, Tencent Cloud DNSPod, Cloudflare, and Huawei Cloud).
by blocklistproject javascript
1266 Unlicense
Primary Block Lists
by ADD-SP c
918 BSD-3-Clause
Handy, high-performance, ModSecurity-compatible Nginx firewall module
by fhsinchy javascript
762 Unlicense
Project codes used in "The Docker Handbook"
by gsquire rust
661 MIT
top for NGINX
by cym1102 html
626 NOASSERTION
Nginx web page configuration tool. Use web pages to quickly configure and manage single Nginx instances and clusters.
by BadApple9 php
613 LGPL-2.1
Self-hosted speedtest, an extended project of https://github.com/librespeed/speedtest
by Canop rust
606 MIT
An nginx log explorer
Top Authors in Nginx
1 · 28 Libraries · 10896
2 · 23 Libraries · 29692
3 · 14 Libraries · 1380
4 · 13 Libraries · 655
5 · 12 Libraries · 973
6 · 12 Libraries · 2075
7 · 12 Libraries · 212
8 · 9 Libraries · 2601
9 · 9 Libraries · 1075
10 · 9 Libraries · 317
Trending Kits in Nginx
No Trending Kits are available at this moment for Nginx
Trending Discussions on Nginx
Unable to build a docker image following Docker Tutorial
Microk8s dashboard using nginx-ingress via http not working (Error: `no matches for kind "Ingress" in version "extensions/v1beta1"`)
Which one should I use: SSR, SPA only or SSG for my Nuxt project?
Bad gateway when building Android React Native
Why URL re-writing is not working when I do not use slash at the end?
Share media between multiple django(VMs) servers
Why do I have to edit /etc/hosts just sometimes when using nginx-ingress controller and resources in my local k8s environment?
How to create index.html using dockerfile?
Wrong PHP version used when installing composer with Alpine's apk command
Docker: COPY failed: file not found in build context (Dockerfile)
QUESTION
Unable to build a docker image following Docker Tutorial
Asked 2022-Apr-04 at 22:25
I was following this tutorial on a MacBook to build a sample Docker image, but when I tried to run the following command:
docker build -t getting-started .
I got the following error:
docker build -t getting-started .
[+] Building 3.2s (15/24)
 => [internal] load build definition from Dockerfile 0.0s
 => => transferring dockerfile: 1.05kB 0.0s
 => [internal] load .dockerignore 0.0s
 => => transferring context: 34B 0.0s
 => [internal] load metadata for docker.io/library/nginx:alpine 2.7s
 => [internal] load metadata for docker.io/library/python:alpine 2.7s
 => [internal] load metadata for docker.io/library/node:12-alpine 2.7s
 => [internal] load build context 0.0s
 => => transferring context: 7.76kB 0.0s
 => [base 1/4] FROM docker.io/library/python:alpine@sha256:94cfb962c71da780c5f3d34c6e9d1e01702b8be1edd2d450c24aead4774aeefc 0.0s
 => => resolve docker.io/library/python:alpine@sha256:94cfb962c71da780c5f3d34c6e9d1e01702b8be1edd2d450c24aead4774aeefc 0.0s
 => CACHED [stage-5 1/3] FROM docker.io/library/nginx:alpine@sha256:686aac2769fd6e7bab67663fd38750c135b72d993d0bb0a942ab02ef647fc9c3 0.0s
 => CACHED [app-base 1/8] FROM docker.io/library/node:12-alpine@sha256:1ea5900145028957ec0e7b7e590ac677797fa8962ccec4e73188092f7bc14da5 0.0s
 => CANCELED [app-base 2/8] RUN apk add --no-cache python g++ make 0.5s
 => CACHED [base 2/4] WORKDIR /app 0.0s
 => CACHED [base 3/4] COPY requirements.txt . 0.0s
 => CACHED [base 4/4] RUN pip install -r requirements.txt 0.0s
 => CACHED [build 1/2] COPY . . 0.0s
 => ERROR [build 2/2] RUN mkdocs build 0.4s
------
 > [build 2/2] RUN mkdocs build:
#23 0.378 Traceback (most recent call last):
#23 0.378 File "/usr/local/bin/mkdocs", line 5, in <module>
#23 0.378 from mkdocs.__main__ import cli
#23 0.378 File "/usr/local/lib/python3.10/site-packages/mkdocs/__main__.py", line 14, in <module>
#23 0.378 from mkdocs import config
#23 0.378 File "/usr/local/lib/python3.10/site-packages/mkdocs/config/__init__.py", line 2, in <module>
#23 0.378 from mkdocs.config.defaults import DEFAULT_SCHEMA
#23 0.378 File "/usr/local/lib/python3.10/site-packages/mkdocs/config/defaults.py", line 4, in <module>
#23 0.378 from mkdocs.config import config_options
#23 0.378 File "/usr/local/lib/python3.10/site-packages/mkdocs/config/config_options.py", line 5, in <module>
#23 0.378 from collections import Sequence, namedtuple
#23 0.378 ImportError: cannot import name 'Sequence' from 'collections' (/usr/local/lib/python3.10/collections/__init__.py)
------
executor failed running [/bin/sh -c mkdocs build]: exit code: 1
The Dockerfile I used:
# syntax=docker/dockerfile:1
FROM node:12-alpine
RUN apk add --no-cache python g++ make
WORKDIR /app
COPY . .
RUN yarn install --production
CMD ["node", "src/index.js"]
The sample app is from: https://github.com/docker/getting-started/tree/master/app
I'm pretty new to Docker and would appreciate if someone could help point out how I can get this working.
Solutions:
It turns out there were two issues here:
1. I should have run the docker build -t getting-started . command from the /app folder where my newly-created Dockerfile is located. In my test, I ran the command from the root folder, where there was a different Dockerfile, as @HansKilian pointed out. Once I ran it inside the /app folder, it worked fine.
2. The problem with the Dockerfile in the root folder was caused by a Python version mismatch, as pointed out by @atline in the answer. Once I made the suggested change, I could also build an image using that Dockerfile.
Thank you both for your help.
ANSWER
Answered 2021-Oct-06 at 13:31
See its Dockerfile: it uses FROM python:alpine AS base, which is a shared tag. In other words, at the time the tutorial was written, python:alpine may have meant python:3.9-alpine or some other version. But now it means python:3.10-alpine, see this.
The problem happens in mkdocs itself, which uses the following code:
from collections import Sequence, namedtuple
But if you run that import in a Python 3.9 environment, you will see the following warning, which tells you it will stop working in Python 3.10:
$ docker run --rm -it python:3.9-alpine /bin/sh
/ # python
Python 3.9.7 (default, Aug 31 2021, 19:01:35)
[GCC 10.3.1 20210424] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from collections import Sequence
<stdin>:1: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated since Python 3.3, and in 3.10 it will stop working
So, to make the guide work again, you need to change FROM python:alpine AS base to:
FROM python:3.9-alpine AS base
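Alternatively (my suggestion, not part of the original answer), the forward-compatible fix on the mkdocs side is to import the ABC from collections.abc, which works on Python 3.3 through 3.10+:

# Python 3.10-compatible imports: ABCs moved to collections.abc, namedtuple stays in collections
from collections.abc import Sequence
from collections import namedtuple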
QUESTION
Microk8s dashboard using nginx-ingress via http not working (Error: `no matches for kind "Ingress" in version "extensions/v1beta1"`)
Asked 2022-Apr-01 at 07:26
I have microk8s v1.22.2 running on Ubuntu 20.04.3 LTS.
Output from /etc/hosts:
127.0.0.1 localhost
127.0.1.1 main
Excerpt from microk8s status:
addons:
  enabled:
    dashboard # The Kubernetes dashboard
    ha-cluster # Configure high availability on the current node
    ingress # Ingress controller for external access
    metrics-server # K8s Metrics Server for API access to service metrics
I checked for the running dashboard (kubectl get all --all-namespaces):
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system pod/calico-node-2jltr 1/1 Running 0 23m
kube-system pod/calico-kube-controllers-f744bf684-d77hv 1/1 Running 0 23m
kube-system pod/metrics-server-85df567dd8-jd6gj 1/1 Running 0 22m
kube-system pod/kubernetes-dashboard-59699458b-pb5jb 1/1 Running 0 21m
kube-system pod/dashboard-metrics-scraper-58d4977855-94nsp 1/1 Running 0 21m
ingress pod/nginx-ingress-microk8s-controller-qf5pm 1/1 Running 0 21m

NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default service/kubernetes ClusterIP 10.152.183.1 <none> 443/TCP 23m
kube-system service/metrics-server ClusterIP 10.152.183.81 <none> 443/TCP 22m
kube-system service/kubernetes-dashboard ClusterIP 10.152.183.103 <none> 443/TCP 22m
kube-system service/dashboard-metrics-scraper ClusterIP 10.152.183.197 <none> 8000/TCP 22m

NAMESPACE NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
kube-system daemonset.apps/calico-node 1 1 1 1 1 kubernetes.io/os=linux 23m
ingress daemonset.apps/nginx-ingress-microk8s-controller 1 1 1 1 1 <none> 22m

NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
kube-system deployment.apps/calico-kube-controllers 1/1 1 1 23m
kube-system deployment.apps/metrics-server 1/1 1 1 22m
kube-system deployment.apps/kubernetes-dashboard 1/1 1 1 22m
kube-system deployment.apps/dashboard-metrics-scraper 1/1 1 1 22m

NAMESPACE NAME DESIRED CURRENT READY AGE
kube-system replicaset.apps/calico-kube-controllers-69d7f794d9 0 0 0 23m
kube-system replicaset.apps/calico-kube-controllers-f744bf684 1 1 1 23m
kube-system replicaset.apps/metrics-server-85df567dd8 1 1 1 22m
kube-system replicaset.apps/kubernetes-dashboard-59699458b 1 1 1 21m
kube-system replicaset.apps/dashboard-metrics-scraper-58d4977855 1 1 1 21m
I want to expose the microk8s dashboard within my local network to access it through http://main/dashboard/. To do so, I created the following ingress.yaml (nano ingress.yaml):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - host: main
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /
Enabling the ingress config through kubectl apply -f ingress.yaml gave the following error:
error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"
Help would be much appreciated, thanks!
Update: @harsh-manvar pointed out a mismatch in the config version. I have rewritten ingress.yaml to a very stripped down version:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - http:
      paths:
      - path: /dashboard
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443
Applying this works. Also, the ingress rule gets created.
NAMESPACE NAME CLASS HOSTS ADDRESS PORTS AGE
kube-system dashboard public * 127.0.0.1 80 11m
However, when I access the dashboard through http://<ip-of-kubernetes-master>/dashboard, I get a 400 error.
Log from the ingress controller:
192.168.0.123 - - [10/Oct/2021:21:38:47 +0000] "GET /dashboard HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36" 466 0.002 [kube-system-kubernetes-dashboard-443] [] 10.1.76.3:8443 48 0.000 400 ca0946230759edfbaaf9d94f3d5c959a
Does the dashboard also need to be exposed using the microk8s proxy? I thought the ingress controller would take care of this, or did I misunderstand?
ANSWER
Answered 2021-Oct-10 at 18:29
error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"
It's due to a mismatch in the Ingress API version. You are running v1.22.2, while the API version in your YAML is old.
A good example: https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/
You are using the older Ingress API version in your YAML, which is extensions/v1beta1. You need to change this based on the Ingress version and the K8s version you are running.
The example below is for Kubernetes 1.19 and will also work in 1.22.
Example:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
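A quick way to confirm which Ingress API versions your cluster actually serves (my addition, not part of the original answer) is to ask the API server directly:

# list served API groups/versions; on 1.22+ only networking.k8s.io/v1 remains for Ingress
kubectl api-versions | grep networking.k8s.io
# inspect the schema of the served Ingress kind
kubectl explain ingress --api-version=networking.k8s.io/v1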
QUESTION
Which one should I use: SSR, SPA only or SSG for my Nuxt project?
Asked 2022-Mar-28 at 07:12
I need to develop a website using Laravel and Nuxt.js.
To my knowledge, SSR mode is one of the advanced features of Nuxt, but it requires running a Nuxt server. In other words, we need to deploy Laravel behind a server like nginx and also run the Nuxt server with npm run start. If we use SPA mode, Nuxt generates static pages into the dist directory, and we can simply merge them into the Laravel project and everything is done; we don't need to run an extra server.
This is my opinion so far, and I am not sure whether it is true, so I can't decide which one to choose. First, I am not sure which one is really better. Second, I am not sure if SSR mode really requires running an extra server.
I want to get advice from experts and make a decision. I'd be really grateful if you give me advice about this. Thanks in advance.
ANSWER
Answered 2021-Aug-04 at 16:48
I recommend using SSG (target: static and ssr: true); this will give you SEO + speed, and you will not need any server for it. Hence, hosting it on Netlify would be totally fine and free.
More info can be found here on the various steps: What's the real difference between target: 'static' and target: 'server' in Nuxt 2.14 universal mode?
Also, it all comes down to the drawbacks between SSR and SSG. More info can be found on Google. But if you don't have a first-page protected by a password or some back-office admin-only tool, SSG is usually the way to go.
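For reference (my sketch, not from the original answer), the recommended SSG setup in Nuxt 2 comes down to two settings in nuxt.config.js:

// nuxt.config.js - a minimal sketch of the suggested SSG configuration
export default {
  target: 'static', // pre-render every page at build time via `nuxt generate`
  ssr: true         // keep full server-side rendering during generation for SEO
}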
QUESTION
Bad gateway when building Android React Native
Asked 2022-Mar-25 at 01:15
When I run react-native run-android, I get the following error:
* What went wrong:
Could not determine the dependencies of task ':react-native-intercom:generateDebugRFile'.
> Could not resolve all task dependencies for configuration ':react-native-intercom:debugRuntimeClasspath'.
   > Could not resolve com.facebook.react:react-native:+.
     Required by:
         project :react-native-intercom
      > Failed to list versions for com.facebook.react:react-native.
         > Unable to load Maven meta-data from https://dl.bintray.com/intercom/intercom-maven/com/facebook/react/react-native/maven-metadata.xml.
            > Could not get resource 'https://dl.bintray.com/intercom/intercom-maven/com/facebook/react/react-native/maven-metadata.xml'.
               > Could not GET 'https://dl.bintray.com/intercom/intercom-maven/com/facebook/react/react-native/maven-metadata.xml'. Received status code 502 from server: Bad Gateway
I have seen similar issues others were having and they said to check the status: https://status.bintray.com/ - but it's saying everything is operational. I also grepped my whole codebase for bintray, but there is no reference to it.
Edit: I also tried that URL in the browser and got the same 502:
dara@dara-beast:~/DAD/rn-app$ curl http://dl.bintray.com/intercom/intercom-maven/com/facebook/react/react-native/maven-metadata.xml
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>

dara@dara-beast:~/DAD/rn-app$ curl https://dl.bintray.com/
<html>
<head><title>502 Bad Gateway</title></head>
<body bgcolor="white">
<center><h1>502 Bad Gateway</h1></center>
<hr><center>nginx</center>
</body>
</html>
Update
It seems that bintray was "sunsetted", so I'm not expecting it to come back. I've replaced jcenter() with mavenCentral() and it seemed to help, but I still get errors like the following:
* What went wrong:
Could not determine the dependencies of task ':app:mergeDebugAssets'.
> Could not resolve all task dependencies for configuration ':app:debugRuntimeClasspath'.
   > Could not find com.facebook.yoga:proguard-annotations:1.14.1.
     Searched in the following locations:
       - file:/home/dara/.m2/repository/com/facebook/yoga/proguard-annotations/1.14.1/proguard-annotations-1.14.1.pom
       - file:/home/dara/DAD/rn-app/node_modules/react-native/android/com/facebook/yoga/proguard-annotations/1.14.1/proguard-annotations-1.14.1.pom
       - file:/home/dara/DAD/rn-app/node_modules/jsc-android/dist/com/facebook/yoga/proguard-annotations/1.14.1/proguard-annotations-1.14.1.pom
       - https://dl.google.com/dl/android/maven2/com/facebook/yoga/proguard-annotations/1.14.1/proguard-annotations-1.14.1.pom
       - https://repo.maven.apache.org/maven2/com/facebook/yoga/proguard-annotations/1.14.1/proguard-annotations-1.14.1.pom
       - https://www.jitpack.io/com/facebook/yoga/proguard-annotations/1.14.1/proguard-annotations-1.14.1.pom
       - https://maven.google.com/com/facebook/yoga/proguard-annotations/1.14.1/proguard-annotations-1.14.1.pom
       - https://sdk.squareup.com/public/android/com/facebook/yoga/proguard-annotations/1.14.1/proguard-annotations-1.14.1.pom
     Required by:
         project :app > com.facebook.react:react-native:0.63.4
   > Could not find com.facebook.fbjni:fbjni-java-only:0.0.3.
     Searched in the following locations:
       - file:/home/dara/.m2/repository/com/facebook/fbjni/fbjni-java-only/0.0.3/fbjni-java-only-0.0.3.pom
       - file:/home/dara/DAD/rn-app/node_modules/react-native/android/com/facebook/fbjni/fbjni-java-only/0.0.3/fbjni-java-only-0.0.3.pom
       - file:/home/dara/DAD/rn-app/node_modules/jsc-android/dist/com/facebook/fbjni/fbjni-java-only/0.0.3/fbjni-java-only-0.0.3.pom
       - https://dl.google.com/dl/android/maven2/com/facebook/fbjni/fbjni-java-only/0.0.3/fbjni-java-only-0.0.3.pom
       - https://repo.maven.apache.org/maven2/com/facebook/fbjni/fbjni-java-only/0.0.3/fbjni-java-only-0.0.3.pom
       - https://www.jitpack.io/com/facebook/fbjni/fbjni-java-only/0.0.3/fbjni-java-only-0.0.3.pom
       - https://maven.google.com/com/facebook/fbjni/fbjni-java-only/0.0.3/fbjni-java-only-0.0.3.pom
       - https://sdk.squareup.com/public/android/com/facebook/fbjni/fbjni-java-only/0.0.3/fbjni-java-only-0.0.3.pom
     Required by:
         project :app > com.facebook.react:react-native:0.63.4
I tried to exclude yoga and others in the build file but it doesn't work. I have no idea what to do.
ANSWER
Answered 2021-Dec-01 at 16:46
It works now. I reset my hours of changes back to master and it works. Leaving this here for future people who hit this error: don't trust the bintray status page, just wait. I read somewhere during my research that it will stay up indefinitely, read-only.
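For anyone making the repository swap the question describes, a rough sketch of the build.gradle change (the structure is illustrative; adapt it to your project) looks like this:

// android/build.gradle - replace the sunsetted jcenter() with mavenCentral()
allprojects {
    repositories {
        google()       // Android and Google artifacts
        mavenCentral() // replaces jcenter(), which Bintray shut down
    }
}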
QUESTION
Why URL re-writing is not working when I do not use slash at the end?
Asked 2022-Mar-13 at 20:40
I have a simple ingress configuration file:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /link2/link3/
  name: tut-ingress
  namespace: default
spec:
  rules:
  - host: tutorial.com
    http:
      paths:
      - path: /link1/
        pathType: Prefix
        backend:
          service:
            name: nginx-ingress-tut-service
            port:
              number: 8080
in which requests coming to /link1 or /link1/ are rewritten to /link2/link3/.
When I access it using http://tutorial.com/link1/, I am shown the correct result, but when I access it using http://tutorial.com/link1, I get a 404 Not Found.
The nginx-ingress-tut-service has the following endpoints:
/
/link1
/link2/link3
I am a beginner in the web domain, any help will be appreciated.
When I change it to:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /link2/link3/
  name: tut-ingress
  namespace: default
spec:
  rules:
  - host: tutorial.com
    http:
      paths:
      - path: /link1
        pathType: Prefix
        backend:
          service:
            name: nginx-ingress-tut-service
            port:
              number: 8080
it starts working fine, but can anybody tell me why it is not working with /link1/?
Helpful resources - https://kubernetes.io/docs/concepts/services-networking/ingress/#examples
https://kubernetes.github.io/ingress-nginx/examples/rewrite/
Edit: Please also explain what happens when you write a full HTTP link in nginx.ingress.kubernetes.io/rewrite-target.
ANSWER
Answered 2022-Mar-13 at 20:40
The answer is posted in the comment:
Well, /link1/ is not a prefix of /link1, because a prefix must be the same length as or longer than the target string.
If you have
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /link2/link3/
  name: tut-ingress
  namespace: default
spec:
  rules:
  - host: tutorial.com
    http:
      paths:
      - path: /link1/
        pathType: Prefix
        backend:
          service:
            name: nginx-ingress-tut-service
            port:
              number: 8080
then the string to match will have to have a / character at the end of the path, and everything works correctly. In this situation, if you try to access the link http://tutorial.com/link1 you will get a 404 error, because the ingress was expecting http://tutorial.com/link1/.
For more, you can see examples of the rewrite rule and the documentation about path types:
Each path in an Ingress is required to have a corresponding path type. Paths that do not include an explicit pathType will fail validation. There are three supported path types:
ImplementationSpecific: With this path type, matching is up to the IngressClass. Implementations can treat this as a separate pathType or treat it identically to Prefix or Exact path types.
Exact: Matches the URL path exactly and with case sensitivity.
Prefix: Matches based on a URL path prefix split by /. Matching is case sensitive and done on a path element by element basis. A path element refers to the list of labels in the path split by the / separator. A request is a match for path p if every p is an element-wise prefix of p of the request path.
EDIT: Based on the documentation this should work, but it looks like there is a fresh problem with nginx ingress. The problem is still unresolved. You can use the workaround posted in this topic or try to change your YAML to something similar to this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /link2/link3/
  name: tut-ingress
  namespace: default
spec:
  rules:
  - host: tutorial.com
    http:
      paths:
      - path: /link1(/|$)
        pathType: Prefix
        backend:
          service:
            name: nginx-ingress-tut-service
            port:
              number: 8080
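For comparison (my addition, patterned on the ingress-nginx rewrite example linked above), regex paths are normally paired with the use-regex annotation and a capture group in rewrite-target; the names below mirror the question's setup:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /link2/link3/$2
  name: tut-ingress
  namespace: default
spec:
  rules:
  - host: tutorial.com
    http:
      paths:
      - path: /link1(/|$)(.*)
        pathType: Prefix
        backend:
          service:
            name: nginx-ingress-tut-service
            port:
              number: 8080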
QUESTION
Share media between multiple django(VMs) servers
Asked 2022-Jan-15 at 10:58
We have deployed a django server (nginx/gunicorn/django), but to scale the server there are multiple instances of the same django application running.
Here is the diagram (architecture):
Each blue rectangle is a Virtual Machine.
HAProxy sends all requests to example.com/admin to Server 3; other requests are divided between Server 1 and Server 2 (load balancing).
Old Problem:
Each machine has a media folder, and when the admin uploads something, the uploaded media ends up only on Server 3 (normal users can't upload anything).
We solved this by sending all requests to example.com/media/* to Server 3, and nginx on Server 3 serves all static files and media.
Problem right now
We are also using sorl-thumbnail.
When a request comes in for example.com/, sorl-thumbnail tries to access the media file, but it doesn't exist on this machine because it's on Server 3.
So now all requests to that machine (Server 1 or 2) get a 404 for that media file.
One solution that comes to mind is to make a shared partition between all 3 machines and use it for media. Another solution is to sync all media folders after each upload, but the problem with this is that we have almost 2000 requests per second, and sometimes the sync might not be fast enough, so sorl-thumbnail creates a database record for an empty file and a 404 happens.
Thanks in advance, and sorry for the long question.
ANSWER
Answered 2021-Dec-26 at 19:53
You should use an object store to save and serve your user-uploaded files. django-storages makes the implementation really simple.
If you don't want to use cloud-based AWS S3 or an equivalent, you can host your own on-prem S3-compatible object store with minio.
On your current setup I don't see any easy fix where the number of VMs is dynamic depending on load. If you have deployment automation, then maybe try out rsync so that each VM takes care of syncing files with the other VMs.
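As a rough sketch of that suggestion (the bucket name, endpoint, and credentials below are placeholders, not from the answer), the django-storages S3 backend can point at a self-hosted MinIO endpoint:

# settings.py - route Django media to an S3-compatible store via django-storages
DEFAULT_FILE_STORAGE = "storages.backends.s3boto3.S3Boto3Storage"

AWS_STORAGE_BUCKET_NAME = "media"                   # hypothetical bucket name
AWS_S3_ENDPOINT_URL = "http://minio.internal:9000"  # hypothetical MinIO endpoint
AWS_ACCESS_KEY_ID = "minio-access-key"              # placeholder credentials
AWS_SECRET_ACCESS_KEY = "minio-secret-key"          # placeholder credentials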
QUESTION
Why do I have to edit /etc/hosts just sometimes when using nginx-ingress controller and resources in my local k8s environment?
Asked 2022-Jan-03 at 16:11
Not sure if this is OS-specific, but on my M1 Mac, I'm installing the Nginx controller and the resource example located in the official Quick Start guide for the controller, for Docker Desktop for Mac. The instructions are as follows:
// Create the Ingress
helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx --create-namespace

// Pre-flight checks
kubectl get pods --namespace=ingress-nginx

kubectl wait --namespace ingress-nginx \
  --for=condition=ready pod \
  --selector=app.kubernetes.io/component=controller \
  --timeout=120s

// and finally, deploy and test the resource.
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo

kubectl create ingress demo-localhost --class=nginx \
  --rule=demo.localdev.me/*=demo:80

kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
I noticed that the instructions did not mention having to edit the /etc/hosts file, which I found strange. And when I tested it by putting demo.localdev.me:8080 into the browser, it did work as expected!
But why? What happened that an application inside of a Docker container was able to influence behavior on my host machine and intercept its web traffic without me having to edit the /etc/hosts file?
For my next test, I re-executed everything above with the only change being that I switched demo to demo2. That did not work. I did have to go into /etc/hosts and add demo2.localdev.me 127.0.0.1 as an entry. After that, both demo and demo2 worked as expected.
Why is this happening? Not having to edit the /etc/hosts file is appealing. Is there a way to configure things so that all such hostnames work? And how would I turn it off if I needed to route traffic back out to the internet rather than to my local machine?
ANSWER
Answered 2022-Jan-03 at 16:11
I replicated your issue and got similar behaviour on the Ubuntu 20.04.3 OS.
The problem is that the NGINX Ingress controller's local testing guide did not mention that the demo.localdev.me address points to 127.0.0.1 - that's why it works without editing the /etc/hosts or /etc/resolv.conf file. It probably works like the *.localtest.me addresses:
Here's how it works. The entire domain name localtest.me (and all wildcard entries) points to 127.0.0.1. So without any changes to your hosts file you can immediately start testing with a local URL.
There is also a good and detailed explanation in this topic.
So Docker Desktop / Kubernetes change nothing on your host.
The address demo2.localdev.me also points to 127.0.0.1, so it should work for you as well - and as I tested in my environment, the behaviour was exactly the same as for demo.localdev.me.
You may run the nslookup command and check which IP address a specific domain name points to, for example:
user@shell:~$ nslookup demo2.localdev.me
Server:         127.0.0.53
Address:        127.0.0.53#53

Non-authoritative answer:
Name:   demo2.localdev.me
Address: 127.0.0.1
You may try some tests with other hostnames, existing or non-existing ones; of course those won't work, because the addresses won't resolve to 127.0.0.1 and thus won't be forwarded to the Ingress NGINX controller. In these cases, you can edit /etc/hosts (as you did) or use curl's -H flag, for example:
I created the Ingress using the following command:
kubectl create ingress demo-localhost --class=nginx --rule=facebook.com/*=demo:80
Then I started port-forwarding and ran:
user@shell:~$ curl -H "Host: facebook.com" localhost:8080
<html><body><h1>It works!</h1></body></html>
You wrote:
For my next test, I re-executed everything above with the only change being that I switched demo to demo2. That did not work. I did have to go into /etc/hosts and add demo2.localdev.me 127.0.0.1 as an entry. After that both demo and demo2 work as expected.
Well, that sounds strange. Could you run nslookup demo2.localdev.me without adding an entry in /etc/hosts and then check? Are you sure you performed the correct query before, and that you did not change something on the Kubernetes configuration side? As I tested (and presented above), it should work exactly the same as for demo.localdev.me.
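As an aside not covered in the original answer: if you want arbitrary hostnames to work without editing /etc/hosts, two common approaches are curl's --resolve flag (which pins a name to an IP for a single request) and a local wildcard DNS rule. A sketch, assuming dnsmasq is installed and reads /etc/dnsmasq.d/ (that file path is an assumption and varies by distro):

# Pin demo2.localdev.me to 127.0.0.1 for this one request; no /etc/hosts edit:
curl --resolve demo2.localdev.me:8080:127.0.0.1 http://demo2.localdev.me:8080

# Or resolve every *.localdev.me name locally with one dnsmasq wildcard rule:
echo 'address=/localdev.me/127.0.0.1' | sudo tee /etc/dnsmasq.d/localdev.conf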
QUESTION
How to create index.html using dockerfile?
Asked 2021-Dec-23 at 14:40
I'm trying to create a simple static website using nginx and want to have everything created by the Dockerfile. The problem is that whenever I try to create an index.html file, the build fails with an error. I even tested it: it works with "index.htm" but not with the correct filename.
FROM centos:7
# update and install nginx section
RUN yum update -y
RUN yum install -y epel-release
RUN yum install -y nginx
# create path and add index.html
WORKDIR /usr/share/nginx/html

# this one works with no issue
RUN touch index.htm
# this one will fail
RUN touch index.html

EXPOSE 80/tcp

CMD ["nginx", "-g", "daemon off;"]
and this is the error output:
majid@DESKTOP-39CBKO0:~/nginx_simple_web$ docker build -t simple-web:v1 .
[+] Building 3.8s (11/11) FINISHED
 => [internal] load build definition from Dockerfile            0.0s
 => => transferring dockerfile: 381B                            0.0s
 => [internal] load .dockerignore                               0.0s
 => => transferring context: 2B                                 0.0s
 => [internal] load metadata for docker.io/library/centos:7     3.4s
 => [auth] library/centos:pull token for registry-1.docker.io   0.0s
 => [1/7] FROM docker.io/library/centos:7@sha256:9d4bcbbb213dfd745b58be38b13b996ebb5ac315fe75711bd618426a630  0.0s
 => CACHED [2/7] RUN yum update -y                              0.0s
 => CACHED [3/7] RUN yum install -y epel-release                0.0s
 => CACHED [4/7] RUN yum install -y nginx                       0.0s
 => CACHED [5/7] WORKDIR /usr/share/nginx/html                  0.0s
 => CACHED [6/7] RUN touch index.htm                            0.0s
 => ERROR [7/7] RUN touch index.html                            0.4s
------
 > [7/7] RUN touch index.html:
#11 0.357 touch: cannot touch 'index.html': No such file or directory
------
executor failed running [/bin/sh -c touch index.html]: exit code: 1
majid@DESKTOP-39CBKO0:~/nginx_simple_web$
ANSWER
Answered 2021-Dec-23 at 11:45
You should create the file on the host and use a
COPY index.html index.html
command in the Dockerfile to copy it into the image at build time, or use an
echo " " > index.html
command to create the file inside the image.
QUESTION
Wrong PHP version used when installing composer with Alpine's apk command
Asked 2021-Dec-23 at 11:20
I've got a Docker image running PHP 8.0 and want to upgrade to 8.1. I have updated the image to run PHP 8.1 and want to update the dependencies in it.
The new image derives from php:8.1.1-fpm-alpine3.15. I've updated composer.json and changed require.php to ^8.1, but ran into the following message when running composer upgrade:
Root composer.json requires php ^8.1 but your php version (8.0.14) does not satisfy that requirement.
What I find puzzling is that Composer incorrectly identifies the PHP version. I used two commands to determine that:
which php              # returns only /usr/local/bin/php
/usr/local/bin/php -v  # returns PHP 8.1.1 (cli) (built: Dec 18 2021 01:38:53) (NTS)
So far I've tried:
- checking php -v
- clearing the Composer cache
- rebuilding the image
Composer version 2.1.12 2021-11-09 16:02:04
composer check-platform-reqs | grep php
# returns:
# ...
# php 8.0.14   project/name requires php (^8.1)   failed
All of the commands above (excluding the docker commands) are being run in the container.
Dockerfile:
FROM php:8.1.1-fpm-alpine3.15

ENV TZ=Europe/London

# Install php lib deps
RUN apk update && apk upgrade
RUN apk add --update libzip-dev \
    zip \
    unzip \
    libpng-dev \
    nginx \
    supervisor \
    git \
    curl \
    shadow \
    composer \
    yarn && rm -rf /var/cache/apk/*

RUN usermod -u 1000 www-data
RUN usermod -d /var/www www-data

RUN mkdir -p /run/nginx && chown www-data:www-data /run/nginx

ENV SUPERCRONIC_URL=https://github.com/aptible/supercronic/releases/download/v0.1.9/supercronic-linux-amd64 \
    SUPERCRONIC=supercronic-linux-amd64 \
    SUPERCRONIC_SHA1SUM=5ddf8ea26b56d4a7ff6faecdd8966610d5cb9d85

RUN curl -fsSLO "$SUPERCRONIC_URL" \
    && echo "${SUPERCRONIC_SHA1SUM}  ${SUPERCRONIC}" | sha1sum -c - \
    && chmod +x "$SUPERCRONIC" \
    && mv "$SUPERCRONIC" "/usr/local/bin/${SUPERCRONIC}" \
    && ln -s "/usr/local/bin/${SUPERCRONIC}" /usr/local/bin/supercronic

# Install and enable php extensions
RUN docker-php-ext-install sockets mysqli pdo_mysql zip gd bcmath > /dev/null

ARG ENV="development"
# Xdebug install
RUN if [ $ENV = "development" ] ; then \
    apk add --no-cache $PHPIZE_DEPS; \
    pecl install xdebug > /dev/null; \
    docker-php-ext-enable xdebug; \
    echo "error_reporting = E_ALL" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "display_startup_errors = On" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "display_errors = On" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    echo "xdebug.remote_enable=1" >> /usr/local/etc/php/conf.d/docker-php-ext-xdebug.ini; \
    fi ;

# Setup working directory
RUN chown -R www-data:www-data /var/www
WORKDIR /var/www
USER www-data


# Install dependencies
#RUN if [ $ENV = "development" ] ; then \
##    composer install -n; \
#    else \
##    composer install -n --no-dev; \
#    fi ;

# Generate doctrine proxies
ANSWER
Answered 2021-Dec-23 at 11:20
Huh. This surprised me a bit.
composer is correctly reporting the PHP version it's using. The problem is that it's not using the "correct" PHP interpreter.
The issue arises because of how you are installing composer.
Apparently, by doing apk add composer, another version of PHP gets installed (you can find it at /usr/bin/php8; this is the one on version 8.0.14).
Instead of letting apk install Composer for you, you can do it manually. There is not much to it in any case; there is no need to go through the package manager, particularly since PHP was not installed via the package manager on your base image.
I've just removed the line containing composer from the apk add --update command, and added this somewhere below:
RUN php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');" && \
    php -r "if (hash_file('sha384', 'composer-setup.php') === '906a84df04cea2aa72f40b5f787e49f22d4c2f19492ac310e8cba5b96ac8b64115ac402c8cd292b8a03482574915d1a8') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;" && \
    php composer-setup.php && \
    php -r "unlink('composer-setup.php');" && \
    mv composer.phar /usr/local/bin/composer;
You could also simply download the latest Composer PHAR file from here and add it to the image, depending on how you want to go; two sketches follow.
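Two hedged sketches of that route, assuming the getcomposer.org stable download path and the official composer image on Docker Hub:

# Option 1: fetch the latest stable PHAR directly
RUN curl -fsSL https://getcomposer.org/download/latest-stable/composer.phar \
      -o /usr/local/bin/composer \
 && chmod +x /usr/local/bin/composer

# Option 2: copy the binary from the official composer image (multi-stage copy)
COPY --from=composer:2 /usr/bin/composer /usr/local/bin/composer

Either way, Composer runs on the image's own /usr/local/bin/php, so no second PHP gets pulled in.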
Now there is a single PHP version, and composer will run correctly on PHP 8.1.1.
QUESTION
Docker: COPY failed: file not found in build context (Dockerfile)
Asked 2021-Dec-21 at 14:57
I'd like to instruct Docker to COPY my certificates from the local /etc/ folder on my Ubuntu machine.
I get the error:
COPY failed: file not found in build context or excluded by .dockerignore: stat etc/.auth_keys/fullchain.pem: file does not exist
I have not excluded anything in .dockerignore. How can I do this?
Dockerfile:
FROM nginx:1.21.3-alpine

RUN rm /etc/nginx/conf.d/default.conf
RUN mkdir /etc/nginx/ssl
COPY nginx.conf /etc/nginx/conf.d
COPY ./etc/.auth_keys/fullchain.pem /etc/nginx/ssl/
COPY ./etc/.auth_keys/privkey.pem /etc/nginx/ssl/

WORKDIR /usr/src/app
I have also tried it without the dot --> same error:
COPY /etc/.auth_keys/fullchain.pem /etc/nginx/ssl/
COPY /etc/.auth_keys/privkey.pem /etc/nginx/ssl/
Placing the .auth_keys folder next to the Dockerfile works, but is not desirable:
COPY /.auth_keys/fullchain.pem /etc/nginx/ssl/
COPY /.auth_keys/privkey.pem /etc/nginx/ssl/
ANSWER
Answered 2021-Nov-05 at 11:42
The Docker build context is the directory the Dockerfile is located in, and COPY can only reach files inside it. If you want to build an image, that is one of the restrictions you have to face.
In this documentation you can see how contexts can be switched, but to keep it simple, just consider the same directory to be the context. Note: this also doesn't work with symbolic links.
So your observation was correct and you need to place the files you need to copy in the same directory.
Alternatively, if you don't need to copy the files but still want them available at runtime, you could opt for a mount. I can imagine this not working in your case, because you likely need the files at startup of the container; sketches of both options follow.
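Hedged sketches of both options, reusing the paths from the question (the image tag my-nginx is just a placeholder):

# Build with /etc as the context so the key folder is inside it;
# COPY paths in the Dockerfile are then relative to /etc,
# e.g. COPY .auth_keys/fullchain.pem /etc/nginx/ssl/
docker build -f ./Dockerfile -t my-nginx /etc

# Or skip baking the keys into the image and mount them read-only at runtime:
docker run -d -v /etc/.auth_keys:/etc/nginx/ssl:ro my-nginx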
Community Discussions contain sources that include the Stack Exchange Network.
Tutorials and Learning Resources in Nginx
Tutorials and Learning Resources are not available at this moment for Nginx