Explore all Scraper open source software, libraries, packages, source code, cloud functions and APIs.

Popular New Releases in Scraper

you-get

0.4.1545

instagram-scraper

v1.10.3

ferret

Fixed memory leak

instaloader

Version 4.9

scrape-it

5.3.2

Popular Libraries in Scraper

you-get

by soimort (Python)

41643 stars | NOASSERTION

:arrow_double_down: Dumb downloader that scrapes the web

requests-html

by psf (Python)

12251 stars | MIT

Pythonic HTML Parsing for Humans™

twint

by twintproject (Python)

11366 stars | MIT

An advanced Twitter scraping & OSINT tool written in Python that doesn't use Twitter's API, allowing you to scrape a user's followers, following, Tweets and more while evading most API limitations.

newspaper

by codelucas (Python)

11277 stars | NOASSERTION

News, full-text, and article metadata extraction in Python 3. Advanced docs:

Goutte

by FriendsOfPHP (PHP)

8556 stars | MIT

Goutte, a simple PHP Web Scraper

portia

by scrapinghub (Python)

8200 stars | NOASSERTION

Visual scraping for Scrapy

instagram-scraper

by arc298 (Python)

5727 stars | Unlicense

Scrapes an Instagram user's photos and videos

x-ray

by matthewmueller (JavaScript)

5563 stars | MIT

The next web scraper. See through the <html> noise.

ferret

by MontFerret (Go)

4894 stars | Apache-2.0

Declarative web scraping

Trending New libraries in Scraper

autoscraper

by alirezamika (Python)

3565 stars | MIT

A Smart, Automatic, Fast and Lightweight Web Scraper for Python

Emby.Plugins.JavScraper

by JavScraper (C#)

1086 stars

A Japanese movie scraper plugin for Emby/Jellyfin that can fetch film information from certain websites.

secret-agent

by ulixee (TypeScript)

444 stars | MIT

The web browser that's built for scraping.

coronadatascraper

by covidatlas (HTML)

372 stars | BSD-2-Clause

COVID-19 Coronavirus data scraped from government and curated data sources.

tinking

by baptisteArno (TypeScript)

348 stars | GPL-3.0

🧶 Extract data from any website without code, just clicks.

Scrapera

by DarshanDeshpande (Python)

278 stars | MIT

A universal package of scraper scripts for humans

Scweet

by Altimis (Python)

275 stars | MIT

A simple and unlimited Twitter scraper: scrape tweets, likes, retweets, following, followers, user info, images...

PHPScraper

by spekulatius (PHP)

208 stars | GPL-3.0

PHP Scraper - a highly opinionated web interface for PHP

JavSP

by Yuukiy (Python)

199 stars | GPL-3.0

An AV metadata scraper that aggregates data from multiple sites.

Top Authors in Scraper

1. scrapehero (9 Libraries, 225 stars)

2. prakash-simhandri (7 Libraries, 54 stars)

3. megadose (6 Libraries, 216 stars)

4. sinkaroid (6 Libraries, 42 stars)

5. PacktPublishing (6 Libraries, 207 stars)

6. tvl (5 Libraries, 19 stars)

7. projectivemotion (5 Libraries, 36 stars)

8. maksimKorzh (5 Libraries, 27 stars)

9. yogesshraj (5 Libraries, 12 stars)

10. hansputera (5 Libraries, 22 stars)


Trending Kits in Scraper


Python is a popular programming language that is used in many different fields, including data science, web development, and artificial intelligence.


It was first released in 1991 and has since become one of the most popular programming languages due to its simplicity and ease of use. This kit includes a simple news scraper that creates a CSV of the top ten news stories in various categories, which can be very helpful in data monitoring, extraction, and machine learning applications.

For a detailed tutorial on installing & executing the solution, as well as learning resources including training & certification opportunities, please visit the OpenWeaver Community.
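As a rough illustration of what such a news scraper does, here is a minimal stdlib-only sketch that extracts headlines from already-fetched HTML and writes them to CSV. The `h2.headline` markup, function names, and category mapping are assumptions for illustration, not the kit's actual code.

```python
import csv
import io
from html.parser import HTMLParser


class HeadlineParser(HTMLParser):
    """Collects the text of <h2 class="headline"> elements (hypothetical markup)."""

    def __init__(self):
        super().__init__()
        self._in_headline = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "headline") in attrs:
            self._in_headline = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self._in_headline = False

    def handle_data(self, data):
        if self._in_headline and data.strip():
            self.headlines.append(data.strip())


def top_headlines_to_csv(pages, limit=10):
    """pages maps a category name to already-fetched HTML; returns CSV text
    with up to `limit` headlines per category."""
    buf = io.StringIO()
    writer = csv.writer(buf, lineterminator="\n")
    writer.writerow(["category", "headline"])
    for category, html in pages.items():
        parser = HeadlineParser()
        parser.feed(html)
        for headline in parser.headlines[:limit]:
            writer.writerow([category, headline])
    return buf.getvalue()
```

Fetching the category pages themselves (e.g. with `requests`) is left out so the sketch stays self-contained.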

Kit Solution Source

Python Repositories with Example Exercises

Python CLI programs as examples. This list has programs useful both for beginners and for those ready to move on to more advanced material.

Support

If you need help using this kit, you may reach us at the OpenWeaver Community.


Indeed jobs scraper

Scrape jobs from Indeed along with details of the company that posted each job.

The scraper provides the following information:

💬 Job Description

🌐 Company Website

🤵 CEO Name

💼 Display Title

💵 Extracted Salary

🌐 Website

📈 Company Rating

📊 Company Review Count

📅 Relative Time

📍 Location

📅 Pub Date

🛑 Expired

📌 Job Types

📌 Location Count

📜 Taxonomy Attributes

📉 Ranking Scores

📈 Indeed Apply

🌐 Third Party Apply
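The field list above maps naturally onto a small structured record. A minimal sketch of normalizing a scraped result, assuming hypothetical raw key names rather than Indeed's actual schema:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class JobPosting:
    # A handful of the fields listed above; names are illustrative only.
    display_title: str
    company_website: Optional[str] = None
    extracted_salary: Optional[str] = None
    company_rating: Optional[float] = None
    location: Optional[str] = None
    expired: bool = False


def normalize(raw: dict) -> JobPosting:
    """Map a raw scraped record (hypothetical keys) onto the structured fields."""
    rating = raw.get("companyRating")
    return JobPosting(
        display_title=raw.get("displayTitle", ""),
        company_website=raw.get("companyWebsite"),
        extracted_salary=raw.get("extractedSalary"),
        company_rating=float(rating) if rating else None,
        location=raw.get("location"),
        expired=bool(raw.get("expired", False)),
    )
```

Keeping the normalization step separate from the fetching code makes it easy to test against saved responses.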


XING jobs scraper

Scraping job postings from Xing.com can provide valuable insights for job seekers or companies monitoring the job market. Xing, being a professional networking and career website, hosts numerous job listings across various fields and locations, making it a rich source of employment opportunities.


Linkedin job scraper

Scrape jobs from LinkedIn job search with complete details of the job, company, skills, etc. Supports searching jobs by title, location, company, salary, and more. API endpoints are available to integrate the scraper into any software.
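Since the scraper exposes API endpoints, integration typically amounts to building a query URL with the supported filters. A hedged sketch, assuming a hypothetical local endpoint and parameter names that mirror the search options described above (not a documented API):

```python
from urllib.parse import urlencode


def build_search_url(base="http://localhost:8000/api/jobs", **filters):
    """Build a query URL for a hypothetical local API wrapping the scraper.

    Filter names used here (title, location, company, salary) mirror the
    search options mentioned above and are assumptions for illustration.
    """
    params = {k: v for k, v in filters.items() if v is not None}
    return f"{base}?{urlencode(params)}" if params else base


url = build_search_url(title="data engineer", location="Berlin")
```

The same pattern works for any HTTP client; the query string is then passed to `requests.get` or similar.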

Trending Discussions on Scraper

Microk8s dashboard using nginx-ingress via http not working (Error: `no matches for kind "Ingress" in version "extensions/v1beta1"`)

kubernetes dashboard (web ui) has nothing to display

Python Selenium AWS Lambda Change WebGL Vendor/Renderer For Undetectable Headless Scraper

Enable use of images from the local library on Kubernetes

How do i loop through divs using jsoup

chrome extension: Uncaught TypeError: Cannot read properties of undefined (reading 'onClicked')

How to merge data from object A into object B in Python?

Using pod Anti Affinity to force only 1 pod per node

Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it. -Microk8s

Reading Excel file Using PySpark: Failed to find data source: com.crealytics.spark.excel

QUESTION

Microk8s dashboard using nginx-ingress via http not working (Error: `no matches for kind "Ingress" in version "extensions/v1beta1"`)

Asked 2022-Apr-01 at 07:26

I have microk8s v1.22.2 running on Ubuntu 20.04.3 LTS.

Output from /etc/hosts:

127.0.0.1 localhost
127.0.1.1 main

Excerpt from microk8s status:

addons:
  enabled:
    dashboard            # The Kubernetes dashboard
    ha-cluster           # Configure high availability on the current node
    ingress              # Ingress controller for external access
    metrics-server       # K8s Metrics Server for API access to service metrics

I checked for the running dashboard (kubectl get all --all-namespaces):

NAMESPACE     NAME                                             READY   STATUS    RESTARTS   AGE
kube-system   pod/calico-node-2jltr                            1/1     Running   0          23m
kube-system   pod/calico-kube-controllers-f744bf684-d77hv      1/1     Running   0          23m
kube-system   pod/metrics-server-85df567dd8-jd6gj              1/1     Running   0          22m
kube-system   pod/kubernetes-dashboard-59699458b-pb5jb         1/1     Running   0          21m
kube-system   pod/dashboard-metrics-scraper-58d4977855-94nsp   1/1     Running   0          21m
ingress       pod/nginx-ingress-microk8s-controller-qf5pm      1/1     Running   0          21m

NAMESPACE     NAME                                TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
default       service/kubernetes                  ClusterIP   10.152.183.1     <none>        443/TCP    23m
kube-system   service/metrics-server              ClusterIP   10.152.183.81    <none>        443/TCP    22m
kube-system   service/kubernetes-dashboard        ClusterIP   10.152.183.103   <none>        443/TCP    22m
kube-system   service/dashboard-metrics-scraper   ClusterIP   10.152.183.197   <none>        8000/TCP   22m

NAMESPACE     NAME                                               DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR            AGE
kube-system   daemonset.apps/calico-node                         1         1         1       1            1           kubernetes.io/os=linux   23m
ingress       daemonset.apps/nginx-ingress-microk8s-controller   1         1         1       1            1           <none>                   22m

NAMESPACE     NAME                                        READY   UP-TO-DATE   AVAILABLE   AGE
kube-system   deployment.apps/calico-kube-controllers     1/1     1            1           23m
kube-system   deployment.apps/metrics-server              1/1     1            1           22m
kube-system   deployment.apps/kubernetes-dashboard        1/1     1            1           22m
kube-system   deployment.apps/dashboard-metrics-scraper   1/1     1            1           22m

NAMESPACE     NAME                                                   DESIRED   CURRENT   READY   AGE
kube-system   replicaset.apps/calico-kube-controllers-69d7f794d9     0         0         0       23m
kube-system   replicaset.apps/calico-kube-controllers-f744bf684      1         1         1       23m
kube-system   replicaset.apps/metrics-server-85df567dd8              1         1         1       22m
kube-system   replicaset.apps/kubernetes-dashboard-59699458b         1         1         1       21m
kube-system   replicaset.apps/dashboard-metrics-scraper-58d4977855   1         1         1       21m

I want to expose the microk8s dashboard within my local network to access it through http://main/dashboard/

To do so, I created the following ingress.yaml (nano ingress.yaml):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: public
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - host: main
    http:
      paths:
      - backend:
          serviceName: kubernetes-dashboard
          servicePort: 443
        path: /

Applying the ingress config with kubectl apply -f ingress.yaml gave the following error:

error: unable to recognize "ingress.yaml": no matches for kind "Ingress" in version "extensions/v1beta1"

Help would be much appreciated, thanks!

Update: @harsh-manvar pointed out a mismatch in the config version. I have rewritten ingress.yaml to a very stripped-down version:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kube-system
spec:
  rules:
  - http:
      paths:
      - path: /dashboard
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443

Applying this works. Also, the ingress rule gets created.

NAMESPACE     NAME        CLASS    HOSTS   ADDRESS     PORTS   AGE
kube-system   dashboard   public   *       127.0.0.1   80      11m

However, when I access the dashboard through http://<ip-of-kubernetes-master>/dashboard, I get a 400 error.

Log from the ingress controller:

192.168.0.123 - - [10/Oct/2021:21:38:47 +0000] "GET /dashboard HTTP/1.1" 400 54 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/94.0.4606.71 Safari/537.36" 466 0.002 [kube-system-kubernetes-dashboard-443] [] 10.1.76.3:8443 48 0.000 400 ca0946230759edfbaaf9d94f3d5c959a

Does the dashboard also need to be exposed using the microk8s proxy? I thought the ingress controller would take care of this, or did I misunderstand this?

ANSWER

Answered 2021-Oct-10 at 18:29

It's due to a mismatch in the Ingress API version.

You are running Kubernetes v1.22.2, while the API version in your YAML is the old one: your manifest uses extensions/v1beta1, which was removed in v1.22.

You need to change the apiVersion to match the Ingress API version your Kubernetes release serves; networking.k8s.io/v1 has been available since v1.19 and also works in v1.22.

A good example: https://kubernetes.io/docs/tasks/access-application-cluster/ingress-minikube/

Example:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        pathType: Prefix
        backend:
          service:
            name: test
            port:
              number: 80
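Converting an old manifest by hand is error-prone, so as an illustration of exactly which fields move between the two API versions, here is a minimal Python sketch. The `convert_*` helpers are our own, not a kubectl feature; in practice you would usually just edit the YAML directly.

```python
# Sketch (our own helpers): mechanically rewrite the old extensions/v1beta1
# Ingress backend shape into the networking.k8s.io/v1 shape, mirroring the
# two manifests shown in this thread.

def convert_backend(old):
    """{serviceName, servicePort} -> {service: {name, port: {number}}}"""
    return {
        "service": {
            "name": old["serviceName"],
            "port": {"number": old["servicePort"]},
        }
    }

def convert_ingress(manifest):
    """Return the manifest with v1 apiVersion, pathType, and v1 backends.

    Mutates nested rule/path dicts in place, which is fine for a
    one-shot conversion script.
    """
    manifest = dict(manifest)
    manifest["apiVersion"] = "networking.k8s.io/v1"
    for rule in manifest.get("spec", {}).get("rules", []):
        for path in rule.get("http", {}).get("paths", []):
            # networking.k8s.io/v1 makes pathType mandatory
            path.setdefault("pathType", "Prefix")
            backend = path.get("backend", {})
            if "serviceName" in backend:
                path["backend"] = convert_backend(backend)
    return manifest
```

To confirm which Ingress API groups your cluster actually serves, `kubectl api-versions` lists them; on v1.22 only networking.k8s.io/v1 will appear for Ingress.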

Source https://stackoverflow.com/questions/69517855

QUESTION

kubernetes dashboard (web ui) has nothing to display

Asked 2022-Mar-28 at 13:46

After I deployed the web UI (Kubernetes dashboard), I logged in to the dashboard but found nothing there; instead, I got a list of errors in the notifications.

statefulsets.apps is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "statefulsets" in API group "apps" in the namespace "default" 2 minutes ago
error
replicationcontrollers is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "replicationcontrollers" in API group "" in the namespace "default" 2 minutes ago
error
replicasets.apps is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "replicasets" in API group "apps" in the namespace "default" 2 minutes ago
error
deployments.apps is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "deployments" in API group "apps" in the namespace "default" 2 minutes ago
error
jobs.batch is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "jobs" in API group "batch" in the namespace "default" 2 minutes ago
error
events is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "events" in API group "" in the namespace "default" 2 minutes ago
error
pods is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "pods" in API group "" in the namespace "default" 2 minutes ago
error
daemonsets.apps is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "daemonsets" in API group "apps" in the namespace "default" 2 minutes ago
error
cronjobs.batch is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "cronjobs" in API group "batch" in the namespace "default" 2 minutes ago
error
namespaces is forbidden: User "system:serviceaccount:kubernetes-dashboard:default" cannot list resource "namespaces" in API group "" at the cluster scope

Here are all my pods:

NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE   IP               NODE             NOMINATED NODE   READINESS GATES
kube-system            calico-kube-controllers-58497c65d5-828dm     1/1     Running   0          64m   10.244.192.193   master-node1     <none>           <none>
kube-system            calico-node-dblzp                            1/1     Running   0          17m   157.245.57.140   cluster3-node1   <none>           <none>
kube-system            calico-node-dwdvh                            1/1     Running   1          49m   157.245.57.139   cluster2-node2   <none>           <none>
kube-system            calico-node-gskr2                            1/1     Running   0          17m   157.245.57.133   cluster1-node2   <none>           <none>
kube-system            calico-node-jm5rd                            1/1     Running   0          17m   157.245.57.144   cluster4-node2   <none>           <none>
kube-system            calico-node-m8htd                            1/1     Running   0          17m   157.245.57.141   cluster3-node2   <none>           <none>
kube-system            calico-node-n7d44                            1/1     Running   0          64m   157.245.57.146   master-node1     <none>           <none>
kube-system            calico-node-wblpr                            1/1     Running   0          17m   157.245.57.135   cluster2-node1   <none>           <none>
kube-system            calico-node-wbrzf                            1/1     Running   1          29m   157.245.57.136   cluster1-node1   <none>           <none>
kube-system            calico-node-wqwkj                            1/1     Running   0          17m   157.245.57.142   cluster4-node1   <none>           <none>
kube-system            coredns-78fcd69978-cnzxv                     1/1     Running   0          64m   10.244.192.194   master-node1     <none>           <none>
kube-system            coredns-78fcd69978-f4ln8                     1/1     Running   0          64m   10.244.192.195   master-node1     <none>           <none>
kube-system            etcd-master-node1                            1/1     Running   1          64m   157.245.57.146   master-node1     <none>           <none>
kube-system            kube-apiserver-master-node1                  1/1     Running   1          64m   157.245.57.146   master-node1     <none>           <none>
kube-system            kube-controller-manager-master-node1         1/1     Running   1          64m   157.245.57.146   master-node1     <none>           <none>
kube-system            kube-proxy-2b5bz                             1/1     Running   0          17m   157.245.57.144   cluster4-node2   <none>           <none>
kube-system            kube-proxy-cslwc                             1/1     Running   3          49m   157.245.57.139   cluster2-node2   <none>           <none>
kube-system            kube-proxy-hlvxc                             1/1     Running   0          17m   157.245.57.140   cluster3-node1   <none>           <none>
kube-system            kube-proxy-kkdqn                             1/1     Running   0          17m   157.245.57.142   cluster4-node1   <none>           <none>
kube-system            kube-proxy-sm7nq                             1/1     Running   0          17m   157.245.57.133   cluster1-node2   <none>           <none>
kube-system            kube-proxy-wm42s                             1/1     Running   0          64m   157.245.57.146   master-node1     <none>           <none>
kube-system            kube-proxy-wslxd                             1/1     Running   0          17m   157.245.57.141   cluster3-node2   <none>           <none>
kube-system            kube-proxy-xnh24                             1/1     Running   0          17m   157.245.57.135   cluster2-node1   <none>           <none>
kube-system            kube-proxy-zvsqf                             1/1     Running   1          29m   157.245.57.136   cluster1-node1   <none>           <none>
kube-system            kube-scheduler-master-node1                  1/1     Running   1          64m   157.245.57.146   master-node1     <none>           <none>
kubernetes-dashboard   dashboard-metrics-scraper-856586f554-c4thn   1/1     Running   0          14m   10.244.14.65     cluster2-node2   <none>           <none>
kubernetes-dashboard   kubernetes-dashboard-67484c44f6-hwvj5        1/1     Running   0          14m   10.244.213.65    cluster1-node1   <none>           <none>

Here are all my nodes:

NAME             STATUS   ROLES                  AGE   VERSION
cluster1-node1   Ready    <none>                 29m   v1.22.1
cluster1-node2   Ready    <none>                 17m   v1.22.1
cluster2-node1   Ready    <none>                 17m   v1.22.1
cluster2-node2   Ready    <none>                 49m   v1.22.1
cluster3-node1   Ready    <none>                 17m   v1.22.1
cluster3-node2   Ready    <none>                 17m   v1.22.1
cluster4-node1   Ready    <none>                 17m   v1.22.1
cluster4-node2   Ready    <none>                 17m   v1.22.1
master-node1     Ready    control-plane,master   65m   v1.22.1

I suspect there is a misconfiguration in the kubernetes-dashboard namespace, so it cannot access the cluster.

ANSWER

Answered 2021-Aug-24 at 14:00

I have recreated the situation following the attached tutorial and it works for me. Make sure that you are logging in properly:

To protect your cluster data, Dashboard deploys with a minimal RBAC configuration by default. Currently, Dashboard only supports logging in with a Bearer Token. To create a token for this demo, you can follow our guide on creating a sample user.

Warning: The sample user created in the tutorial will have administrative privileges and is for educational purposes only.

You can also create an admin role:

45kube-system            kube-scheduler-master-node1                  1/1     Running   1          64m   157.245.57.146   master-node1     &lt;none&gt;           &lt;none&gt;
46kubernetes-dashboard   dashboard-metrics-scraper-856586f554-c4thn   1/1     Running   0          14m   10.244.14.65     cluster2-node2   &lt;none&gt;           &lt;none&gt;
47kubernetes-dashboard   kubernetes-dashboard-67484c44f6-hwvj5        1/1     Running   0          14m   10.244.213.65    cluster1-node1   &lt;none&gt;           &lt;none&gt;
48
49NAME             STATUS   ROLES                  AGE   VERSION
50cluster1-node1   Ready    &lt;none&gt;                 29m   v1.22.1
51cluster1-node2   Ready    &lt;none&gt;                 17m   v1.22.1
52cluster2-node1   Ready    &lt;none&gt;                 17m   v1.22.1
53cluster2-node2   Ready    &lt;none&gt;                 49m   v1.22.1
54cluster3-node1   Ready    &lt;none&gt;                 17m   v1.22.1
55cluster3-node2   Ready    &lt;none&gt;                 17m   v1.22.1
56cluster4-node1   Ready    &lt;none&gt;                 17m   v1.22.1
57cluster4-node2   Ready    &lt;none&gt;                 17m   v1.22.1
58master-node1     Ready    control-plane,master   65m   v1.22.1
59kubectl create clusterrolebinding serviceaccounts-cluster-admin \
60  --clusterrole=cluster-admin \
61  --group=system:serviceaccounts
62

However, be aware that this is a potentially very dangerous solution: it grants cluster-admin (effectively root) permissions to every service account in the cluster. You should use this method only for learning and demonstration purposes.
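A narrower grant scoped to a single namespace is usually much safer. A sketch, assuming a hypothetical `demo` namespace and its `default` service account (the built-in `edit` ClusterRole is used here only as an example):

```
kubectl create rolebinding demo-sa-edit \
  --clusterrole=edit \
  --serviceaccount=demo:default \
  --namespace=demo
```

This binds the `edit` role to one service account in one namespace, instead of handing cluster-admin to every service account everywhere.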

You can read more about this solution here and more about RBAC authorization.

See also this question.

Source https://stackoverflow.com/questions/68885798

QUESTION

Python Selenium AWS Lambda Change WebGL Vendor/Renderer For Undetectable Headless Scraper

Asked 2022-Mar-21 at 20:19
Concept:

Using AWS Lambda functions with Python and Selenium, I want to create an undetectable headless Chrome scraper by passing a headless Chrome test. I check the undetectability of my headless scraper by opening the test page and taking a screenshot. I ran this test both in a local IDE and on a Lambda server.


Implementation:

I will be using a python library called selenium-stealth and will follow their basic configuration:

stealth(driver,
        languages=["en-US", "en"],
        vendor="Google Inc.",
        platform="Win32",
        webgl_vendor="Intel Inc.",
        renderer="Intel Iris OpenGL Engine",
        fix_hairline=True,
        )

I implemented this configuration on a Local IDE as well as an AWS Lambda Server to compare the results.


Local IDE:

Found below are the test results running in a local IDE (screenshot in the original post).


Lambda Server:

When I run this on a Lambda server, both the WebGL Vendor and Renderer are blank (screenshot in the original post).

I even tried to manually change the WebGL Vendor/Renderer using the following JavaScript command:

stealth(driver,
        languages=["en-US", "en"],
        vendor="Google Inc.",
        platform="Win32",
        webgl_vendor="Intel Inc.",
        renderer="Intel Iris OpenGL Engine",
        fix_hairline=True,
        )
driver.execute_cdp_cmd('Page.addScriptToEvaluateOnNewDocument', {"source": "WebGLRenderingContext.prototype.getParameter = function(parameter) {if (parameter === 37445) {return 'VENDOR_INPUT';}if (parameter === 37446) {return 'RENDERER_INPUT';}return getParameter(parameter);};"})

Then I thought that maybe something was wrong with the parameter number. I ran the command without the if statement, but the same thing happened: it worked in my local IDE but had no effect on an AWS Lambda server.
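For context, the magic numbers in that script are the `UNMASKED_VENDOR_WEBGL` (0x9245 = 37445) and `UNMASKED_RENDERER_WEBGL` (0x9246 = 37446) constants from the `WEBGL_debug_renderer_info` extension. A sketch that builds the same override with named constants, and that saves the original `getParameter` before patching (the snippet above references `getParameter` without capturing it first, so non-spoofed queries would fail):

```python
# Sketch: build the CDP injection script with named WebGL constants.
# The decimal values 37445/37446 come from WEBGL_debug_renderer_info.
UNMASKED_VENDOR_WEBGL = 0x9245    # 37445
UNMASKED_RENDERER_WEBGL = 0x9246  # 37446

def webgl_override_source(vendor: str, renderer: str) -> str:
    """Return JS that spoofs the unmasked vendor/renderer while keeping
    a reference to the original getParameter for all other queries."""
    return (
        "const origGetParameter = WebGLRenderingContext.prototype.getParameter;"
        "WebGLRenderingContext.prototype.getParameter = function(parameter) {"
        f"  if (parameter === {UNMASKED_VENDOR_WEBGL}) return '{vendor}';"
        f"  if (parameter === {UNMASKED_RENDERER_WEBGL}) return '{renderer}';"
        "  return origGetParameter.call(this, parameter);"
        "};"
    )

# Usage with a Selenium `driver` (not constructed in this sketch):
# driver.execute_cdp_cmd('Page.addScriptToEvaluateOnNewDocument',
#     {"source": webgl_override_source('Intel Inc.', 'Intel Iris OpenGL Engine')})
```

Whether the override takes effect still depends on the environment actually exposing a WebGL context, which is the crux of the Lambda problem described here.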

Simply Put:

Is it possible to change the WebGL Vendor/Renderer on AWS Lambda? From my attempts, it seems there is no way to do so. I have also submitted this issue on the selenium-stealth GitHub repository.

ANSWER

Answered 2021-Dec-18 at 02:01
WebGL

WebGL is a cross-platform, open web standard for a low-level 3D graphics API based on OpenGL ES, exposed to ECMAScript via the HTML5 Canvas element. At its core, WebGL is a shader-based API using GLSL, with constructs that are semantically similar to those of the underlying OpenGL ES API. It follows the OpenGL ES specification, with some exceptions to accommodate memory-managed languages such as JavaScript. WebGL 1.0 exposes the OpenGL ES 2.0 feature set; WebGL 2.0 exposes the OpenGL ES 3.0 API.

Now, with the availability of selenium-stealth, building an undetectable scraper using a Selenium-driven, ChromeDriver-initiated browsing context has become much easier.


selenium-stealth

selenium-stealth is a Python package that helps prevent detection; it tries to make Python Selenium more stealthy. However, as of now selenium-stealth only supports Selenium with Chrome.

  • Code Block:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.chrome.service import Service
from selenium_stealth import stealth

options = Options()
options.add_argument("start-maximized")
options.add_experimental_option("excludeSwitches", ["enable-automation"])
options.add_experimental_option('useAutomationExtension', False)
s = Service('C:\\BrowserDrivers\\chromedriver.exe')
driver = webdriver.Chrome(service=s, options=options)

# Selenium Stealth settings
stealth(driver,
      languages=["en-US", "en"],
      vendor="Google Inc.",
      platform="Win32",
      webgl_vendor="Intel Inc.",
      renderer="Intel Iris OpenGL Engine",
      fix_hairline=True,
  )

driver.get("https://bot.sannysoft.com/")
  • Browser screenshot: the bot.sannysoft.com detection test (image in the original post)

    You can find a detailed relevant discussion in Can a website detect when you are using Selenium with chromedriver?


    Changing WebGL Vendor/Renderer in AWS Lambda

    AWS Lambda enables us to deliver compressed WebGL websites to end users. When requested webpage objects are compressed, the transfer size is reduced, leading to faster downloads, lower cloud storage fees, and lower data transfer fees. Improved load times also directly influence viewer experience and retention, which helps improve website conversion and discoverability. Using WebGL, websites are more immersive while still being accessible via a browser URL. Through this technique, AWS Lambda can automatically compress objects uploaded to S3.


    Background on compression and WebGL

    HTTP compression is a capability that can be built into web servers and web clients to improve transfer speed and bandwidth utilization. It is negotiated between the server and the client using an HTTP header, which may indicate that a resource being transferred, cached, or otherwise referenced is compressed. On the server side, AWS Lambda supports the Content-Encoding header.

    On the client-side, most browsers today support brotli and gzip compression through HTTP headers (Accept-Encoding: deflate, br, gzip) and can handle server response headers. This means browsers will automatically download and decompress content from a web server at the client-side, before rendering webpages to the viewer.
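The compression half of that negotiation can be illustrated with a minimal gzip roundtrip in Python's standard library (a sketch; a real server chooses the codec from the client's Accept-Encoding header):

```python
import gzip

# A server compresses the response body before sending it with
# "Content-Encoding: gzip"; the browser decompresses it transparently.
payload = b"<html>" + b"scraped content " * 1000 + b"</html>"
compressed = gzip.compress(payload)
restored = gzip.decompress(compressed)

assert restored == payload             # lossless roundtrip
assert len(compressed) < len(payload)  # repetitive HTML compresses well
```

The same applies to brotli and deflate; only the codec negotiated via the headers changes, not the overall flow.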


    Conclusion

    Due to this constraint, you may not be able to change the WebGL Vendor/Renderer in AWS Lambda; doing so could directly affect how webpages are rendered to viewers and become a bottleneck in UX.


    tl;dr

    You can find a couple of relevant detailed discussions in:

    Source https://stackoverflow.com/questions/70265306

    QUESTION

    Enable use of images from the local library on Kubernetes

    Asked 2022-Mar-20 at 13:23

    I'm following the tutorial https://docs.openfaas.com/tutorials/first-python-function/.

    Currently, I have the right image:

    $ docker images | grep hello-openfaas
    wm/hello-openfaas                                     latest                          bd08d01ce09b   34 minutes ago      65.2MB
    $ faas-cli deploy -f ./hello-openfaas.yml 
    Deploying: hello-openfaas.
    WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.

    Deployed. 202 Accepted.
    URL: http://IP:8099/function/hello-openfaas

    There is a step that forewarns me to do some setup (in my case: I'm using Kubernetes with minikube and don't want to push to a remote container registry, so I should enable the use of images from the local library on Kubernetes). I see this hint:

    $ faas-cli deploy -f ./hello-openfaas.yml 
    Deploying: hello-openfaas.
    WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.

    Deployed. 202 Accepted.
    URL: http://IP:8099/function/hello-openfaas
    see the helm chart for how to set the ImagePullPolicy

    I'm not sure how to configure it correctly; the final result indicates I failed.

    Unsurprisingly, I couldn't access the function service. I found some clues in https://docs.openfaas.com/deployment/troubleshooting/#openfaas-didnt-start which might help diagnose the problem:

    $ kubectl logs -n openfaas-fn deploy/hello-openfaas
    Error from server (BadRequest): container "hello-openfaas" in pod "hello-openfaas-558f99477f-wd697" is waiting to start: trying and failing to pull image

    $ kubectl describe -n openfaas-fn deploy/hello-openfaas
    Name:                   hello-openfaas
    Namespace:              openfaas-fn
    CreationTimestamp:      Wed, 16 Mar 2022 14:59:49 +0800
    Labels:                 faas_function=hello-openfaas
    Annotations:            deployment.kubernetes.io/revision: 1
                            prometheus.io.scrape: false
    Selector:               faas_function=hello-openfaas
    Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
    StrategyType:           RollingUpdate
    MinReadySeconds:        0
    RollingUpdateStrategy:  0 max unavailable, 1 max surge
    Pod Template:
      Labels:       faas_function=hello-openfaas
      Annotations:  prometheus.io.scrape: false
      Containers:
       hello-openfaas:
        Image:      wm/hello-openfaas:latest
        Port:       8080/TCP
        Host Port:  0/TCP
        Liveness:   http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
        Readiness:  http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
        Environment:
          fprocess:  python3 index.py
        Mounts:      <none>
      Volumes:       <none>
    Conditions:
      Type           Status  Reason
      ----           ------  ------
      Available      False   MinimumReplicasUnavailable
      Progressing    False   ProgressDeadlineExceeded
    OldReplicaSets:  <none>
    NewReplicaSet:   hello-openfaas-558f99477f (1/1 replicas created)
    Events:
      Type    Reason             Age   From                   Message
      ----    ------             ----  ----                   -------
      Normal  ScalingReplicaSet  29m   deployment-controller  Scaled up replica set hello-openfaas-558f99477f to 1

    hello-openfaas.yml

    version: 1.0
    provider:
      name: openfaas
      gateway: http://IP:8099
    functions:
      hello-openfaas:
        lang: python3
        handler: ./hello-openfaas
        image: wm/hello-openfaas:latest
        imagePullPolicy: Never

    I created a new project, hello-openfaas2, to reproduce this error:

    $ faas-cli new --lang python3 hello-openfaas2 --prefix="wm"
    Folder: hello-openfaas2 created.
    # I add `imagePullPolicy: Never` to `hello-openfaas2.yml`
    $ faas-cli build -f ./hello-openfaas2.yml 
    $ faas-cli deploy -f ./hello-openfaas2.yml 
    Deploying: hello-openfaas2.
    WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.

    Deployed. 202 Accepted.
    URL: http://192.168.1.3:8099/function/hello-openfaas2


    $ kubectl logs -n openfaas-fn deploy/hello-openfaas2
    Error from server (BadRequest): container "hello-openfaas2" in pod "hello-openfaas2-7c67488865-7d7vm" is waiting to start: image can't be pulled

    $ kubectl get pods --all-namespaces
    NAMESPACE              NAME                                        READY   STATUS             RESTARTS         AGE
    kube-system            coredns-64897985d-kp7vf                     1/1     Running            0                47h
    ...
    openfaas-fn            env-6c79f7b946-bzbtm                        1/1     Running            0                4h28m
    openfaas-fn            figlet-54db496f88-957xl                     1/1     Running            0                18h
    openfaas-fn            hello-openfaas-547857b9d6-z277c             0/1     ImagePullBackOff   0                127m
    openfaas-fn            hello-openfaas-7b6946b4f9-hcvq4             0/1     ImagePullBackOff   0                165m
    openfaas-fn            hello-openfaas2-7c67488865-qmrkl            0/1     ImagePullBackOff   0                13m
    openfaas-fn            hello-openfaas3-65847b8b67-b94kd            0/1     ImagePullBackOff   0                97m
    openfaas-fn            hello-python-554b464498-zxcdv               0/1     ErrImagePull       0                3h23m
    openfaas-fn            hello-python-8698bc68bd-62gh9               0/1     ImagePullBackOff   0                3h25m

    From https://docs.openfaas.com/reference/yaml/, I learned that I put imagePullPolicy in the wrong place; there is no such keyword in its schema.
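For reference, the pull policy that the deploy hint refers to is set on the faas-netes controller via the OpenFaaS helm chart, not in the function's stack YAML. A sketch, assuming the chart exposes a `functions.imagePullPolicy` value (verify the exact key in the values.yaml of your installed chart version):

```
helm upgrade openfaas openfaas/openfaas \
  --namespace openfaas \
  --reuse-values \
  --set functions.imagePullPolicy=Never
```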

    I also tried eval $(minikube docker-env) and still got the same error.
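The two common ways to make a locally built image visible to minikube's container runtime, sketched below (the image name is taken from the question):

```
# Build inside minikube's Docker daemon, so the node never needs to pull:
eval $(minikube docker-env)
docker build -t wm/hello-openfaas:latest .

# Or load an image already built on the host into the minikube node:
minikube image load wm/hello-openfaas:latest
```

Either way, the resulting pod spec must still use imagePullPolicy: Never (or IfNotPresent) so Kubernetes does not try to pull the tag from a registry.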


    I have a feeling that faas-cli deploy can be replaced by helm; both ultimately run the image (whether remote or local) in the Kubernetes cluster, so I could use the helm chart to set the pull policy there. Even though the details are still not clear to me, this discovery inspires me.


    So far, after eval $(minikube docker-env):

    $ docker images
    REPOSITORY                                TAG        IMAGE ID       CREATED             SIZE
    wm/hello-openfaas2                        0.1        03c21bd96d5e   About an hour ago   65.2MB
    python                                    3-alpine   69fba17b9bae   12 days ago         48.6MB
    ghcr.io/openfaas/figlet                   latest     ca5eef0de441   2 weeks ago         14.8MB
    ghcr.io/openfaas/alpine                   latest     35f3d4be6bb8   2 weeks ago         14.2MB
    ghcr.io/openfaas/faas-netes               0.14.2     524b510505ec   3 weeks ago         77.3MB
    k8s.gcr.io/kube-apiserver                 v1.23.3    f40be0088a83   7 weeks ago         135MB
    k8s.gcr.io/kube-controller-manager        v1.23.3    b07520cd7ab7   7 weeks ago         125MB
    k8s.gcr.io/kube-scheduler                 v1.23.3    99a3486be4f2   7 weeks ago         53.5MB
    k8s.gcr.io/kube-proxy                     v1.23.3    9b7cc9982109   7 weeks ago         112MB
    ghcr.io/openfaas/gateway                  0.21.3     ab4851262cd1   7 weeks ago         30.6MB
    ghcr.io/openfaas/basic-auth               0.21.3     16e7168a17a3   7 weeks ago         14.3MB
    k8s.gcr.io/etcd                           3.5.1-0    25f8c7f3da61   4 months ago        293MB
    ghcr.io/openfaas/classic-watchdog         0.2.0      6f97aa96da81   4 months ago        8.18MB
    k8s.gcr.io/coredns/coredns                v1.8.6     a4ca41631cc7   5 months ago        46.8MB
    k8s.gcr.io/pause                          3.6        6270bb605e12   6 months ago        683kB
    ghcr.io/openfaas/queue-worker             0.12.2     56e7216201bc   7 months ago        7.97MB
    kubernetesui/dashboard                    v2.3.1     e1482a24335a   9 months ago        220MB
    kubernetesui/metrics-scraper              v1.0.7     7801cfc6d5c0   9 months ago        34.4MB
    nats-streaming                            0.22.0     12f2d32e0c9a   9 months ago        19.8MB
    gcr.io/k8s-minikube/storage-provisioner   v5         6e38f40d628d   11 months ago       31.5MB
    functions/markdown-render                 latest     93b5da182216   2 years ago         24.6MB
    functions/hubstats                        latest     01affa91e9e4   2 years ago         29.3MB
    functions/nodeinfo                        latest     2fe8a87bf79c   2 years ago         71.4MB
    functions/alpine                          latest     46c6f6d74471   2 years ago         21.5MB
    prom/prometheus                           v2.11.0    b97ed892eb23   2 years ago         126MB
    prom/alertmanager                         v0.18.0    ce3c87f17369   2 years ago         51.9MB
    alexellis2/openfaas-colorization          0.4.1      d36b67b1b5c1   2 years ago         1.84GB
    rorpage/text-to-speech                    latest     5dc20810eb54   2 years ago         86.9MB
    stefanprodan/faas-grafana                 4.6.3      2a4bd9caea50   4 years ago         284MB

    $ kubectl get pods --all-namespaces
    NAMESPACE              NAME                                        READY   STATUS             RESTARTS        AGE
    kube-system            coredns-64897985d-kp7vf                     1/1     Running            0               6d
    kube-system            etcd-minikube                               1/1     Running            0               6d
    kube-system            kube-apiserver-minikube                     1/1     Running            0               6d
    kube-system            kube-controller-manager-minikube            1/1     Running            0               6d
    kube-system            kube-proxy-5m8lr                            1/1     Running            0               6d
    kube-system            kube-scheduler-minikube                     1/1     Running            0               6d
    kube-system            storage-provisioner                         1/1     Running            1 (6d ago)      6d
    kubernetes-dashboard   dashboard-metrics-scraper-58549894f-97tsv   1/1     Running            0               5d7h
    kubernetes-dashboard   kubernetes-dashboard-ccd587f44-lkwcx        1/1     Running            0               5d7h
    openfaas-fn            base64-6bdbcdb64c-djz8f                     1/1     Running            0               5d1h
    openfaas-fn            colorise-85c74c686b-2fz66                   1/1     Running            0               4d5h
    openfaas-fn            echoit-5d7df6684c-k6ljn                     1/1     Running            0               5d1h
    openfaas-fn            env-6c79f7b946-bzbtm                        1/1     Running            0               4d5h
    openfaas-fn            figlet-54db496f88-957xl                     1/1     Running            0               4d19h
    openfaas-fn            hello-openfaas-547857b9d6-z277c             0/1     ImagePullBackOff   0               4d3h
    openfaas-fn            hello-openfaas-7b6946b4f9-hcvq4             0/1     ImagePullBackOff   0               4d3h
    openfaas-fn            hello-openfaas2-5c6f6cb5d9-24hkz            0/1     ImagePullBackOff   0               9m22s
    openfaas-fn            hello-openfaas2-8957bb47b-7cgjg             0/1     ImagePullBackOff   0               2d22h
    openfaas-fn            hello-openfaas3-65847b8b67-b94kd            0/1     ImagePullBackOff   0               4d2h
    openfaas-fn            hello-python-6d6976845f-cwsln               0/1     ImagePullBackOff   0               3d19h
    141openfaas-fn            hello-python-b577cb8dc-64wf5                0/1     ImagePullBackOff   0               3d9h
    142openfaas-fn            hubstats-b6cd4dccc-z8tvl                    1/1     Running            0               5d1h
    143openfaas-fn            markdown-68f69f47c8-w5m47                   1/1     Running            0               5d1h
    144openfaas-fn            nodeinfo-d48cbbfcc-hfj79                    1/1     Running            0               5d1h
    145openfaas-fn            openfaas2-fun                               1/1     Running            0               15s
    146openfaas-fn            text-to-speech-74ffcdfd7-997t4              0/1     CrashLoopBackOff   2235 (3s ago)   4d5h
    147openfaas-fn            wordcount-6489865566-cvfzr                  1/1     Running            0               5d1h
    148openfaas               alertmanager-88449c789-fq2rg                1/1     Running            0               3d1h
    149openfaas               basic-auth-plugin-75fd7d69c5-zw4jh          1/1     Running            0               3d2h
    150openfaas               gateway-5c4bb7c5d7-n8h27                    2/2     Running            0               3d2h
    151openfaas               grafana                                     1/1     Running            0               4d8h
    152openfaas               nats-647b476664-hkr7p                       1/1     Running            0               3d2h
    153openfaas               prometheus-687648749f-tl8jp                 1/1     Running            0               3d1h
    154openfaas               queue-worker-7777ffd7f6-htx6t               1/1     Running            0               3d2h
    155
    156
    157$ kubectl get -o yaml -n openfaas-fn deploy/hello-openfaas2
    158apiVersion: apps/v1
    159kind: Deployment
    160metadata:
    161  annotations:
    162    deployment.kubernetes.io/revision: &quot;6&quot;
    163    prometheus.io.scrape: &quot;false&quot;
    164  creationTimestamp: &quot;2022-03-17T12:47:35Z&quot;
    165  generation: 6
    166  labels:
    167    faas_function: hello-openfaas2
    168  name: hello-openfaas2
    169  namespace: openfaas-fn
    170  resourceVersion: &quot;400833&quot;
    171  uid: 9c4e9d26-23af-4f93-8538-4e2d96f0d7e0
    172spec:
    173  progressDeadlineSeconds: 600
    174  replicas: 1
    175  revisionHistoryLimit: 10
    176  selector:
    177    matchLabels:
    178      faas_function: hello-openfaas2
    179  strategy:
    180    rollingUpdate:
    181      maxSurge: 1
    182      maxUnavailable: 0
    183    type: RollingUpdate
    184  template:
    185    metadata:
    186      annotations:
    187        prometheus.io.scrape: &quot;false&quot;
    188      creationTimestamp: null
    189      labels:
    190        faas_function: hello-openfaas2
    191        uid: &quot;969512830&quot;
    192      name: hello-openfaas2
    193    spec:
    194      containers:
    195      - env:
    196        - name: fprocess
    197          value: python3 index.py
    198        image: wm/hello-openfaas2:0.1
    199        imagePullPolicy: Always
    200        livenessProbe:
    201          failureThreshold: 3
    202          httpGet:
    203            path: /_/health
    204            port: 8080
    205            scheme: HTTP
    206          initialDelaySeconds: 2
    207          periodSeconds: 2
    208          successThreshold: 1
    209          timeoutSeconds: 1
    210        name: hello-openfaas2
    211        ports:
    212        - containerPort: 8080
    213          name: http
    214          protocol: TCP
    215        readinessProbe:
    216          failureThreshold: 3
    217          httpGet:
    218            path: /_/health
    219            port: 8080
    220            scheme: HTTP
    221          initialDelaySeconds: 2
    222          periodSeconds: 2
    223          successThreshold: 1
    224          timeoutSeconds: 1
    225        resources: {}
    226        securityContext:
    227          allowPrivilegeEscalation: false
    228          readOnlyRootFilesystem: false
    229        terminationMessagePath: /dev/termination-log
    230        terminationMessagePolicy: File
    231      dnsPolicy: ClusterFirst
    232      enableServiceLinks: false
    233      restartPolicy: Always
    234      schedulerName: default-scheduler
    235      securityContext: {}
    236      terminationGracePeriodSeconds: 30
    237status:
    238  conditions:
    239  - lastTransitionTime: &quot;2022-03-17T12:47:35Z&quot;
    240    lastUpdateTime: &quot;2022-03-17T12:47:35Z&quot;
    241    message: Deployment does not have minimum availability.
    242    reason: MinimumReplicasUnavailable
    243    status: &quot;False&quot;
    244    type: Available
    245  - lastTransitionTime: &quot;2022-03-20T12:16:56Z&quot;
    246    lastUpdateTime: &quot;2022-03-20T12:16:56Z&quot;
    247    message: ReplicaSet &quot;hello-openfaas2-5d6c7c7fb4&quot; has timed out progressing.
    248    reason: ProgressDeadlineExceeded
    249    status: &quot;False&quot;
    250    type: Progressing
    251  observedGeneration: 6
    252  replicas: 2
    253  unavailableReplicas: 2
    254  updatedReplicas: 1
    255
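The deployment spec above shows why the `hello-openfaas*` pods sit in `ImagePullBackOff`: the container image is `wm/hello-openfaas2:0.1` with `imagePullPolicy: Always`, so the kubelet always tries to pull the image from a registry (Docker Hub, for the unqualified `wm/` prefix) instead of using a copy that exists only in a local Docker daemon. A minimal sketch of two common minikube workarounds, assuming the docker driver and the image tag from the transcript:

```shell
# Option 1: build the function image inside minikube's Docker daemon,
# so it already exists on the node before any pull is attempted.
eval $(minikube docker-env)
faas-cli build -f ./hello-openfaas2.yml

# Option 2: build on the host as usual, then copy the image into the node.
minikube image load wm/hello-openfaas2:0.1

# Check which pull policy was actually deployed:
kubectl get deploy hello-openfaas2 -n openfaas-fn \
  -o jsonpath='{.spec.template.spec.containers[0].imagePullPolicy}'
```

Note that either workaround still fails while the policy remains `Always`: faas-netes sets the pull policy for all functions (the openfaas helm chart exposes a `functions.imagePullPolicy` value for this), so adding `imagePullPolicy: Never` to the function's stack YAML is not what controls it.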

    In one shell,

    docker@minikube:~$ docker run  --name wm -ti wm/hello-openfaas2:0.1
    2022/03/20 13:04:52 Version: 0.2.0  SHA: 56bf6aac54deb3863a690f5fc03a2a38e7d9e6ef
    2022/03/20 13:04:52 Timeouts: read: 5s write: 5s hard: 0s health: 5s.
    2022/03/20 13:04:52 Listening on port: 8080
    ...
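The successful `docker run` above (note the `docker@minikube` prompt: it runs against the Docker daemon inside the minikube node) confirms that the image is present on the node and that the watchdog starts and listens on port 8080. The remaining failure is therefore the pull policy alone: with `imagePullPolicy: Always` the kubelet insists on pulling even though the image is already local. As a one-off experiment (faas-netes will likely reapply `Always` on the next `faas-cli deploy`), the running deployment can be patched directly; a sketch using standard `kubectl patch`:

```shell
# Switch the pod template's pull policy to IfNotPresent so the kubelet
# uses the image already present in the node's Docker daemon.
kubectl patch deployment hello-openfaas2 -n openfaas-fn --type=json \
  -p='[{"op": "replace", "path": "/spec/template/spec/containers/0/imagePullPolicy", "value": "IfNotPresent"}]'

# Watch the rollout replace the ImagePullBackOff pods.
kubectl rollout status -n openfaas-fn deploy/hello-openfaas2
```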

    and in another shell,

    141openfaas-fn            hello-python-b577cb8dc-64wf5                0/1     ImagePullBackOff   0               3d9h
    142openfaas-fn            hubstats-b6cd4dccc-z8tvl                    1/1     Running            0               5d1h
    143openfaas-fn            markdown-68f69f47c8-w5m47                   1/1     Running            0               5d1h
    144openfaas-fn            nodeinfo-d48cbbfcc-hfj79                    1/1     Running            0               5d1h
    145openfaas-fn            openfaas2-fun                               1/1     Running            0               15s
    146openfaas-fn            text-to-speech-74ffcdfd7-997t4              0/1     CrashLoopBackOff   2235 (3s ago)   4d5h
    147openfaas-fn            wordcount-6489865566-cvfzr                  1/1     Running            0               5d1h
    148openfaas               alertmanager-88449c789-fq2rg                1/1     Running            0               3d1h
    149openfaas               basic-auth-plugin-75fd7d69c5-zw4jh          1/1     Running            0               3d2h
    150openfaas               gateway-5c4bb7c5d7-n8h27                    2/2     Running            0               3d2h
    151openfaas               grafana                                     1/1     Running            0               4d8h
    152openfaas               nats-647b476664-hkr7p                       1/1     Running            0               3d2h
    153openfaas               prometheus-687648749f-tl8jp                 1/1     Running            0               3d1h
    154openfaas               queue-worker-7777ffd7f6-htx6t               1/1     Running            0               3d2h
    155
    156
    157$ kubectl get -o yaml -n openfaas-fn deploy/hello-openfaas2
    158apiVersion: apps/v1
    159kind: Deployment
    160metadata:
    161  annotations:
    162    deployment.kubernetes.io/revision: "6"
    163    prometheus.io.scrape: "false"
    164  creationTimestamp: "2022-03-17T12:47:35Z"
    165  generation: 6
    166  labels:
    167    faas_function: hello-openfaas2
    168  name: hello-openfaas2
    169  namespace: openfaas-fn
    170  resourceVersion: "400833"
    171  uid: 9c4e9d26-23af-4f93-8538-4e2d96f0d7e0
    172spec:
    173  progressDeadlineSeconds: 600
    174  replicas: 1
    175  revisionHistoryLimit: 10
    176  selector:
    177    matchLabels:
    178      faas_function: hello-openfaas2
    179  strategy:
    180    rollingUpdate:
    181      maxSurge: 1
    182      maxUnavailable: 0
    183    type: RollingUpdate
    184  template:
    185    metadata:
    186      annotations:
    187        prometheus.io.scrape: "false"
    188      creationTimestamp: null
    189      labels:
    190        faas_function: hello-openfaas2
    191        uid: "969512830"
    192      name: hello-openfaas2
    193    spec:
    194      containers:
    195      - env:
    196        - name: fprocess
    197          value: python3 index.py
    198        image: wm/hello-openfaas2:0.1
    199        imagePullPolicy: Always
    200        livenessProbe:
    201          failureThreshold: 3
    202          httpGet:
    203            path: /_/health
    204            port: 8080
    205            scheme: HTTP
    206          initialDelaySeconds: 2
    207          periodSeconds: 2
    208          successThreshold: 1
    209          timeoutSeconds: 1
    210        name: hello-openfaas2
    211        ports:
    212        - containerPort: 8080
    213          name: http
    214          protocol: TCP
    215        readinessProbe:
    216          failureThreshold: 3
    217          httpGet:
    218            path: /_/health
    219            port: 8080
    220            scheme: HTTP
    221          initialDelaySeconds: 2
    222          periodSeconds: 2
    223          successThreshold: 1
    224          timeoutSeconds: 1
    225        resources: {}
    226        securityContext:
    227          allowPrivilegeEscalation: false
    228          readOnlyRootFilesystem: false
    229        terminationMessagePath: /dev/termination-log
    230        terminationMessagePolicy: File
    231      dnsPolicy: ClusterFirst
    232      enableServiceLinks: false
    233      restartPolicy: Always
    234      schedulerName: default-scheduler
    235      securityContext: {}
    236      terminationGracePeriodSeconds: 30
    237status:
    238  conditions:
    239  - lastTransitionTime: "2022-03-17T12:47:35Z"
    240    lastUpdateTime: "2022-03-17T12:47:35Z"
    241    message: Deployment does not have minimum availability.
    242    reason: MinimumReplicasUnavailable
    243    status: "False"
    244    type: Available
    245  - lastTransitionTime: "2022-03-20T12:16:56Z"
    246    lastUpdateTime: "2022-03-20T12:16:56Z"
    247    message: ReplicaSet "hello-openfaas2-5d6c7c7fb4" has timed out progressing.
    248    reason: ProgressDeadlineExceeded
    249    status: "False"
    250    type: Progressing
    251  observedGeneration: 6
    252  replicas: 2
    253  unavailableReplicas: 2
    254  updatedReplicas: 1
    255docker@minikube:~$ docker run  --name wm -ti wm/hello-openfaas2:0.1
    2562022/03/20 13:04:52 Version: 0.2.0  SHA: 56bf6aac54deb3863a690f5fc03a2a38e7d9e6ef
    2572022/03/20 13:04:52 Timeouts: read: 5s write: 5s hard: 0s health: 5s.
    2582022/03/20 13:04:52 Listening on port: 8080
    259...
    260
    261docker@minikube:~$ docker ps | grep wm
    262d7796286641c   wm/hello-openfaas2:0.1             "fwatchdog"              3 minutes ago       Up 3 minutes (healthy)   8080/tcp   wm
    263

    ANSWER

    Answered 2022-Mar-16 at 08:10

    If your image is tagged latest, the Pod's imagePullPolicy is automatically set to Always: every time the Pod is created, Kubernetes tries to pull the newest image from a registry.

    Try tagging the image with something other than latest, or manually set the Pod's imagePullPolicy to Never. If you're using a static manifest to create the Pod, the setting looks like the following:

    containers:
      - name: test-container
        image: testImage:latest
        imagePullPolicy: Never

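    For reference, that containers fragment belongs under a Pod's spec. A minimal complete manifest would look like the following sketch (the Pod and container names are illustrative, as in the snippet above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: testImage:latest
      # Never: only use an image already present on the node;
      # the kubelet will not contact a registry at all.
      imagePullPolicy: Never
```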
    Source https://stackoverflow.com/questions/71493306

    QUESTION

    How do I loop through divs using jsoup

    Asked 2022-Feb-15 at 17:19

    Hi guys, I'm using jsoup in a Java web application in IntelliJ. I'm trying to scrape data about port call events from a ship-tracking website and store it in a MySQL database.

    The event data is organised in divs with the class name table-group, and the values are in another div with the class name table-row.
    My problem is that the rows for all the vessels share the same class name, and I'm trying to loop through each row and push the data to a database. So far I have managed to create a Java class that scrapes the first row.
    How can I loop through each row and store those values in my database? Should I create an array list to hold the values?



    This is my scraper class:

    import org.jsoup.Jsoup;
    import org.jsoup.nodes.Document;
    import org.jsoup.nodes.Element;
    import org.jsoup.select.Elements;

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    public class Scarper {

        private static Document doc;

        public static void main(String[] args) {

            final String url =
                    "https://www.myshiptracking.com/ports-arrivals-departures/?mmsi=&pid=277&type=0&time=&pp=20";

            try {
                doc = Jsoup.connect(url).get();
            } catch (IOException e) {
                e.printStackTrace();
            }
            Events();
        }

        public static void Events() {
            // Only matches the rows inside the second table-group,
            // which is why only one row is scraped
            Elements elm = doc.select("div.table-group:nth-of-type(2) > .table-row");

            List<String> arrayList = new ArrayList<>();

            for (Element ele : elm) {

                String event = ele.select("div.col:nth-of-type(2)").text();
                String time = ele.select("div.col:nth-of-type(3)").text();
                String port = ele.select("div.col:nth-of-type(4)").text();
                String vessel = ele.select(".td_vesseltype.col").text();
                Event ev = new Event();
                System.out.println(event);
                System.out.println(time);
                System.out.println(port);
                System.out.println(vessel);
            }
        }
    }

    A sample of the divs I want to scrape:

    <div style="box-sizing: border-box;padding: 0px 10px 10px 10px;">
        <div class="cs-table">
            <div class="heading">
                <div class="col" style="width: 10px"></div>
                <div class="col" style="width: 110px">Event</div>
                <div class="col" style="width: 120px">Time (<span class="tooltip" title="My Time: In your current TimeZone">MT</span>)</div>
                <div class="col" style="width: 150px">Port</div>
                <div class="col">Vessel</div>
            </div>
            <div class="table-group">
                <div class="table-row">
                    <div class="col"><i class="fa fa-sign-out red"></i></div>
                    <div class="col">Departure</div>
                    <div class="col" style="text-align: center;">2022-02-14 <b>16:51</b></div>
                    <div class="col"><img class="flag_line tooltip" src="/icons/flags2/16/GB.png" title=" United Kingdom"/><a href="/ports/port-of-belfast-in-gb-united-kingdom-id-101">BELFAST</a></div>
                    <div class="col td_vesseltype"><img src="/icons/icon7_511.png"><span class="padding_18"><a href="/vessels/wilson-blyth-mmsi-314544000-imo-9124419">WILSON BLYTH</a> [GB]</span></div>
                </div>
            </div>
            <div class="table-group">
                <div class="table-row">
                    <div class="col"><i class="fa fa-flag-checkered green"></i></div>
                    <div class="col">Arrival</div>
                    <div class="col" style="text-align: center;">2022-02-14 <b>16:51</b></div>
                    <div class="col"><img class="flag_line tooltip" src="/icons/flags2/16/GB.png" title=" United Kingdom"/><a href="/ports/port-of-hunters-quay-in-gb-united-kingdom-id-218">HUNTERS QUAY</a></div>
                    <div class="col td_vesseltype"><img src="/icons/icon6_511.png"><span class="padding_18"><a href="/vessels/sound-of-soay-mmsi-235101063-imo-9665229">SOUND OF SOAY</a> [GB]</span></div>
                </div>
            </div>
            <div class="table-group">
                <div class="table-row">
                    <div class="col"><i class="fa fa-sign-out red"></i></div>
                    <div class="col">Departure</div>
                    <div class="col" style="text-align: center;">2022-02-14 <b>16:51</b></div>
                    <div class="col"><img class="flag_line tooltip" src="/icons/flags2/16/GB.png" title=" United Kingdom"/><a href="/ports/port-of-largs-in-gb-united-kingdom-id-1602">LARGS</a></div>
                    <div class="col td_vesseltype"><img src="/icons/icon6_511.png"><span class="padding_18"><a href="/vessels/loch-shira-mmsi-235053239-imo-9376919">LOCH SHIRA</a> [GB]</span></div>
                </div>
            </div>
            <div class="table-group">
                <div class="table-row">
                    <div class="col"><i class="fa fa-sign-out red"></i></div>
                    <div class="col">Departure</div>
                    <div class="col" style="text-align: center;">2022-02-14 <b>16:51</b></div>
                    <div class="col"><img class="flag_line tooltip" src="/icons/flags2/16/GB.png" title=" United Kingdom"/><a href="/ports/port-of-ryde-in-gb-united-kingdom-id-1629">RYDE</a></div>
                    <div class="col td_vesseltype"><img src="/icons/icon4_511.png"><span class="padding_18"><a href="/vessels/island-flyer-mmsi-235117772-imo-9737797">ISLAND FLYER</a> [GB]</span></div>
                </div>
            </div>

    ANSWER

    Answered 2022-Feb-15 at 17:19

    You can start by looping over the table's rows. The selector for the table is .cs-table, so you can get the table with Element table = doc.select(".cs-table").first();. Next, you can get the table's rows with the selector div.table-row: Elements rows = doc.select("div.table-row");. Now you can loop over all the rows and extract the data from each one. The code should look like:

    Element table = doc.select(".cs-table").first();
    Elements rows = doc.select("div.table-row");
    for (Element row : rows) {
        String event = row.select("div.col:nth-of-type(2)").text();
        String time = row.select("div.col:nth-of-type(3)").text();
        String port = row.select("div.col:nth-of-type(4)").text();
        String vessel = row.select(".td_vesseltype.col").text();
        System.out.println(event + " - " + time + " " + port + " " + vessel);
        System.out.println("---------------------------");
        // Do stuff with the data here
    }

    Now it's up to you to decide whether to collect the data in an array or list inside the loop for later use, or to insert it directly into your database.
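
    For readers working in Python rather than Java, the same pattern (find the table, iterate its rows, read each column's text) can be sketched with only the standard library. This is an illustrative sketch, not part of the original answer; the HTML below is a trimmed, hypothetical version of the page structure shown above.

    ```python
    from html.parser import HTMLParser

    # Trimmed, hypothetical version of the myshiptracking table markup.
    SAMPLE = """
    <div class="cs-table">
      <div class="table-group">
        <div class="table-row">
          <div class="col"></div>
          <div class="col">Departure</div>
          <div class="col">2022-02-14 16:51</div>
          <div class="col">BELFAST</div>
          <div class="col td_vesseltype">WILSON BLYTH [GB]</div>
        </div>
      </div>
    </div>
    """

    class RowExtractor(HTMLParser):
        """Collects the text of every div.col, grouped by div.table-row."""

        def __init__(self):
            super().__init__()
            self.rows = []      # one list of cell texts per table-row
            self._in_col = 0    # nesting depth inside a div.col
            self._cell = []

        def handle_starttag(self, tag, attrs):
            classes = (dict(attrs).get("class") or "").split()
            if tag == "div" and "table-row" in classes:
                self.rows.append([])
            if tag == "div" and "col" in classes:
                self._in_col += 1
                self._cell = []

        def handle_endtag(self, tag):
            if tag == "div" and self._in_col:
                self._in_col -= 1
                if self._in_col == 0 and self.rows:
                    self.rows[-1].append("".join(self._cell).strip())

        def handle_data(self, data):
            if self._in_col:
                self._cell.append(data)

    parser = RowExtractor()
    parser.feed(SAMPLE)
    for row in parser.rows:
        event, time, port, vessel = row[1], row[2], row[3], row[4]
        print(event, time, port, vessel)
    ```

    In a real scraper you would feed the parser the fetched page body instead of SAMPLE; the column positions mirror the nth-of-type selectors used in the Java answer.
    
    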

    Source https://stackoverflow.com/questions/71116068

    QUESTION

    chrome extension: Uncaught TypeError: Cannot read properties of undefined (reading 'onClicked')

    Asked 2022-Jan-25 at 09:51

    I have been creating a chrome extension that should run a certain script(index.js) on a particular tab on extension click.

    service_worker.js

    // action on extension click
    chrome.browserAction.onClicked.addListener(function (tab) {
        chrome.tabs.executeScript({
            tabId: tab.id,
        }, { file: "index.js" });
    });

    I have also tried

    chrome.action......

    and

    browser....

    But nothing works, I am using manifest v3.

    manifest.json

    {
        "name": "Meet scraper",
        "version": "0.1",
        "author": "Naveenkumar M",
        "description": "Scrapes meet data from meetup.com",
        "manifest_version": 3,
        "permissions": [
            "activeTab",
            "tabs"
        ],
        "background": {
            "service_worker": "service_worker.js"
        }
    }

    And my index.js file is

    console.log("Hello world")

    Instead, I get the error from the title: Uncaught TypeError: Cannot read properties of undefined (reading 'onClicked').

    correct me if I am wrong

    ANSWER

    Answered 2022-Jan-25 at 05:00

    Manifest V2

    To use the chrome.browserAction API, the browser_action key must be declared in the manifest.

    See this page for more details:

    https://developer.chrome.com/docs/extensions/reference/browserAction/

    Update:

    Manifest V3

    In Manifest V3, browserAction was replaced by action, so you need to add an "action" key to your manifest file:

    {
      "action": { … }
    }

    and then you can call it like this

    chrome.action.onClicked.addListener(tab => { … });

    Source https://stackoverflow.com/questions/70843290

    QUESTION

    How to merge data from object A into object B in Python?

    Asked 2022-Jan-17 at 10:09

    I'm trying to figure out if there's a procedural way to merge data from object A to object B without manually setting it up.

    For example, I have the following pydantic model which represents results of an API call to The Movie Database:

    class PersonScraperReply(BaseModel):
        """Represents a Person Scraper Reply"""

        scraper_name: str
        """Name of the scraper used to scrape this data"""

        local_person_id: int
        """Id of person in local database"""

        local_person_name: str
        """name of person in local database"""

        aliases: Optional[list[str]] = None
        """list of strings that represent the person's aliases obtained from scraper"""

        description: Optional[str] = None
        """String description of the person obtained from scraper"""

        date_of_birth: Optional[date] = None
        """Date of birth of the person obtained from scraper"""

        date_of_death: Optional[date] = None
        """Date the person passed away obtained from scraper"""

        gender: Optional[GenderEnum] = None
        """Gender of the person obtained from scraper"""

        homepage: Optional[str] = None
        """Person's official homepage obtained from scraper"""

        place_of_birth: Optional[str] = None
        """Location where the person was born obtained from scraper"""

        profile_image_url: Optional[str] = None
        """Url for person's profile image obtained from scraper"""

        additional_images: Optional[list[str]] = None
        """List of urls for additional images for the person obtained from scraper"""

        scrape_status: ScrapeStatus
        """status of scraping. Success or failure"""

    I also have this SQLAlchemy class that represents a person in my database:

    class PersonInDatabase(Base):

        id: int
        """Person Id"""

        name: str
        """Person Name"""

        description: str = Column(String)
        """Description of the person"""

        gender: GenderEnum = Column(Enum(GenderEnum), nullable=False, default=GenderEnum.unspecified)
        """Person's gender, 0=unspecified, 1=male, 2=female, 3=non-binary"""

        tmdb_id: int = Column(Integer)
        """Tmdb id"""

        imdb_id: str = Column(String)
        """IMDB id, in the format of nn[alphanumeric id]"""

        place_of_birth: str = Column(String)
        """Place of person's birth"""

        # dates
        date_of_birth: DateTime = Column(DateTime)
        """Date the person was born"""

        date_of_death: DateTime = Column(DateTime)
        """Date the person passed away"""

        date_last_person_scrape: DateTime = Column(DateTime)
        """Date last time the person was scraped"""

    My goal is to merge the data I received from the API call into the database object. By "merge" I mean: assign the fields that exist in both objects and leave the rest alone. Something along the lines of:

    person_scrape_reply = PersonScraperReply()
    person_in_db = PersonInDatabase()

    for field_in_API_name, field_in_API_value in person_scrape_reply.fields:  # for each field in the API response
        if field_in_API_name in person_in_db.field_names and field_in_API_value is not None:  # if the field exists in PersonInDatabase and the value is not None
            person_in_db.fields[field_in_API_name] = field_in_API_value  # assign the API response value to the field on the database object

    Is something like this possible?

    ANSWER

    Answered 2022-Jan-17 at 08:23

    Use the attrs package:

    1class PersonScraperReply(BaseModel):
    2    &quot;&quot;&quot;Represents a Person Scraper Reply&quot;&quot;&quot;
    3
    4    scraper_name: str
    5    &quot;&quot;&quot;Name of the scraper used to scrape this data&quot;&quot;&quot;
    6
    7    local_person_id: int
    8    &quot;&quot;&quot;Id of person in local database&quot;&quot;&quot;
    9
    10    local_person_name: str
    11    &quot;&quot;&quot;name of person in local database&quot;&quot;&quot;
    12
    13    aliases: Optional[list[str]] = None
    14    &quot;&quot;&quot;list of strings that represent the person's aliases obtained from scraper&quot;&quot;&quot;
    15
    16    description: Optional[str] = None
    17    &quot;&quot;&quot;String description of the person obtained from scraper&quot;&quot;&quot;
    18
    19    date_of_birth: Optional[date] = None
    20    &quot;&quot;&quot;Date of birth of the person obtained from scraper&quot;&quot;&quot;
    21
    22    date_of_death: Optional[date] = None
    23    &quot;&quot;&quot;Date the person passed away obtained from scraper&quot;&quot;&quot;
    24
    25    gender: Optional[GenderEnum] = None
    26    &quot;&quot;&quot;Gender of the person obtained from scraper&quot;&quot;&quot;
    27
    28    homepage: Optional[str] = None
    29    &quot;&quot;&quot;Person's official homepage obtained from scraper&quot;&quot;&quot;
    30
    31    place_of_birth: Optional[str] = None
    32    &quot;&quot;&quot;Location where the person wsa born obtained from scraper&quot;&quot;&quot;
    33
    34    profile_image_url: Optional[str] = None
    35    &quot;&quot;&quot;Url for person's profile image obtained from scraper&quot;&quot;&quot;
    36
    37    additional_images: Optional[list[str]] = None
    38    &quot;&quot;&quot;List of urls for additional images for the person obtained from scraper&quot;&quot;&quot;
    39
    40    scrape_status: ScrapeStatus
    41    &quot;&quot;&quot;status of scraping. Success or failure&quot;&quot;&quot;
    42class PersonInDatabase(Base):
    43
    44    id: int
    45    &quot;&quot;&quot;Person Id&quot;&quot;&quot;
    46
    47    name: str
    48    &quot;&quot;&quot;Person Name&quot;&quot;&quot;
    49    
    50    description: str = Column(String)
    51    &quot;&quot;&quot;Description of the person&quot;&quot;&quot;
    52
    53    gender: GenderEnum = Column(Enum(GenderEnum), nullable=False, default=GenderEnum.unspecified)
    54    &quot;&quot;&quot;Person's gender, 0=unspecified, 1=male, 2=female, 3=non-binary&quot;&quot;&quot;
    55
    56    tmdb_id: int = Column(Integer)
    57    &quot;&quot;&quot;Tmdb id&quot;&quot;&quot;
    58
    59    imdb_id: str = Column(String)
    60    &quot;&quot;&quot;IMDB id, in the format of nn[alphanumeric id]&quot;&quot;&quot;
    61
    62    place_of_birth: str = Column(String)
    63    &quot;&quot;&quot;Place of person's birth&quot;&quot;&quot;
    64
    65    # dates
    66    date_of_birth: DateTime = Column(DateTime)
    67    &quot;&quot;&quot;Date the person was born&quot;&quot;&quot;
    68
    69    date_of_death: DateTime = Column(DateTime)
    70    &quot;&quot;&quot;Date the person passed away&quot;&quot;&quot;
    71
    72    date_last_person_scrape: DateTime = Column(DateTime)
    73    &quot;&quot;&quot;Date last time the person was scraped&quot;&quot;&quot;
    74
    75person_scrape_reply = PersonScraperReply()
    76person_in_db = PersonInDatabase()
    77
    78
    79for field_in_API_name, field_in_API_value in person_scrape_reply.fields: #for field in API response
    80    if field_in_API_name in person_in_db.field_names and field_in_API_value is not None: #if field exists in PersonInDatabase and the value is not none
    81        person_in_db.fields[field_in_API_name] = field_in_API_value #assign API response value to field in database class.
    82
    83from attrs import define, asdict
    84
    85@define
    86class PersonScraperReply(BaseModel):
    87    &quot;&quot;&quot;Represents a Person Scraper Reply&quot;&quot;&quot;
    88
    89    scraper_name: str
    90    &quot;&quot;&quot;Name of the scraper used to scrape this data&quot;&quot;&quot;
    91
    92    local_person_id: int
    93    &quot;&quot;&quot;Id of person in local database&quot;&quot;&quot;
    94
    95    local_person_name: str
    96    &quot;&quot;&quot;name of person in local database&quot;&quot;&quot;
    97
    98    aliases: Optional[list[str]] = None
        &quot;&quot;&quot;List of strings that represent the person's aliases obtained from the scraper&quot;&quot;&quot;

        description: Optional[str] = None
        &quot;&quot;&quot;String description of the person obtained from the scraper&quot;&quot;&quot;

        date_of_birth: Optional[date] = None
        &quot;&quot;&quot;Date of birth of the person obtained from the scraper&quot;&quot;&quot;

        date_of_death: Optional[date] = None
        &quot;&quot;&quot;Date the person passed away, obtained from the scraper&quot;&quot;&quot;

        gender: Optional[GenderEnum] = None
        &quot;&quot;&quot;Gender of the person obtained from the scraper&quot;&quot;&quot;

        homepage: Optional[str] = None
        &quot;&quot;&quot;Person's official homepage obtained from the scraper&quot;&quot;&quot;

        place_of_birth: Optional[str] = None
        &quot;&quot;&quot;Location where the person was born, obtained from the scraper&quot;&quot;&quot;

        profile_image_url: Optional[str] = None
        &quot;&quot;&quot;URL of the person's profile image obtained from the scraper&quot;&quot;&quot;

        additional_images: Optional[list[str]] = None
        &quot;&quot;&quot;List of URLs of additional images of the person obtained from the scraper&quot;&quot;&quot;

        scrape_status: ScrapeStatus
        &quot;&quot;&quot;Status of the scrape: success or failure&quot;&quot;&quot;

    @define
    class PersonInDatabase(Base):

        id: int
        &quot;&quot;&quot;Person id&quot;&quot;&quot;

        name: str
        &quot;&quot;&quot;Person name&quot;&quot;&quot;

        description: str = Column(String)
        &quot;&quot;&quot;Description of the person&quot;&quot;&quot;

        gender: GenderEnum = Column(Enum(GenderEnum), nullable=False, default=GenderEnum.unspecified)
        &quot;&quot;&quot;Person's gender: 0=unspecified, 1=male, 2=female, 3=non-binary&quot;&quot;&quot;

        tmdb_id: int = Column(Integer)
        &quot;&quot;&quot;TMDB id&quot;&quot;&quot;

        imdb_id: str = Column(String)
        &quot;&quot;&quot;IMDB id, in the format nm[numeric id]&quot;&quot;&quot;

        place_of_birth: str = Column(String)
        &quot;&quot;&quot;Place of the person's birth&quot;&quot;&quot;

        # dates
        date_of_birth: DateTime = Column(DateTime)
        &quot;&quot;&quot;Date the person was born&quot;&quot;&quot;

        date_of_death: DateTime = Column(DateTime)
        &quot;&quot;&quot;Date the person passed away&quot;&quot;&quot;

        date_last_person_scrape: DateTime = Column(DateTime)
        &quot;&quot;&quot;Date the person was last scraped&quot;&quot;&quot;


    person_scrape_reply = PersonScraperReply()
    person_in_db = PersonInDatabase()
    scrape_asdict = asdict(person_scrape_reply)
    db_asdict = asdict(person_in_db)

    for field_in_API_name, field_in_API_value in scrape_asdict.items():  # for each field in the API response
        if field_in_API_name in db_asdict and field_in_API_value is not None:  # if the field exists in PersonInDatabase and its value is not None
            setattr(person_in_db, field_in_API_name, field_in_API_value)  # assign the API response value to the matching database field
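    The field-copy loop above can be exercised with a minimal, self-contained sketch. The ScrapeReply and DbPerson classes below are simplified stand-ins (plain stdlib dataclasses instead of the attrs/SQLAlchemy models) used only to illustrate the pattern:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ScrapeReply:
    """Simplified stand-in for PersonScraperReply."""
    name: Optional[str] = None
    description: Optional[str] = None
    favourite_colour: Optional[str] = None  # field with no database counterpart

@dataclass
class DbPerson:
    """Simplified stand-in for PersonInDatabase."""
    name: str = ""
    description: str = ""
    place_of_birth: str = ""

reply = ScrapeReply(name="Ada Lovelace", favourite_colour="green")
person = DbPerson()

# copy every non-None scraped field that also exists on the database object
for field_name, value in asdict(reply).items():
    if field_name in asdict(person) and value is not None:
        setattr(person, field_name, value)

print(person)  # name copied; description and place_of_birth left at their defaults
```

    The `field_name in asdict(person)` membership test mirrors the `db_asdict.keys()` check in the answer, so reply-only fields such as favourite_colour are silently skipped rather than raising.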

    Source https://stackoverflow.com/questions/70731264

    QUESTION

    Using pod Anti Affinity to force only 1 pod per node

    Asked 2022-Jan-01 at 12:50

    I am trying to get my deployment to deploy replicas only to nodes that aren't running rabbitmq (this is working) and that don't already have the pod I am deploying (not working).

    I can't seem to get this to work. For example, if I have 3 nodes (2 with the label app.kubernetes.io/part-of=rabbitmq), then both replicas get deployed to the remaining node. It is as if the deployment doesn't take into account the pods it creates itself when determining anti-affinity. My desired state is for it to deploy only 1 pod; the other one should not get scheduled.

    kind: Deployment
    metadata:
      name: test-scraper
      namespace: scrapers
      labels:
        k8s-app: test-scraper-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: testscraper
      template:
        metadata:
          labels:
            app: testscraper
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: app.kubernetes.io/part-of
                    operator: In
                    values:
                    - rabbitmq
                  - key: app
                    operator: In
                    values:
                    - testscraper
                namespaces: [scrapers, rabbitmq]
                topologyKey: &quot;kubernetes.io/hostname&quot;
          containers:
            - name: test-scraper
              image: #######:latest

    ANSWER

    Answered 2022-Jan-01 at 12:50

    I think that's because of the matchExpressions part of your manifest: it requires pods to have both labels, app.kubernetes.io/part-of: rabbitmq and app: testscraper, to satisfy the anti-affinity rule.

    Based on the deployment YAML you have provided, these pods will only have app: testscraper but NOT app.kubernetes.io/part-of: rabbitmq, hence both replicas get scheduled on the same node.

    From the documentation (the requirements are ANDed):

    kubectl explain pod.spec.affinity.podAntiAffinity.requiredDuringSchedulingIgnoredDuringExecution.labelSelector

    ...
    FIELDS:
       matchExpressions     &lt;[]Object&gt;
         matchExpressions is a list of label selector requirements.
         **The requirements are ANDed.**

    To express &quot;not on a rabbitmq node&quot; AND &quot;not on a node already running this pod&quot; as independent constraints, give each requirement its own anti-affinity term with its own labelSelector:

    kind: Deployment
    metadata:
      name: test-scraper
      namespace: scrapers
      labels:
        k8s-app: test-scraper-deployment
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: testscraper
      template:
        metadata:
          labels:
            app: testscraper
        spec:
          affinity:
            podAntiAffinity:
              requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchExpressions:
                  - key: app.kubernetes.io/part-of
                    operator: In
                    values:
                    - rabbitmq
                namespaces: [scrapers, rabbitmq]
                topologyKey: &quot;kubernetes.io/hostname&quot;
              - labelSelector:
                  matchExpressions:
                  - key: app
                    operator: In
                    values:
                    - testscraper
                namespaces: [scrapers, rabbitmq]
                topologyKey: &quot;kubernetes.io/hostname&quot;
          containers:
            - name: test-scraper
              image: #######:latest

    Source https://stackoverflow.com/questions/70547587

    QUESTION

    Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it. -Microk8s

    Asked 2021-Dec-27 at 08:21

    When I run the command kubectl get pods --all-namespaces, I get: Unable to connect to the server: dial tcp [::1]:8080: connectex: No connection could be made because the target machine actively refused it.

    All of my pods are running and ready 1/1, but when I run microk8s kubectl get service -n kube-system, I get:

    kubernetes-dashboard        ClusterIP   10.152.183.132   &lt;none&gt;        443/TCP    6h13m
    dashboard-metrics-scraper   ClusterIP   10.152.183.10    &lt;none&gt;        8000/TCP   6h13m

    I am missing kube-dns even though DNS is enabled. Also, when I start a proxy for all IP addresses with microk8s kubectl proxy --accept-hosts=.* --address=0.0.0.0 &, I only get Starting to serve on [::]:8001 and the job line, e.g. [1]84623, is missing.

    I am using microk8s and multipass with Hyper-V Manager on Windows, and I can't reach the dashboard in a browser. I am also a beginner; this is for my college paper. I saw something similar online, but it was for Azure.

    ANSWER

    Answered 2021-Dec-27 at 08:21

    Posting answer from comments for better visibility: Problem solved by reinstalling multipass and microk8s. Now it works.

    Source https://stackoverflow.com/questions/70489608

    QUESTION

    Reading Excel file Using PySpark: Failed to find data source: com.crealytics.spark.excel

    Asked 2021-Dec-26 at 06:00

    I'm trying to read an Excel file with Spark using Jupyter in VS Code, with Java version 1.8.0_311 (Oracle Corporation) and Scala version 2.12.15.

    Here is the code:

    # import necessary libraries
    import pandas as pd
    from pyspark.sql.types import StructType

    # entry point for Spark's functionality
    from pyspark import SparkContext, SparkConf, SQLContext

    configure = SparkConf().setAppName(&quot;name&quot;).setMaster(&quot;local&quot;)
    sc = SparkContext(conf=configure)
    sql = SQLContext(sc)

    # entry point for Spark's dataframes
    from pyspark.sql import SparkSession

    spark = SparkSession \
        .builder \
        .master(&quot;local&quot;) \
        .appName(&quot;pharmacy scraper&quot;) \
        .config(&quot;spark.jars.packages&quot;, &quot;com.crealytics:spark-excel_2.11:0.12.2&quot;) \
        .getOrCreate()

    # reading the excel file
    df_generika = spark.read.format(&quot;com.crealytics.spark.excel&quot;).option(&quot;useHeader&quot;, &quot;true&quot;).option(&quot;inferSchema&quot;, &quot;true&quot;).option(&quot;dataAddress&quot;, &quot;Sheet1&quot;).load(&quot;./../data/raw-data/generika.xlsx&quot;)

    Unfortunately, it produces an error

    Py4JJavaError: An error occurred while calling o36.load.
    : java.lang.ClassNotFoundException:
    Failed to find data source: com.crealytics.spark.excel. Please find packages at
    http://spark.apache.org/third-party-projects.html

    ANSWER

    Answered 2021-Dec-24 at 12:11

    Check your classpath: you must have the JAR containing com.crealytics.spark.excel on it.

    With Spark, the architecture is a bit different from traditional applications. You may need the JAR in several places: in your application, at the master level, and/or at the worker level. Ingestion (what you're doing) is done by the workers, so make sure they have this JAR on their classpath.
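    Separately, note that the coordinate in the question pairs spark-excel_2.11 with Scala 2.12.15. Spark packages are published per Scala binary version, and a mismatched suffix is a common cause of this ClassNotFoundException. A small sketch of building a matching coordinate; the helper name and the default package version 0.13.7 are illustrative assumptions, so pick a version actually published for your Scala/Spark combination:

```python
# Hypothetical helper: derive a spark-excel Maven coordinate whose Scala
# suffix matches the Scala binary version Spark was built with.
def spark_excel_coordinate(scala_version: str, package_version: str = "0.13.7") -> str:
    # "2.12.15" -> binary version "2.12"
    scala_binary = ".".join(scala_version.split(".")[:2])
    return f"com.crealytics:spark-excel_{scala_binary}:{package_version}"

coord = spark_excel_coordinate("2.12.15")
print(coord)  # com.crealytics:spark-excel_2.12:0.13.7
```

    The resulting string would then be passed via .config(&quot;spark.jars.packages&quot;, coord) when building the SparkSession, as in the question's code.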

    Source https://stackoverflow.com/questions/70468254

    Community Discussions contain sources that include Stack Exchange Network

    Tutorials and Learning Resources in Scraper

    Tutorials and Learning Resources are not available at this moment for Scraper
