Speech processing is the study of speech signals and of the methods used to process them. The signals are usually handled in a digital representation, so speech processing can be regarded as a special case of digital signal processing applied to speech. Aspects of speech processing include the acquisition, manipulation, storage, transfer and output of speech signals. Recognizing speech input is called speech recognition, and generating speech output is called speech synthesis.
Popular New Releases in Speech
DeepSpeech
v0.10.0-alpha.3
leon
1.0.0-beta.6 / Leon Over HTTP + Making Friends with Coqui STT
wav2letter
v0.2 (pre Flashlight-consolidation)
pydub
v0.25.1
TTS
TTS v0.0.9 (first release)
Popular Libraries in Speech
by CorentinJ python
32619 NOASSERTION
Clone a voice in 5 seconds to generate arbitrary speech in real-time
by mozilla c++
18003 MPL-2.0
DeepSpeech is an open source embedded (offline, on-device) speech-to-text engine which can run in real time on devices ranging from a Raspberry Pi 4 to high power GPU servers.
by kaldi-asr shell
10821 NOASSERTION
kaldi-asr/kaldi is the official location of the Kaldi project.
by zealdocs c++
8973 GPL-3.0
Offline documentation browser inspired by Dash
by leon-ai javascript
8625 MIT
🧠 Leon is your open-source personal assistant.
by TalAter javascript
6145 MIT
:speech_balloon: Speech recognition for your site
by flashlight c++
5927 NOASSERTION
Facebook AI Research's Automatic Speech Recognition Toolkit
by Uberi python
5813 NOASSERTION
Speech recognition module for Python, supporting several engines and APIs, online and offline.
by jiaaro python
5557 MIT
Manipulate audio with a simple and easy high level interface
Trending New libraries in Speech
by coqui-ai python
4575 MPL-2.0
🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
by TensorSpeech python
2140 Apache-2.0
:stuck_out_tongue_closed_eyes: TensorFlowTTS: Real-time state-of-the-art speech synthesis for TensorFlow 2 (supports English, French, Korean, Chinese and German; easy to adapt to other languages)
by wenet-e2e c++
1943 Apache-2.0
Production First and Production Ready End-to-End Speech Recognition Toolkit
by boston-dynamics python
1812 NOASSERTION
Spot SDK repo
by coqui-ai c++
1158 MPL-2.0
🐸STT - The deep learning toolkit for Speech-to-Text. Training and deploying STT models has never been so easy.
by kakaobrain python
925 NOASSERTION
PORORO: Platform Of neuRal mOdels for natuRal language prOcessing
by facebookresearch python
803 NOASSERTION
We provide a PyTorch implementation of the paper Voice Separation with an Unknown Number of Multiple Speakers, which presents a new method for separating a mixed audio sequence in which multiple voices speak simultaneously. The method employs gated neural networks trained to separate the voices at multiple processing steps while keeping the speaker in each output channel fixed. A separate model is trained for each possible number of speakers, and the model with the largest number of speakers is used to infer the actual number of speakers in a given sample. Our method greatly outperforms the current state of the art, which, as we show, is not competitive for more than two speakers.
by as-ideas python
712 NOASSERTION
🤖💬 Transformer TTS: Implementation of a non-autoregressive Transformer based neural network for text to speech.
by mobvoi python
687 Apache-2.0
Production First and Production Ready End-to-End Speech Recognition Toolkit
Top Authors in Speech
1
16 Libraries
323
2
12 Libraries
739
3
12 Libraries
929
4
10 Libraries
5354
5
10 Libraries
72
6
10 Libraries
943
7
9 Libraries
305
8
9 Libraries
23389
9
9 Libraries
746
10
9 Libraries
597
Trending Kits in Speech
Real-time speech recognition in Python refers to the ability of a computer program to transcribe spoken words into written text in real time. You can use a library like SpeechRecognition to recognize speech in real time in Python. It supports several engines and APIs, such as Microsoft Bing Voice Recognition and Google Speech Recognition.
Real-time voice recognition in Python has a wide range of uses, including:
- Voice-controlled assistants: These virtual assistants, like Siri or Alexa, can be operated via voice commands.
- Speech-to-text transcription: This tool turns audible words into written text and is useful in professions including journalism, law, and medicine.
- Voice biometrics: This application uses a person's distinctive voice patterns to authenticate and identify them.
- Real-time language translation: This program helps people who speak various languages communicate more easily by translating spoken words from one language to another.
- Speech-based accessibility: Applications that assist people with disabilities, such as text-to-speech or speech-to-text for the visually impaired.
Here is how you can recognize speech in real-time in Python:
Fig 1: Preview of the output that you will get on running this code from your IDE
Code
In this solution, we use the Recognizer class of the SpeechRecognition library.
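The snippet referenced by the steps below is not embedded in this page; as a stand-in, here is a minimal sketch assuming the SpeechRecognition package (with PyAudio for microphone access):

```python
import speech_recognition as sr  # pip install SpeechRecognition

recognizer = sr.Recognizer()

# Capture one phrase from the default microphone (requires PyAudio).
with sr.Microphone() as source:
    print("Say something...")
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    audio = recognizer.listen(source)

try:
    # Send the captured audio to the free Google Web Speech API.
    print("You said:", recognizer.recognize_google(audio))
except sr.UnknownValueError:
    print("Could not understand the audio")
except sr.RequestError as e:
    print(f"API request failed: {e}")
```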
- Copy the code using the "Copy" button above, and paste it in a Python file in your IDE.
- Run the file. You will be prompted to speak something through your microphone
- The speech in real-time gets processed and displayed on screen
I hope you found this useful. I have added links to the dependent libraries and version information in the following sections.
I found this code snippet by searching for "speech recognition in python" in kandi. You can try any such use case!
Dependent Libraries
If you do not have SpeechRecognition, which is required to run this code, you can install it by clicking the link above and copying the pip install command from the SpeechRecognition page on kandi.
You can search for any dependent library on kandi like SpeechRecognition.
Environment Tested
I tested this solution in the following versions. Be mindful of changes when working with other versions.
- The solution is created in Python3.9.
- The solution is tested on SpeechRecognition 3.8.1 and PyAudio 0.2.12 versions.
Using this solution, we are able to recognize speech in real time in Python with simple steps. This process also facilitates an easy-to-use, hassle-free method to create a hands-on working version of code that helps us recognize speech in real time in Python.
Support
- For any support on kandi solution kits, please use the chat
- For further learning resources, visit the Open Weaver Community learning page.
Speech recognition converts spoken words to text. The libraries here support engines such as the Google Speech Engine, Cloud Speech API, Bing Voice Recognition, and IBM Speech to Text.
Python is a multipurpose language that can be used to develop many kinds of applications, including web apps. It has many libraries dedicated to speech recognition, text-to-speech conversion, and text analysis.
In this article, I have listed some of the best Python speech recognition libraries along with their key features. In this kit, we will go through libraries such as Real-Time-Voice-Cloning - clone a voice in 5 seconds to generate arbitrary speech; speech_recognition - a speech recognition module for Python supporting several engines; and wav2letter - Facebook AI Research's automatic speech recognition toolkit. Find the top 18 best Python speech recognition libraries in 2022.
Emotion Detection and Recognition is related to Sentiment Analysis. Sentiment Analysis aims to detect positive, neutral, or negative feelings from text.
Emotion Analysis aims to detect and recognize types of feelings through the expression of texts, such as joy, anger, fear, and sadness.
In this kit, we build an AI based Speech Emotion Detector using open-source libraries. The concepts covered in the kit are:
- Voice-to-text transcription - The speech can be captured in real time through the microphone or by uploading an audio file. It is then converted to text using state-of-the-art open-source models from OpenAI's Whisper library.
- Emotion detection- Emotion detection on the transcribed text is carried out using a finetuned XLM-RoBERTa model.
Whisper is a general-purpose speech recognition model released by OpenAI that can perform multilingual speech recognition as well as speech translation and language identification. Combined with an emotion detection model, this allows for detecting emotion directly from speech in multiple languages.
XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It can be finetuned to perform any specific task such as emotion classification, text completion etc. Combining these, the emotion detection model could be used to transcribe and detect different emotions to enable a data-driven analysis.
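As a rough sketch of that pipeline: the Whisper call below matches the openai-whisper package, while the audio path and the emotion-model checkpoint name are placeholders (the kit does not name its finetuned XLM-RoBERTa checkpoint), so substitute your own:

```python
import whisper                     # pip install openai-whisper (needs ffmpeg)
from transformers import pipeline  # pip install transformers

# 1. Voice-to-text transcription with a small Whisper model.
asr_model = whisper.load_model("base")
text = asr_model.transcribe("speech_sample.wav")["text"]  # path is an assumption

# 2. Emotion detection on the transcript. Replace the placeholder below with
#    the actual finetuned XLM-RoBERTa emotion checkpoint you are using.
classifier = pipeline("text-classification",
                      model="your-finetuned-xlm-roberta-emotion-model")
print(classifier(text))  # list of {'label': ..., 'score': ...} dicts
```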
Libraries used in this solution
Development Environment
VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web-based interactive environment often used for experiments, whereas VSCode provides a typical IDE experience for developers. Jupyter Notebook is used for our development.
Machine Learning
The machine learning libraries and frameworks listed here help provide state-of-the-art solutions.
Kit Solution Source
APP Interface
Support
For any support, you can reach us at OpenWeaver Community Support
Tkinter provides a fast and easy way to create GUI applications in Python, making it a compelling choice for building GUIs.
We can make a computer understand human language in Python. For that, there is a library called SpeechRecognition (an open-source module), which listens to spoken words and identifies the user's voice. You can also program some devices to respond to these spoken words; for example, we can feed commands to a robot such as switching off the fan or turning on the AC. SpeechRecognition lets our computer's microphone pick up the speech. For that, we'll have to install a library called PyAudio, which is used to capture our voice as well as play sound; it mainly handles audio binding and recording the user's voice on a variety of platforms.
Google Speech-to-Text is a well-known speech transcription API used by the speech recognition library, and there is another engine called DeepSpeech, an open-source embedded speech-to-text engine designed to run in real time on devices. Speech recognition is a significant part of AI, where AI is the ability of a machine to mimic human behavior and learn from its environment. The Tkinter library is used to create graphical user interfaces (GUIs) and is included in all standard Python distributions. It has a variety of geometry managers such as pack(), grid(), and place().
Here is an example of how you can build a speech recognition GUI using Tkinter in Python:
Fig : Preview of the output that you will get on running this code from your IDE.
Code
In this solution, we're using the SpeechRecognition, PyAudio, and Tkinter libraries.
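The snippet referenced by the steps below is not embedded in this page; here is a minimal sketch of the described GUI, assuming the SpeechRecognition package (with PyAudio): a button that records speech and displays the recognized text in a label.

```python
import tkinter as tk
import speech_recognition as sr  # pip install SpeechRecognition PyAudio

def recognize():
    """Capture one phrase from the microphone and show the result in the label."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        result_var.set("Listening...")
        window.update_idletasks()
        audio = recognizer.listen(source)
    try:
        result_var.set(recognizer.recognize_google(audio))
    except sr.UnknownValueError:
        result_var.set("Could not understand the audio")
    except sr.RequestError as e:
        result_var.set(f"API request failed: {e}")

window = tk.Tk()
window.title("Speech to Text")
result_var = tk.StringVar(value="Press the button and speak")
tk.Button(window, text="Speak", command=recognize).pack(padx=20, pady=10)
tk.Label(window, textvariable=result_var, wraplength=300).pack(padx=20, pady=10)
window.mainloop()
```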
Instructions
Follow the steps carefully to get the output easily.
- Install SpeechRecognition and PyAudio in your IDE (any IDE of your choice).
- Open a terminal and install the above-mentioned libraries using the commands given in steps 3 and 4.
- Speech Recognition - pip install SpeechRecognition.
- PyAudio - pip install PyAudio.
- Copy the snippet using the 'Copy' button and paste it in your IDE.
- Run the file to generate the output.
I hope you found this useful. I have added links to the dependent libraries and version information in the following sections.
I found this code snippet by searching for 'integrate speech recognition and tkinter' in kandi. You can try any such use case!
Environment Tested
I tested this solution in the following versions. Be mindful of changes when working with other versions.
- The solution is created in PyCharm 2021.3.
- The solution is tested on Python 3.9.7.
- SpeechRecognition version-3.9.0.
- PyAudio version-0.2.13.
Using this solution, we are able to integrate speech recognition and Tkinter with simple steps. This process also facilitates an easy-to-use, hassle-free method to create a hands-on working version of code that helps us integrate speech recognition and Tkinter.
Dependent Libraries
You can also search for any dependent libraries on kandi like 'SpeechRecognition','PyAudio' and 'tkinter'.
Support
- For any support on kandi solution kits, please use the chat
- For further learning resources, visit the Open Weaver Community learning page.
Real-time speech recognition in Python refers to the ability of a computer program to transcribe spoken words into written text in real time. You can use a library like SpeechRecognition to recognize speech in real time in Python. It supports several engines and APIs, such as Microsoft Bing Voice Recognition and Google Speech Recognition.
Real-time voice recognition in Python has a wide range of uses, including:
- Voice-controlled assistants: These virtual assistants, like Siri or Alexa, can be operated via voice commands.
- Speech-to-text transcription: This tool turns audible words into written text and is useful in professions including journalism, law, and medicine.
- Voice biometrics: This application uses a person's distinctive voice patterns to authenticate and identify them.
- Real-time language translation: This program helps people who speak various languages communicate more easily by translating spoken words from one language to another.
- Speech-based accessibility: Applications that assist people with disabilities, such as text-to-speech or speech-to-text for the visually impaired.
Here is how you can recognize speech in real time in Python:
Fig : Preview of the output that you will get on running this code from your IDE.
Code
In this solution, we're using the SpeechRecognition and PyAudio libraries.
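The snippet referenced by the steps below is not embedded in this page; as a stand-in, here is a minimal continuous-listening sketch assuming the SpeechRecognition package (with PyAudio): it keeps transcribing phrases until interrupted.

```python
import speech_recognition as sr  # pip install SpeechRecognition PyAudio

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    recognizer.adjust_for_ambient_noise(source)  # calibrate for background noise
    print("Speak (Ctrl+C to stop)...")
    while True:
        audio = recognizer.listen(source)  # blocks until a phrase is captured
        try:
            print("You said:", recognizer.recognize_google(audio))
        except sr.UnknownValueError:
            print("(could not understand, try again)")
        except sr.RequestError as e:
            print(f"API request failed: {e}")
            break
```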
Instructions
Follow the steps carefully to get the output easily.
- Install SpeechRecognition and PyAudio in your IDE (any IDE of your choice).
- Open a terminal and install the above-mentioned libraries using the commands given in steps 3 and 4.
- Speech Recognition - pip install SpeechRecognition.
- PyAudio - pip install PyAudio.
- Copy the snippet using the 'Copy' button and paste it in your IDE.
- Run the file to generate the output.
I hope you found this useful. I have added links to the dependent libraries and version information in the following sections.
I found this code snippet by searching for 'Speech recognition using python' in kandi. You can try any such use case!
Environment Tested
I tested this solution in the following versions. Be mindful of changes when working with other versions.
- The solution is created in PyCharm 2021.3.
- The solution is tested on Python 3.9.7.
- SpeechRecognition version-3.9.0.
- PyAudio version-0.2.13.
Using this solution, we are able to implement speech recognition in Python with simple steps. This process also facilitates an easy-to-use, hassle-free method to create a hands-on working version of code that helps us implement speech recognition in Python.
Dependent Libraries
You can also search for any dependent libraries on kandi like 'SpeechRecognition' and 'PyAudio'.
Support
- For any support on kandi solution kits, please use the chat
- For further learning resources, visit the Open Weaver Community learning page.
DESCRIPTION
The provided Python code demonstrates a real-time speech-to-text translation system using the SpeechRecognition and Googletrans libraries. The purpose of this code is to convert spoken language into written text and then translate it into the desired target language.
The code consists of two main functions:
- speech_to_text(): This function utilizes the SpeechRecognition library to capture audio input from the default microphone. It then attempts to convert the speech to text using the Google Web Speech API. If successful, the recognized text is printed to the console. If there is an issue with speech recognition (e.g., when the input speech is not clear or recognizable), appropriate error messages are displayed.
- translate_text(text, target_language='ta'): In this function, the Googletrans library is used to translate the input text into the target language. By default, the target language is set to Tamil ('ta'), but you can specify any other language code as needed. The translated text is printed to the console, and it is also returned for further use.
The code demonstrates a practical implementation of real-time speech recognition and translation, which could have various applications, such as language learning, multilingual communication, and voice-controlled systems.
Note: Ensure that you have the required dependencies, such as SpeechRecognition and Googletrans, installed in your Python environment to run the code successfully.
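The two described functions can be sketched as follows. This assumes the synchronous Translator API of the googletrans package (newer googletrans releases are async, so pin a version that matches, e.g. 4.0.0rc1):

```python
import speech_recognition as sr     # pip install SpeechRecognition
from googletrans import Translator  # pip install googletrans==4.0.0rc1

def speech_to_text():
    """Capture audio from the default microphone and return the recognized text."""
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        print("Speak now...")
        audio = recognizer.listen(source)
    try:
        text = recognizer.recognize_google(audio)  # Google Web Speech API
        print("Recognized:", text)
        return text
    except sr.UnknownValueError:
        print("Could not understand the audio")
    except sr.RequestError as e:
        print(f"Speech service error: {e}")
    return None

def translate_text(text, target_language='ta'):
    """Translate text into the target language (Tamil by default)."""
    translated = Translator().translate(text, dest=target_language)
    print("Translated:", translated.text)
    return translated.text

if __name__ == "__main__":
    spoken = speech_to_text()
    if spoken:
        translate_text(spoken)
```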
DEPENDENT LIBRARIES
GITHUB REPOSITORY LINK
AkashS333/Real-Time-Speech-to-Text-Translation-in-Python-using-Speech-Recognition (github.com)
SOLUTION SOURCE SCREENSHOT
Here are some of the famous C++ Speech Recognition Libraries. Some use cases of C++ Speech Recognition Libraries include Automated Voice Assistants, Voice-Enabled Games, Voice-Enabled Mobile Apps, Voice-Controlled Robotics, Voice-Enabled Chatbots, Voice-controlled Home Automation, Language Translators, and Text-to-Speech Conversion.
C++ speech recognition libraries are sets of software tools and libraries written in the C++ programming language that are designed to enable developers to create applications that can recognize and respond to spoken commands. These libraries may include tools for voice recognition, speech synthesis, natural language processing, and other related tasks.
Let us have a look at some of the famous C++ Speech Recognition Libraries.
sphinx
- Highly configurable, allowing developers to customize it to their needs.
- Supports continuous speech recognition, allowing for continuous dictation.
- Wide range of applications, from voice-driven search engines to interactive home automation systems.
kaldi-gop
- Faster and more accurate than many other C++ speech recognition libraries.
- Enables users to quickly create and train acoustic models, making it ideal for real-time applications.
- Provides excellent integration with other open-source libraries, such as OpenFst and HTK.
AaltoASR
- Speech recognition accuracy is higher than many other C++ speech recognition libraries.
- Designed to be user-friendly, making it easy to integrate into existing applications.
- Can be compiled for Windows, Linux and macOS, making it suitable for a wide range of applications.
openEAR
- Tightly integrated with the CMU Sphinx speech recognizer.
- Supports multiple languages and can be customized for specific languages or dialects.
- Provides a robust set of tools for pre-processing and analyzing speech data.
htk
- Full-scale speech recognition system which can be used for both research and industrial applications.
- Wide range of signal processing, acoustic modeling, and language modeling algorithms.
- A graphical user interface that makes it more straightforward to work with.
Listener
- Compatible with multiple operating systems, including Windows, Mac, and Linux.
- Processes speech and audio commands quickly, allowing for fast responses and actions.
- High accuracy rate for speech recognition, making it a reliable and trustworthy library.
PocketSphinx
- Designed to be fast and responsive, ensuring quick recognition times for the user.
- Straightforward library, making it easy to use for developers with any level of experience.
- Can be freely modified and redistributed.
Trending Discussions on Speech
Enable use of images from the local library on Kubernetes
IndexError: tuple index out of range when I try to create an executable from a python script using auto-py-to-exe
Google Actions Builder stops execution when selecting a visual item from a List
How to use muti-language in 'gTTS' for single input line?
Assigning True/False if a token is present in a data-frame
speechSynthesis.getVoices (Web Speech API) doesn't show some of the locally installed voices
Combining Object Detection with Text to Speech Code
Yielding values from consecutive parallel parse functions via meta in Scrapy
Rails. Puma stops working when instantiating a client of Google Cloud Text-to-Speech (Windows)
R - Regular Expression to Extract Text Between Parentheses That Contain Keyword
QUESTION
Enable use of images from the local library on Kubernetes
Asked 2022-Mar-20 at 13:23
I'm following a tutorial https://docs.openfaas.com/tutorials/first-python-function/,
currently, I have the right image
$ docker images | grep hello-openfaas
wm/hello-openfaas latest bd08d01ce09b 34 minutes ago 65.2MB
$ faas-cli deploy -f ./hello-openfaas.yml
Deploying: hello-openfaas.
WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.

Deployed. 202 Accepted.
URL: http://IP:8099/function/hello-openfaas
There is a step that forewarns me to do some setup (in my case I'm using Kubernetes and minikube and don't want to push to a remote container registry, so I should enable the use of images from the local library on Kubernetes). I see the hints:
see the helm chart for how to set the ImagePullPolicy
I'm not sure how to configure it correctly. The final result indicates I failed.
Unsurprisingly, I couldn't access the function service, I find some clues in https://docs.openfaas.com/deployment/troubleshooting/#openfaas-didnt-start which might help to diagnose the problem.
$ kubectl logs -n openfaas-fn deploy/hello-openfaas
Error from server (BadRequest): container "hello-openfaas" in pod "hello-openfaas-558f99477f-wd697" is waiting to start: trying and failing to pull image

$ kubectl describe -n openfaas-fn deploy/hello-openfaas
Name:                   hello-openfaas
Namespace:              openfaas-fn
CreationTimestamp:      Wed, 16 Mar 2022 14:59:49 +0800
Labels:                 faas_function=hello-openfaas
Annotations:            deployment.kubernetes.io/revision: 1
                        prometheus.io.scrape: false
Selector:               faas_function=hello-openfaas
Replicas:               1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  0 max unavailable, 1 max surge
Pod Template:
  Labels:       faas_function=hello-openfaas
  Annotations:  prometheus.io.scrape: false
  Containers:
   hello-openfaas:
    Image:        wm/hello-openfaas:latest
    Port:         8080/TCP
    Host Port:    0/TCP
    Liveness:     http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
    Readiness:    http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
    Environment:
      fprocess:  python3 index.py
    Mounts:      <none>
  Volumes:       <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      False   MinimumReplicasUnavailable
  Progressing    False   ProgressDeadlineExceeded
OldReplicaSets:  <none>
NewReplicaSet:   hello-openfaas-558f99477f (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  29m   deployment-controller  Scaled up replica set hello-openfaas-558f99477f to 1
hello-openfaas.yml
version: 1.0
provider:
  name: openfaas
  gateway: http://IP:8099
functions:
  hello-openfaas:
    lang: python3
    handler: ./hello-openfaas
    image: wm/hello-openfaas:latest
    imagePullPolicy: Never
I create a new project hello-openfaas2 to reproduce this error:
$ faas-cli new --lang python3 hello-openfaas2 --prefix="wm"
Folder: hello-openfaas2 created.
# I add `imagePullPolicy: Never` to `hello-openfaas2.yml`
$ faas-cli build -f ./hello-openfaas2.yml
$ faas-cli deploy -f ./hello-openfaas2.yml
Deploying: hello-openfaas2.
WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.

Deployed. 202 Accepted.
URL: http://192.168.1.3:8099/function/hello-openfaas2


$ kubectl logs -n openfaas-fn deploy/hello-openfaas2
Error from server (BadRequest): container "hello-openfaas2" in pod "hello-openfaas2-7c67488865-7d7vm" is waiting to start: image can't be pulled

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                               READY   STATUS             RESTARTS   AGE
kube-system   coredns-64897985d-kp7vf            1/1     Running            0          47h
...
openfaas-fn   env-6c79f7b946-bzbtm               1/1     Running            0          4h28m
openfaas-fn   figlet-54db496f88-957xl            1/1     Running            0          18h
openfaas-fn   hello-openfaas-547857b9d6-z277c    0/1     ImagePullBackOff   0          127m
openfaas-fn   hello-openfaas-7b6946b4f9-hcvq4    0/1     ImagePullBackOff   0          165m
openfaas-fn   hello-openfaas2-7c67488865-qmrkl   0/1     ImagePullBackOff   0          13m
openfaas-fn   hello-openfaas3-65847b8b67-b94kd   0/1     ImagePullBackOff   0          97m
openfaas-fn   hello-python-554b464498-zxcdv      0/1     ErrImagePull       0          3h23m
openfaas-fn   hello-python-8698bc68bd-62gh9      0/1     ImagePullBackOff   0          3h25m
From https://docs.openfaas.com/reference/yaml/, I know I put the imagePullPolicy in the wrong place; there is no such keyword in its schema.
I also tried eval $(minikube docker-env) and still get the same error.
I have a feeling that faas-cli deploy can be replaced by helm; they both mean to run the image (whether from remote or local) in the Kubernetes cluster, and then I could use the helm chart to set up the pullPolicy there. Even though the details are still not clear to me, this discovery inspires me.
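One way to act on that hint (a sketch based on the OpenFaaS helm chart's documented values, not verified in this question) is to build into minikube's image store and set the chart-level pull policy that applies to every deployed function:

```shell
# Point the local docker client at minikube's daemon so the built image
# lands directly in the cluster's image store (no registry push needed).
eval $(minikube docker-env)
faas-cli build -f ./hello-openfaas.yml

# Re-install/upgrade OpenFaaS with the chart value that controls the
# imagePullPolicy applied to deployed functions.
helm repo add openfaas https://openfaas.github.io/faas-netes/
helm upgrade openfaas --install openfaas/openfaas \
  --namespace openfaas \
  --set functions.imagePullPolicy=Never

# Then deploy again; the function image is taken from the local store.
faas-cli deploy -f ./hello-openfaas.yml
```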
So far, after eval $(minikube docker-env)
$ docker images | grep hello-openfaas
wm/hello-openfaas latest bd08d01ce09b 34 minutes ago 65.2MB
$ faas-cli deploy -f ./hello-openfaas.yml
Deploying: hello-openfaas.
WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.

Deployed. 202 Accepted.
URL: http://IP:8099/function/hello-openfaas
see the helm chart for how to set the ImagePullPolicy
$ kubectl logs -n openfaas-fn deploy/hello-openfaas
Error from server (BadRequest): container "hello-openfaas" in pod "hello-openfaas-558f99477f-wd697" is waiting to start: trying and failing to pull image

$ kubectl describe -n openfaas-fn deploy/hello-openfaas
Name: hello-openfaas
Namespace: openfaas-fn
CreationTimestamp: Wed, 16 Mar 2022 14:59:49 +0800
Labels: faas_function=hello-openfaas
Annotations: deployment.kubernetes.io/revision: 1
 prometheus.io.scrape: false
Selector: faas_function=hello-openfaas
Replicas: 1 desired | 1 updated | 1 total | 0 available | 1 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 0 max unavailable, 1 max surge
Pod Template:
 Labels: faas_function=hello-openfaas
 Annotations: prometheus.io.scrape: false
 Containers:
  hello-openfaas:
   Image: wm/hello-openfaas:latest
   Port: 8080/TCP
   Host Port: 0/TCP
   Liveness: http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
   Readiness: http-get http://:8080/_/health delay=2s timeout=1s period=2s #success=1 #failure=3
   Environment:
    fprocess: python3 index.py
   Mounts: <none>
 Volumes: <none>
Conditions:
 Type Status Reason
 ---- ------ ------
 Available False MinimumReplicasUnavailable
 Progressing False ProgressDeadlineExceeded
OldReplicaSets: <none>
NewReplicaSet: hello-openfaas-558f99477f (1/1 replicas created)
Events:
 Type Reason Age From Message
 ---- ------ ---- ---- -------
 Normal ScalingReplicaSet 29m deployment-controller Scaled up replica set hello-openfaas-558f99477f to 1

# hello-openfaas.yml
version: 1.0
provider:
  name: openfaas
  gateway: http://IP:8099
functions:
  hello-openfaas:
    lang: python3
    handler: ./hello-openfaas
    image: wm/hello-openfaas:latest
    imagePullPolicy: Never

$ faas-cli new --lang python3 hello-openfaas2 --prefix="wm"
Folder: hello-openfaas2 created.
# I add `imagePullPolicy: Never` to `hello-openfaas2.yml`
$ faas-cli build -f ./hello-openfaas2.yml
$ faas-cli deploy -f ./hello-openfaas2.yml
Deploying: hello-openfaas2.
WARNING! You are not using an encrypted connection to the gateway, consider using HTTPS.

Deployed. 202 Accepted.
URL: http://192.168.1.3:8099/function/hello-openfaas2

$ kubectl logs -n openfaas-fn deploy/hello-openfaas2
Error from server (BadRequest): container "hello-openfaas2" in pod "hello-openfaas2-7c67488865-7d7vm" is waiting to start: image can't be pulled

$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-kp7vf 1/1 Running 0 47h
...
openfaas-fn env-6c79f7b946-bzbtm 1/1 Running 0 4h28m
openfaas-fn figlet-54db496f88-957xl 1/1 Running 0 18h
openfaas-fn hello-openfaas-547857b9d6-z277c 0/1 ImagePullBackOff 0 127m
openfaas-fn hello-openfaas-7b6946b4f9-hcvq4 0/1 ImagePullBackOff 0 165m
openfaas-fn hello-openfaas2-7c67488865-qmrkl 0/1 ImagePullBackOff 0 13m
openfaas-fn hello-openfaas3-65847b8b67-b94kd 0/1 ImagePullBackOff 0 97m
openfaas-fn hello-python-554b464498-zxcdv 0/1 ErrImagePull 0 3h23m
openfaas-fn hello-python-8698bc68bd-62gh9 0/1 ImagePullBackOff 0 3h25m
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
wm/hello-openfaas2 0.1 03c21bd96d5e About an hour ago 65.2MB
python 3-alpine 69fba17b9bae 12 days ago 48.6MB
ghcr.io/openfaas/figlet latest ca5eef0de441 2 weeks ago 14.8MB
ghcr.io/openfaas/alpine latest 35f3d4be6bb8 2 weeks ago 14.2MB
ghcr.io/openfaas/faas-netes 0.14.2 524b510505ec 3 weeks ago 77.3MB
k8s.gcr.io/kube-apiserver v1.23.3 f40be0088a83 7 weeks ago 135MB
k8s.gcr.io/kube-controller-manager v1.23.3 b07520cd7ab7 7 weeks ago 125MB
k8s.gcr.io/kube-scheduler v1.23.3 99a3486be4f2 7 weeks ago 53.5MB
k8s.gcr.io/kube-proxy v1.23.3 9b7cc9982109 7 weeks ago 112MB
ghcr.io/openfaas/gateway 0.21.3 ab4851262cd1 7 weeks ago 30.6MB
ghcr.io/openfaas/basic-auth 0.21.3 16e7168a17a3 7 weeks ago 14.3MB
k8s.gcr.io/etcd 3.5.1-0 25f8c7f3da61 4 months ago 293MB
ghcr.io/openfaas/classic-watchdog 0.2.0 6f97aa96da81 4 months ago 8.18MB
k8s.gcr.io/coredns/coredns v1.8.6 a4ca41631cc7 5 months ago 46.8MB
k8s.gcr.io/pause 3.6 6270bb605e12 6 months ago 683kB
ghcr.io/openfaas/queue-worker 0.12.2 56e7216201bc 7 months ago 7.97MB
kubernetesui/dashboard v2.3.1 e1482a24335a 9 months ago 220MB
kubernetesui/metrics-scraper v1.0.7 7801cfc6d5c0 9 months ago 34.4MB
nats-streaming 0.22.0 12f2d32e0c9a 9 months ago 19.8MB
gcr.io/k8s-minikube/storage-provisioner v5 6e38f40d628d 11 months ago 31.5MB
functions/markdown-render latest 93b5da182216 2 years ago 24.6MB
functions/hubstats latest 01affa91e9e4 2 years ago 29.3MB
functions/nodeinfo latest 2fe8a87bf79c 2 years ago 71.4MB
functions/alpine latest 46c6f6d74471 2 years ago 21.5MB
prom/prometheus v2.11.0 b97ed892eb23 2 years ago 126MB
prom/alertmanager v0.18.0 ce3c87f17369 2 years ago 51.9MB
alexellis2/openfaas-colorization 0.4.1 d36b67b1b5c1 2 years ago 1.84GB
rorpage/text-to-speech latest 5dc20810eb54 2 years ago 86.9MB
stefanprodan/faas-grafana 4.6.3 2a4bd9caea50 4 years ago 284MB

$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-64897985d-kp7vf 1/1 Running 0 6d
kube-system etcd-minikube 1/1 Running 0 6d
kube-system kube-apiserver-minikube 1/1 Running 0 6d
kube-system kube-controller-manager-minikube 1/1 Running 0 6d
kube-system kube-proxy-5m8lr 1/1 Running 0 6d
kube-system kube-scheduler-minikube 1/1 Running 0 6d
kube-system storage-provisioner 1/1 Running 1 (6d ago) 6d
kubernetes-dashboard dashboard-metrics-scraper-58549894f-97tsv 1/1 Running 0 5d7h
kubernetes-dashboard kubernetes-dashboard-ccd587f44-lkwcx 1/1 Running 0 5d7h
openfaas-fn base64-6bdbcdb64c-djz8f 1/1 Running 0 5d1h
openfaas-fn colorise-85c74c686b-2fz66 1/1 Running 0 4d5h
openfaas-fn echoit-5d7df6684c-k6ljn 1/1 Running 0 5d1h
openfaas-fn env-6c79f7b946-bzbtm 1/1 Running 0 4d5h
openfaas-fn figlet-54db496f88-957xl 1/1 Running 0 4d19h
openfaas-fn hello-openfaas-547857b9d6-z277c 0/1 ImagePullBackOff 0 4d3h
openfaas-fn hello-openfaas-7b6946b4f9-hcvq4 0/1 ImagePullBackOff 0 4d3h
openfaas-fn hello-openfaas2-5c6f6cb5d9-24hkz 0/1 ImagePullBackOff 0 9m22s
openfaas-fn hello-openfaas2-8957bb47b-7cgjg 0/1 ImagePullBackOff 0 2d22h
openfaas-fn hello-openfaas3-65847b8b67-b94kd 0/1 ImagePullBackOff 0 4d2h
openfaas-fn hello-python-6d6976845f-cwsln 0/1 ImagePullBackOff 0 3d19h
openfaas-fn hello-python-b577cb8dc-64wf5 0/1 ImagePullBackOff 0 3d9h
openfaas-fn hubstats-b6cd4dccc-z8tvl 1/1 Running 0 5d1h
openfaas-fn markdown-68f69f47c8-w5m47 1/1 Running 0 5d1h
openfaas-fn nodeinfo-d48cbbfcc-hfj79 1/1 Running 0 5d1h
openfaas-fn openfaas2-fun 1/1 Running 0 15s
openfaas-fn text-to-speech-74ffcdfd7-997t4 0/1 CrashLoopBackOff 2235 (3s ago) 4d5h
openfaas-fn wordcount-6489865566-cvfzr 1/1 Running 0 5d1h
openfaas alertmanager-88449c789-fq2rg 1/1 Running 0 3d1h
openfaas basic-auth-plugin-75fd7d69c5-zw4jh 1/1 Running 0 3d2h
openfaas gateway-5c4bb7c5d7-n8h27 2/2 Running 0 3d2h
openfaas grafana 1/1 Running 0 4d8h
openfaas nats-647b476664-hkr7p 1/1 Running 0 3d2h
openfaas prometheus-687648749f-tl8jp 1/1 Running 0 3d1h
openfaas queue-worker-7777ffd7f6-htx6t 1/1 Running 0 3d2h

$ kubectl get -o yaml -n openfaas-fn deploy/hello-openfaas2
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "6"
    prometheus.io.scrape: "false"
  creationTimestamp: "2022-03-17T12:47:35Z"
  generation: 6
  labels:
    faas_function: hello-openfaas2
  name: hello-openfaas2
  namespace: openfaas-fn
  resourceVersion: "400833"
  uid: 9c4e9d26-23af-4f93-8538-4e2d96f0d7e0
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      faas_function: hello-openfaas2
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
    type: RollingUpdate
  template:
    metadata:
      annotations:
        prometheus.io.scrape: "false"
      creationTimestamp: null
      labels:
        faas_function: hello-openfaas2
        uid: "969512830"
      name: hello-openfaas2
    spec:
      containers:
      - env:
        - name: fprocess
          value: python3 index.py
        image: wm/hello-openfaas2:0.1
        imagePullPolicy: Always
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /_/health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 2
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 1
        name: hello-openfaas2
        ports:
        - containerPort: 8080
          name: http
          protocol: TCP
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /_/health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 2
          periodSeconds: 2
          successThreshold: 1
          timeoutSeconds: 1
        resources: {}
        securityContext:
          allowPrivilegeEscalation: false
          readOnlyRootFilesystem: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      enableServiceLinks: false
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  conditions:
  - lastTransitionTime: "2022-03-17T12:47:35Z"
    lastUpdateTime: "2022-03-17T12:47:35Z"
    message: Deployment does not have minimum availability.
    reason: MinimumReplicasUnavailable
    status: "False"
    type: Available
  - lastTransitionTime: "2022-03-20T12:16:56Z"
    lastUpdateTime: "2022-03-20T12:16:56Z"
    message: ReplicaSet "hello-openfaas2-5d6c7c7fb4" has timed out progressing.
    reason: ProgressDeadlineExceeded
    status: "False"
    type: Progressing
  observedGeneration: 6
  replicas: 2
  unavailableReplicas: 2
  updatedReplicas: 1
In one shell:
docker@minikube:~$ docker run --name wm -ti wm/hello-openfaas2:0.1
2022/03/20 13:04:52 Version: 0.2.0 SHA: 56bf6aac54deb3863a690f5fc03a2a38e7d9e6ef
2022/03/20 13:04:52 Timeouts: read: 5s write: 5s hard: 0s health: 5s.
2022/03/20 13:04:52 Listening on port: 8080
...
and in another shell:
docker@minikube:~$ docker ps | grep wm
d7796286641c wm/hello-openfaas2:0.1 "fwatchdog" 3 minutes ago Up 3 minutes (healthy) 8080/tcp wm
ANSWER
Answered 2022-Mar-16 at 08:10

If your image has a `latest` tag, the Pod's `imagePullPolicy` will be automatically set to `Always`. Each time the pod is created, Kubernetes then tries to pull the newest image.

Try not tagging the image as `latest`, or manually set the Pod's `imagePullPolicy` to `Never`.
If you're using a static manifest to create the Pod, the setting looks like the following:
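A minimal sketch of such a manifest (the Pod and image names here are illustrative, taken from the question; the point is the `imagePullPolicy` field on the container):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-openfaas2
spec:
  containers:
  - name: hello-openfaas2
    image: wm/hello-openfaas2:0.1   # a non-latest tag, so the default policy would be IfNotPresent
    imagePullPolicy: Never          # never pull; only use an image already present on the node
```

With `imagePullPolicy: Never`, the kubelet will only use an image that already exists in the node's container runtime, which is what you want for images built inside `eval $(minikube docker-env)`.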
containers:
  - name: test-container
    image: testImage:latest
    imagePullPolicy: Never
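With imagePullPolicy: Never the image is never fetched from a registry, so it must already exist inside the cluster node's container runtime. On a minikube cluster like the one in this thread, two common ways to get a locally built image into the node are sketched below; the image name wm/hello-openfaas2:0.1 is taken from the docker images output above, and the exact commands assume a reasonably recent minikube:

```shell
# Option 1: point the local docker CLI at minikube's Docker daemon
# and (re)build there, so the image already lives inside the node.
eval $(minikube docker-env)
faas-cli build -f ./hello-openfaas2.yml

# Option 2: copy an already-built image from the host into minikube.
minikube image load wm/hello-openfaas2:0.1

# Verify the image is now visible inside the node.
minikube ssh -- docker images | grep hello-openfaas2
```

Either way, with imagePullPolicy: Never the kubelet then finds the image locally instead of attempting a pull.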
267
QUESTION
IndexError: tuple index out of range when I try to create an executable from a python script using auto-py-to-exe
Asked 2022-Feb-24 at 15:03
I have been trying out an open-sourced personal AI assistant script. The script works fine, but I want to create an executable so that I can gift it to one of my friends. However, when I try to create the executable using auto-py-to-exe, it fails with the error below:
Running auto-py-to-exe v2.10.1
Building directory: C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x
Provided command: pyinstaller --noconfirm --onedir --console --no-embed-manifest "C:/Users/Tarun/AppData/Local/Programs/Python/Python310/AI_Ass.py"
Recursion Limit is set to 5000
Executing: pyinstaller --noconfirm --onedir --console --no-embed-manifest C:/Users/Tarun/AppData/Local/Programs/Python/Python310/AI_Ass.py --distpath C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x\application --workpath C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x\build --specpath C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x

42681 INFO: PyInstaller: 4.6
42690 INFO: Python: 3.10.0
42732 INFO: Platform: Windows-10-10.0.19042-SP0
42744 INFO: wrote C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x\AI_Ass.spec
42764 INFO: UPX is not available.
42772 INFO: Extending PYTHONPATH with paths
['C:\\Users\\Tarun\\AppData\\Local\\Programs\\Python\\Python310']
43887 INFO: checking Analysis
43891 INFO: Building Analysis because Analysis-00.toc is non existent
43895 INFO: Initializing module dependency graph...
43915 INFO: Caching module graph hooks...
43975 INFO: Analyzing base_library.zip ...
54298 INFO: Processing pre-find module path hook distutils from 'C:\\Users\\Tarun\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path\\hook-distutils.py'.
54306 INFO: distutils: retargeting to non-venv dir 'C:\\Users\\Tarun\\AppData\\Local\\Programs\\Python\\Python310\\lib'
57474 INFO: Caching module dependency graph...
58088 INFO: running Analysis Analysis-00.toc
58132 INFO: Adding Microsoft.Windows.Common-Controls to dependent assemblies of final executable
 required by C:\Users\Tarun\AppData\Local\Programs\Python\Python310\python.exe
58365 INFO: Analyzing C:\Users\Tarun\AppData\Local\Programs\Python\Python310\AI_Ass.py
59641 INFO: Processing pre-safe import module hook urllib3.packages.six.moves from 'C:\\Users\\Tarun\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-urllib3.packages.six.moves.py'.
An error occurred while packaging
Traceback (most recent call last):
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\auto_py_to_exe\packaging.py", line 131, in package
 run_pyinstaller()
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\__main__.py", line 124, in run
 run_build(pyi_config, spec_file, **vars(args))
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\__main__.py", line 58, in run_build
 PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\build_main.py", line 782, in main
 build(specfile, kw.get('distpath'), kw.get('workpath'), kw.get('clean_build'))
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\build_main.py", line 714, in build
 exec(code, spec_namespace)
 File "C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x\AI_Ass.spec", line 7, in <module>
 a = Analysis(['C:/Users/Tarun/AppData/Local/Programs/Python/Python310/AI_Ass.py'],
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\build_main.py", line 277, in __init__
 self.__postinit__()
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\datastruct.py", line 155, in __postinit__
 self.assemble()
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\build_main.py", line 439, in assemble
 priority_scripts.append(self.graph.add_script(script))
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 265, in add_script
 self._top_script_node = super().add_script(pathname)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1433, in add_script
 self._process_imports(n)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
 target_modules = self.import_hook(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1505, in import_hook
 target_package, target_module_partname = self._find_head_package(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1684, in _find_head_package
 target_package = self._safe_import_module(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
 return super()._safe_import_module(module_basename, module_name, parent_package)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
 self._process_imports(n)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
 target_modules = self.import_hook(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1505, in import_hook
 target_package, target_module_partname = self._find_head_package(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1684, in _find_head_package
 target_package = self._safe_import_module(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
 return super()._safe_import_module(module_basename, module_name, parent_package)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
 self._process_imports(n)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
 target_modules = self.import_hook(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1505, in import_hook
 target_package, target_module_partname = self._find_head_package(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1684, in _find_head_package
 target_package = self._safe_import_module(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
 return super()._safe_import_module(module_basename, module_name, parent_package)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
 self._process_imports(n)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
 target_modules = self.import_hook(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1505, in import_hook
 target_package, target_module_partname = self._find_head_package(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1684, in _find_head_package
 target_package = self._safe_import_module(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
 return super()._safe_import_module(module_basename, module_name, parent_package)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
 self._process_imports(n)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
 target_modules = self.import_hook(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1518, in import_hook
 submodule = self._safe_import_module(head, mname, submodule)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
 return super()._safe_import_module(module_basename, module_name, parent_package)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
 self._process_imports(n)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
 target_modules = self.import_hook(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1518, in import_hook
 submodule = self._safe_import_module(head, mname, submodule)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
 return super()._safe_import_module(module_basename, module_name, parent_package)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
 self._process_imports(n)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
 target_modules = self.import_hook(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1518, in import_hook
 submodule = self._safe_import_module(head, mname, submodule)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
 return super()._safe_import_module(module_basename, module_name, parent_package)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2061, in _safe_import_module
 n = self._scan_code(module, co, co_ast)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2645, in _scan_code
 self._scan_bytecode(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2749, in _scan_bytecode
 for inst in util.iterate_instructions(module_code_object):
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\util.py", line 147, in iterate_instructions
 yield from iterate_instructions(constant)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\util.py", line 139, in iterate_instructions
 yield from get_instructions(code_object)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\dis.py", line 338, in _get_instructions_bytes
 argval, argrepr = _get_const_info(arg, constants)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\dis.py", line 292, in _get_const_info
 argval = const_list[const_index]
IndexError: tuple index out of range

Project output will not be moved to output folder
Complete.

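For context on where the traceback bottoms out: PyInstaller's modulegraph walks every code object in a module, including code objects nested inside co_consts, and disassembles each one with dis.get_instructions; the IndexError is raised during that disassembly. A simplified, self-contained sketch of that walk (illustrative only, not PyInstaller's actual implementation):

```python
import dis

def iterate_instructions(code_object):
    """Yield the instructions of a code object and, recursively, of every
    code object nested in its constants -- the same kind of walk the
    traceback shows in PyInstaller's modulegraph/util.py."""
    yield from dis.get_instructions(code_object)
    for const in code_object.co_consts:
        if hasattr(const, "co_code"):  # nested function/comprehension bodies
            yield from iterate_instructions(const)

def outer():
    def inner():
        return "hello"
    return inner

# Collect the opcode names seen across outer() and its nested inner().
opnames = {ins.opname for ins in iterate_instructions(outer.__code__)}
```

On a consistent toolchain this walk runs cleanly; the log above fails at exactly this step (in dis._get_const_info) while PyInstaller 4.6 is scanning Python 3.10.0 bytecode, which is why a mismatch between the PyInstaller release and the Python version being analyzed is a common cause of this error.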
I understand there is already a thread about a similar issue, but it doesn't solve my problem, hence I am seeking help here.
I really have no idea why the error occurs or how to resolve it. I am pasting the script below for your reference. Can someone please help? Thank you in advance.
Running auto-py-to-exe v2.10.1
Building directory: C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x
Provided command: pyinstaller --noconfirm --onedir --console --no-embed-manifest "C:/Users/Tarun/AppData/Local/Programs/Python/Python310/AI_Ass.py"
Recursion Limit is set to 5000
Executing: pyinstaller --noconfirm --onedir --console --no-embed-manifest C:/Users/Tarun/AppData/Local/Programs/Python/Python310/AI_Ass.py --distpath C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x\application --workpath C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x\build --specpath C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x

42681 INFO: PyInstaller: 4.6
42690 INFO: Python: 3.10.0
42732 INFO: Platform: Windows-10-10.0.19042-SP0
42744 INFO: wrote C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x\AI_Ass.spec
42764 INFO: UPX is not available.
42772 INFO: Extending PYTHONPATH with paths
['C:\\Users\\Tarun\\AppData\\Local\\Programs\\Python\\Python310']
43887 INFO: checking Analysis
43891 INFO: Building Analysis because Analysis-00.toc is non existent
43895 INFO: Initializing module dependency graph...
43915 INFO: Caching module graph hooks...
43975 INFO: Analyzing base_library.zip ...
54298 INFO: Processing pre-find module path hook distutils from 'C:\\Users\\Tarun\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\PyInstaller\\hooks\\pre_find_module_path\\hook-distutils.py'.
54306 INFO: distutils: retargeting to non-venv dir 'C:\\Users\\Tarun\\AppData\\Local\\Programs\\Python\\Python310\\lib'
57474 INFO: Caching module dependency graph...
58088 INFO: running Analysis Analysis-00.toc
58132 INFO: Adding Microsoft.Windows.Common-Controls to dependent assemblies of final executable
 required by C:\Users\Tarun\AppData\Local\Programs\Python\Python310\python.exe
58365 INFO: Analyzing C:\Users\Tarun\AppData\Local\Programs\Python\Python310\AI_Ass.py
59641 INFO: Processing pre-safe import module hook urllib3.packages.six.moves from 'C:\\Users\\Tarun\\AppData\\Local\\Programs\\Python\\Python310\\lib\\site-packages\\PyInstaller\\hooks\\pre_safe_import_module\\hook-urllib3.packages.six.moves.py'.
An error occurred while packaging
Traceback (most recent call last):
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\auto_py_to_exe\packaging.py", line 131, in package
 run_pyinstaller()
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\__main__.py", line 124, in run
 run_build(pyi_config, spec_file, **vars(args))
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\__main__.py", line 58, in run_build
 PyInstaller.building.build_main.main(pyi_config, spec_file, **kwargs)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\build_main.py", line 782, in main
 build(specfile, kw.get('distpath'), kw.get('workpath'), kw.get('clean_build'))
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\build_main.py", line 714, in build
 exec(code, spec_namespace)
 File "C:\Users\Tarun\AppData\Local\Temp\tmpjaw1ky1x\AI_Ass.spec", line 7, in <module>
 a = Analysis(['C:/Users/Tarun/AppData/Local/Programs/Python/Python310/AI_Ass.py'],
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\build_main.py", line 277, in __init__
 self.__postinit__()
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\datastruct.py", line 155, in __postinit__
 self.assemble()
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\building\build_main.py", line 439, in assemble
 priority_scripts.append(self.graph.add_script(script))
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 265, in add_script
 self._top_script_node = super().add_script(pathname)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1433, in add_script
 self._process_imports(n)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
 target_modules = self.import_hook(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1505, in import_hook
 target_package, target_module_partname = self._find_head_package(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1684, in _find_head_package
 target_package = self._safe_import_module(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
 return super()._safe_import_module(module_basename, module_name, parent_package)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
 self._process_imports(n)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
 target_modules = self.import_hook(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1505, in import_hook
 target_package, target_module_partname = self._find_head_package(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1684, in _find_head_package
 target_package = self._safe_import_module(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
 return super()._safe_import_module(module_basename, module_name, parent_package)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
 self._process_imports(n)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
 target_modules = self.import_hook(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1505, in import_hook
 target_package, target_module_partname = self._find_head_package(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1684, in _find_head_package
 target_package = self._safe_import_module(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
 return super()._safe_import_module(module_basename, module_name, parent_package)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
 self._process_imports(n)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
 target_modules = self.import_hook(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1505, in import_hook
 target_package, target_module_partname = self._find_head_package(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1684, in _find_head_package
 target_package = self._safe_import_module(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
 return super()._safe_import_module(module_basename, module_name, parent_package)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
 self._process_imports(n)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
 target_modules = self.import_hook(
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1518, in import_hook
 submodule = self._safe_import_module(head, mname, submodule)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
 return super()._safe_import_module(module_basename, module_name, parent_package)
 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
108 self._process_imports(n)
109 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
110 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
111 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
112 target_modules = self.import_hook(
113 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1518, in import_hook
114 submodule = self._safe_import_module(head, mname, submodule)
115 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
116 return super()._safe_import_module(module_basename, module_name, parent_package)
117 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2062, in _safe_import_module
118 self._process_imports(n)
119 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2850, in _process_imports
120 target_module = self._safe_import_hook(*import_info, **kwargs)[0]
121 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2301, in _safe_import_hook
122 target_modules = self.import_hook(
123 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 1518, in import_hook
124 submodule = self._safe_import_module(head, mname, submodule)
125 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\depend\analysis.py", line 387, in _safe_import_module
126 return super()._safe_import_module(module_basename, module_name, parent_package)
127 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2061, in _safe_import_module
128 n = self._scan_code(module, co, co_ast)
129 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2645, in _scan_code
130 self._scan_bytecode(
131 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\modulegraph.py", line 2749, in _scan_bytecode
132 for inst in util.iterate_instructions(module_code_object):
133 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\util.py", line 147, in iterate_instructions
134 yield from iterate_instructions(constant)
135 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\site-packages\PyInstaller\lib\modulegraph\util.py", line 139, in iterate_instructions
136 yield from get_instructions(code_object)
137 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\dis.py", line 338, in _get_instructions_bytes
138 argval, argrepr = _get_const_info(arg, constants)
139 File "C:\Users\Tarun\AppData\Local\Programs\Python\Python310\lib\dis.py", line 292, in _get_const_info
140 argval = const_list[const_index]
141IndexError: tuple index out of range
142
143Project output will not be moved to output folder
144Complete.
# importing libraries

import speech_recognition as sr
import pyttsx3
import datetime
import wikipedia
import webbrowser
import os
import time
import subprocess
from ecapture import ecapture as ec
import wolframalpha
import json
import requests

# setting up the speech engine
engine = pyttsx3.init('sapi5')
voices = engine.getProperty('voices')
engine.setProperty('voice', voices[1].id)  # pass the voice id itself, not the literal string 'voices[1].id'

def speak(text):
    engine.say(text)
    engine.runAndWait()

# Greet the user
def wishMe():
    hour = datetime.datetime.now().hour
    if 0 <= hour < 12:
        speak("Hello, Good Morning")
        print("Hello, Good Morning")
    elif 12 <= hour < 18:
        speak("Hello, Good Afternoon")
        print("Hello, Good Afternoon")
    else:
        speak("Hello, Good Evening")
        print("Hello, Good Evening")

# Listen on the microphone and transcribe one command
def takeCommand():
    r = sr.Recognizer()
    with sr.Microphone() as source:
        print("Listening...")
        audio = r.listen(source)

    try:
        statement = r.recognize_google(audio, language='en-in')
        print(f"user said: {statement}\n")
    except Exception:
        speak("Pardon me, please say that again")
        return "None"
    return statement

print("Loading your AI personal assistant Friday")
speak("Loading your AI personal assistant Friday")
wishMe()

# main loop
if __name__ == '__main__':

    while True:
        speak("Tell me how can I help you now?")
        statement = takeCommand().lower()
        if statement == "none":  # recognition failed; compare against the sentinel, not 0
            continue

        if "good bye" in statement or "ok bye" in statement or "stop" in statement:
            speak('your personal assistant Friday is shutting down, Good bye')
            print('your personal assistant Friday is shutting down, Good bye')
            break

        if 'wikipedia' in statement:
            speak('Searching Wikipedia...')
            statement = statement.replace("wikipedia", "")
            results = wikipedia.summary(statement, sentences=10)
            webbrowser.open_new_tab("https://en.wikipedia.org/wiki/" + statement)
            speak("According to Wikipedia")
            print(results)
            speak(results)

        elif 'open youtube' in statement:
            webbrowser.register('chrome', None,
                webbrowser.BackgroundBrowser("C://Program Files (x86)//Google//Chrome//Application//chrome.exe"))
            webbrowser.get('chrome').open_new_tab("https://www.youtube.com")
            # webbrowser.open_new_tab("https://www.youtube.com")
            speak("youtube is open now")
            time.sleep(5)

        elif 'open google' in statement:
            webbrowser.open_new_tab("https://www.google.com")
            speak("Google chrome is open now")
            time.sleep(5)

        elif 'open gmail' in statement:
            webbrowser.open_new_tab("https://mail.google.com")  # use a full URL; "gmail.com" alone may not open
            speak("Google Mail open now")
            time.sleep(5)

        elif 'time' in statement:
            strTime = datetime.datetime.now().strftime("%H:%M:%S")
            speak(f"the time is {strTime}")

        elif 'news' in statement:
            webbrowser.open_new_tab("https://timesofindia.indiatimes.com/home/headlines")
            speak('Here are some headlines from the Times of India, Happy reading')
            time.sleep(6)

        elif "camera" in statement or "take a photo" in statement:
            ec.capture(0, "robo camera", "img.jpg")

        elif 'search' in statement:
            statement = statement.replace("search", "")
            webbrowser.open_new_tab(statement)
            time.sleep(5)

        elif 'who are you' in statement or 'what can you do' in statement:
            speak('I am Friday version 1 point O, your personal assistant. I am programmed for minor tasks like '
                  'opening youtube, google chrome, gmail and stackoverflow, telling the time, taking a photo, '
                  'searching wikipedia, predicting the weather in different cities, getting top headline news from '
                  'times of india, and you can ask me computational or geographical questions too!')

        elif "who made you" in statement or "who created you" in statement or "who discovered you" in statement:
            speak("I was built by Mirthula")
            print("I was built by Mirthula")

        elif "log off" in statement or "sign out" in statement:
            speak("Ok, your pc will log off in 10 seconds, make sure you exit from all applications")
            time.sleep(10)  # actually wait the announced 10 seconds before logging off
            subprocess.call(["shutdown", "/l"])

time.sleep(3)
ANSWER
Answered 2021-Nov-05 at 02:20
42681 INFO: PyInstaller: 4.6
42690 INFO: Python: 3.10.0

There's the issue. Python 3.10.0 has a bug that trips up PyInstaller 4.6. The problem isn't you or PyInstaller. Try converting it using Python 3.9.7 instead. Ironic, considering 3.10.0 was supposed to be a bugfix update.
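The advice above can be turned into a fail-fast guard run before kicking off a build. This is only a sketch: the `check_build_interpreter` helper is made up for illustration, and the only fact it encodes is the pairing reported here (PyInstaller 4.6's bytecode scan dying with `IndexError: tuple index out of range` under CPython 3.10.0).

```python
import sys

def check_build_interpreter():
    """Refuse to build under CPython 3.10.0, where PyInstaller 4.6's
    bytecode scan is known to crash with an IndexError."""
    if sys.version_info[:3] == (3, 10, 0):
        raise RuntimeError(
            "CPython 3.10.0 detected; PyInstaller 4.6 may fail here. "
            "Build with 3.9.x (e.g. 3.9.7) instead."
        )
    # Any other version passes the check; return it for logging.
    return sys.version_info[:3]

if __name__ == "__main__":
    print("Interpreter OK:", check_build_interpreter())
```

Run it with the same interpreter that auto-py-to-exe is installed into, so the mismatch surfaces immediately instead of deep inside PyInstaller's analysis phase.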
QUESTION
Google Actions Builder stops execution when selecting a visual item from a List
Asked 2022-Feb-23 at 15:32
I'm pulling my hair out here. I have a Google Assistant application that I built with Jovo 4 and Google Actions Builder.
The goal is to create a HelpScene that shows some options which, on selection, explain the possibilities/features of the app. This is the response I return from my webhook. (This is Jovo code, but that doesn't matter, as it returns JSON when the Assistant calls the webhook.)
@Handle(GoogleAssistantHandles.onScene('HelpScene'))
showHelpList() {
  return this.$send({
    platforms: {
      googleAssistant: {
        nativeResponse: {
          scene: {
            name: this.jovo.$googleAssistant?.$request.scene?.name,
            slots: {},
            next: {
              name: 'MainScene',
            },
          },
          session: {
            id: 'session_id',
            languageCode: 'nl-BE',
            params: {},
            typeOverrides: [
              {
                name: 'prompt_option',
                synonym: {
                  entries: [
                    {
                      name: 'ITEM_1',
                      synonyms: ['Item 1', 'First item'],
                      display: {
                        title: 'Item #1',
                        description: 'Description of Item #1',
                        image: {
                          alt: 'Google Assistant logo',
                          height: 0,
                          url: 'https://developers.google.com/assistant/assistant_96.png',
                          width: 0,
                        },
                      },
                    },
                    {
                      name: 'ITEM_2',
                      synonyms: ['Item 2', 'Second item'],
                      display: {
                        title: 'Item #2',
                        description: 'Description of Item #2',
                        image: {
                          alt: 'Google Assistant logo',
                          height: 0,
                          url: 'https://developers.google.com/assistant/assistant_96.png',
                          width: 0,
                        },
                      },
                    },
                    {
                      name: 'ITEM_3',
                      synonyms: ['Item 3', 'Third item'],
                      display: {
                        title: 'Item #3',
                        description: 'Description of Item #3',
                        image: {
                          alt: 'Google Assistant logo',
                          height: 0,
                          url: 'https://developers.google.com/assistant/assistant_96.png',
                          width: 0,
                        },
                      },
                    },
                    {
                      name: 'ITEM_4',
                      synonyms: ['Item 4', 'Fourth item'],
                      display: {
                        title: 'Item #4',
                        description: 'Description of Item #4',
                        image: {
                          alt: 'Google Assistant logo',
                          height: 0,
                          url: 'https://developers.google.com/assistant/assistant_96.png',
                          width: 0,
                        },
                      },
                    },
                  ],
                },
                typeOverrideMode: 'TYPE_REPLACE',
              },
            ],
          },
          prompt: {
            override: false,
            content: {
              collection: {
                items: [
                  { key: 'ITEM_1' },
                  { key: 'ITEM_2' },
                  { key: 'ITEM_3' },
                  { key: 'ITEM_4' },
                ],
                subtitle: 'List subtitle',
                title: 'List title',
              },
            },
            firstSimple: {
              speech: 'This is a list.',
              text: 'This is a list.',
            },
          },
        },
      },
    },
  });
}
I created a HelpScene which pulls my options from my webhook.
In my slot filling, this is the configuration.
When I use the simulator, the options from my webhook are shown perfectly. But when I click an item in the list, the app just stops working: "YourApp is currently not responding".
At first I thought it had something to do with my webhook, so I changed the behaviour of the "on slot is filled" condition to prompt something directly from Google Actions Builder, but the behaviour is still not what I want: the app just stops working.
Any ideas what I'm doing wrong?
Thanks in advance!
ANSWER
Answered 2022-Feb-23 at 15:32
Okay, after days of searching, I finally figured it out.
It did have something to do with the Jovo framework/setup and/or the scene parameter in the native response.
This is my component, in which I redirect new users to the HelpScene. This scene should show multiple cards in a list/collection that the user can tap to receive more information about the application's features.
@Component()
export class WelcomeComponent extends BaseComponent {
  async START(): Promise<void> {
    const isNewUser = true;
    if (isNewUser && this.$device.supports(Capability.Screen)) {
      return this.$send(NextSceneOutput, {
        name: 'HelpScene',
        message: 'Hi, I noticed you are a new user, let me walk you through some options.',
      });
    }

    return this.$send('Welcome!');
  }

  @Handle(GoogleAssistantHandles.onScene('HelpScene'))
  help() {
    const sessionData = this.$request.getSession();
    if (sessionData && sessionData.prompt_option) {
      return this.$send(NextSceneOutput, {
        name: 'MainScene',
        message: `You picked option ${sessionData.prompt_option}. This is some info about it ... What do you want to do now?`,
      });
    }

    return this.$send({
      platforms: {
        googleAssistant: {
          nativeResponse: {
            session: {
              id: 'session_id',
              languageCode: '',
              params: {},
              typeOverrides: [
                {
                  name: 'HelpOptionType',
                  typeOverrideMode: 'TYPE_REPLACE',
                  synonym: {
                    entries: [
                      {
                        name: 'ITEM_1',
                        synonyms: ['Item 1', 'First item'],
                        display: {
                          title: 'Item #1',
                          description: 'Description of Item #1',
                          image: {
                            alt: 'Google Assistant logo',
                            height: 0,
                            url: 'https://developers.google.com/assistant/assistant_96.png',
                            width: 0,
                          },
                        },
                      },
                      {
                        name: 'ITEM_2',
                        synonyms: ['Item 2', 'Second item'],
                        display: {
                          title: 'Item #2',
                          description: 'Description of Item #2',
                          image: {
                            alt: 'Google Assistant logo',
                            height: 0,
                            url: 'https://developers.google.com/assistant/assistant_96.png',
                            width: 0,
                          },
                        },
                      },
                      {
                        name: 'ITEM_3',
                        synonyms: ['Item 3', 'Third item'],
                        display: {
                          title: 'Item #3',
                          description: 'Description of Item #3',
                          image: {
                            alt: 'Google Assistant logo',
                            height: 0,
                            url: 'https://developers.google.com/assistant/assistant_96.png',
                            width: 0,
                          },
                        },
                      },
                      {
                        name: 'ITEM_4',
                        synonyms: ['Item 4', 'Fourth item'],
                        display: {
                          title: 'Item #4',
                          description: 'Description of Item #4',
                          image: {
                            alt: 'Google Assistant logo',
                            height: 0,
                            url: 'https://developers.google.com/assistant/assistant_96.png',
                            width: 0,
                          },
                        },
                      },
                    ],
                  },
                },
              ],
            },
            prompt: {
              override: false,
              content: {
                list: {
                  items: [
                    { key: 'ITEM_1' },
                    { key: 'ITEM_2' },
                    { key: 'ITEM_3' },
                    { key: 'ITEM_4' },
                  ],
                  subtitle: 'List subtitle',
                  title: 'List title',
                },
              },
              firstSimple: {
                speech: 'This is a list.',
                text: 'This is a list.',
              },
            },
          },
        },
      },
    });
  }

  // ...other intents...
}
In AoG I made two scenes: a MainScene, on which a user enters the app, and a HelpScene, which looks like this (YAML config). The goal of the HelpScene is only to be used for slot filling on the different options; afterwards, a user should go back to the MainScene.
"conditionalEvents":
- "condition": "scene.slots.status == \"FINAL\""
  "handler":
    "webhookHandler": "Jovo"
"slots":
- "commitBehavior":
    "writeSessionParam": "prompt_option"
  "name": "prompt_option"
  "promptSettings":
    "initialPrompt":
      "webhookHandler": "Jovo"
  "required": true
  "type":
    "name": "HelpOptionType"
As you can see in my help() method, I just check whether the session param is filled. If it is, I redirect the user to the MainScene, but first give a response about the chosen option.
QUESTION
How to use multi-language in 'gTTS' for a single input line?
Asked 2022-Jan-29 at 07:05
I want to convert text to speech from a document in which multiple languages are included. When I run the following code, I have trouble getting each language recorded clearly. How can I save such mixed-language text as audio clearly?
from gtts import gTTS

mytext = 'Welcome to gtts! আজ একটি ভাল দিন। tumi kemon acho? ٱلْحَمْدُ لِلَّٰهِ'
language = 'ar'  # Arabic
myobj = gTTS(text=mytext, tld='co.in', lang=language, slow=False)
myobj.save("audio.mp3")
ANSWER
Answered 2022-Jan-29 at 07:05
It's not enough to use just text to speech, since it can work with one language only.
To solve this problem we need to detect the language for each part of the sentence, then run it through text to speech and append it to our final spoken output.
It would be ideal to use some neural network (there are plenty) to do this categorization for you.
Just for the sake of a proof of concept, I used googletrans to detect the language for each part of the sentence and gtts to make an mp3 file from it.
It's not bulletproof, especially with Arabic text: googletrans sometimes detects a different language code that is not recognized by gtts. For that reason we have to use a lang_code_table to pick a proper language code that works with gtts.
Here is a working example:
from googletrans import Translator
from gtts import gTTS

input_text = "Welcome to gtts! আজ একটি ভাল দিন। tumi kemon acho? ٱلْحَمْدُ لِلَّٰه"
words = input_text.split(" ")
translator = Translator()
language, sentence = None, ""

lang_code_table = {"sd": "ar"}

with open('output.mp3', 'wb') as ff:
    for word in words:
        if word == " ":
            continue
        # Detect language of current word
        word_language = translator.detect(word).lang

        if word_language == language:
            # Same language, append word to the sentence
            sentence += " " + word
        else:
            if language is None:
                # No language set yet, initialize and continue
                language, sentence = word_language, word
                continue

            if word.endswith(("?", ".", "!")):
                # If the word ends with a punctuation mark, it should be part of the previous sentence
                sentence += " " + word
                continue

            # We have a whole previous sentence, turn it into speech and append to the mp3 file
            gTTS(text=sentence, lang=lang_code_table.get(language, language), slow=False).write_to_fp(ff)

            # Continue with the other language
            language, sentence = word_language, word

    if language and sentence:
        # Append the last detected sentence
        gTTS(text=sentence, lang=lang_code_table.get(language, language), slow=False).write_to_fp(ff)
It's obviously not fast and won't be a good fit for longer texts.
It also needs a better tokenizer and proper error handling.
Again, it's just a proof of concept.
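On the "better tokenizer" point, one standard-library-only direction (not part of the original answer; the function names are invented for this sketch) is to group consecutive words by Unicode script before detection, so each run is detected and synthesized once instead of calling the detector per word:

```python
import unicodedata

def script_of(word):
    """Rough script tag from the Unicode name of the first letter, e.g. 'LATIN'."""
    for ch in word:
        if ch.isalpha():
            return unicodedata.name(ch, "OTHER").split(" ")[0]
    return "OTHER"

def group_runs(text):
    """Group consecutive words of the same script into (script, chunk) runs."""
    runs, current_script, current = [], None, []
    for word in text.split():
        s = script_of(word)
        if current and s in (current_script, "OTHER"):
            current.append(word)
        else:
            if current:
                runs.append((current_script, " ".join(current)))
            current_script, current = s, [word]
    if current:
        runs.append((current_script, " ".join(current)))
    return runs

print(group_runs("Hello world আজ ভাল hi"))
# prints [('LATIN', 'Hello world'), ('BENGALI', 'আজ ভাল'), ('LATIN', 'hi')]
```

Each run could then be passed once to language detection and gTTS, cutting the number of network calls roughly from one per word to one per script change.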
QUESTION
Assigning True/False if a token is present in a data-frame
Asked 2022-Jan-06 at 12:38
My current data-frame is:
  | articleID | keywords                                                |
  |:----------|:-------------------------------------------------------:|
0 | 58b61d1d  | ['Second Avenue (Manhattan, NY)']                       |
1 | 58b6393b  | ['Crossword Puzzles']                                   |
2 | 58b6556e  | ['Workplace Hazards and Violations', 'Trump, Donald J'] |
3 | 58b657fa  | ['Trump, Donald J', 'Speeches and Statements']          |
I want a data-frame similar to the following, where a column is added based on whether the Trump token 'Trump, Donald J' is mentioned in the keywords; if so, the row is assigned True:
  | articleID | keywords                                                | trumpMention |
  |:----------|:-------------------------------------------------------:|-------------:|
0 | 58b61d1d  | ['Second Avenue (Manhattan, NY)']                       | False        |
1 | 58b6393b  | ['Crossword Puzzles']                                   | False        |
2 | 58b6556e  | ['Workplace Hazards and Violations', 'Trump, Donald J'] | True         |
3 | 58b657fa  | ['Trump, Donald J', 'Speeches and Statements']          | True         |
I have tried multiple ways using DataFrame functions but cannot achieve the result I want. Some of the ways I've tried are:
df['trumpMention'] = np.where(any(df['keywords']) == 'Trump, Donald J', True, False)
or
df['trumpMention'] = df['keywords'].apply(lambda x: any(token == 'Trump, Donald J') for token in x)
or
lst = ['Trump, Donald J']
df['trumpMention'] = df['keywords'].apply(lambda x: ([ True for token in x if any(token in lst)]))
Raw input:
df = pd.DataFrame({'articleID': ['58b61d1d', '58b6393b', '58b6556e', '58b657fa'],
                   'keywords': [['Second Avenue (Manhattan, NY)'],
                                ['Crossword Puzzles'],
                                ['Workplace Hazards and Violations', 'Trump, Donald J'],
                                ['Trump, Donald J', 'Speeches and Statements']],
                   'trumpMention': [False, False, True, True]})
ANSWER
Answered 2022-Jan-06 at 12:13
Try:
df["trumpMention"] = df["keywords"].apply(lambda x: "Trump, Donald J" in x)
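For completeness, the accepted one-liner can be checked against the sample data the asker posted (a self-contained run):

```python
import pandas as pd

# The sample frame from the question.
df = pd.DataFrame({
    'articleID': ['58b61d1d', '58b6393b', '58b6556e', '58b657fa'],
    'keywords': [['Second Avenue (Manhattan, NY)'],
                 ['Crossword Puzzles'],
                 ['Workplace Hazards and Violations', 'Trump, Donald J'],
                 ['Trump, Donald J', 'Speeches and Statements']],
})

# `in` tests list membership for each row, so no inner loop or np.where is needed.
df['trumpMention'] = df['keywords'].apply(lambda kw: 'Trump, Donald J' in kw)
print(df['trumpMention'].tolist())  # prints [False, False, True, True]
```

The earlier attempts fail because `any(df['keywords'])` evaluates the whole column as a truthy check rather than testing each row's list.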
QUESTION
speechSynthesis.getVoices (Web Speech API) doesn't show some of the locally installed voices
Asked 2021-Dec-31 at 08:19
I'm trying to use the Web Speech API to read text on my web page. But I found that some of the SAPI5 voices installed in my Windows 10 would not show up in the output of speechSynthesis.getVoices(), including the Microsoft Eva Mobile voice "unlocked" on Windows 10 by importing a registry file. These voices work fine in local TTS programs like Balabolka, but they just don't show up in the browser. Are there any specific rules by which the browser chooses whether to list the voices?
ANSWER
Answered 2021-Dec-31 at 08:19
OK, I found out what was wrong. I was using Microsoft Edge, and it seems that Edge only shows some of the Microsoft voices. If I use Firefox, the other installed voices also show up. So it was Edge's fault.
QUESTION
Combining Object Detection with Text to Speech Code
Asked 2021-Dec-28 at 16:46
I am trying to write an object detection + text-to-speech program to detect objects and produce a voice output on the Raspberry Pi 4. However, right now I am trying to write a simple Python script that incorporates both elements into a single .py file, preferably as a function. I will then run this script on the Raspberry Pi. I want to give credit to Murtaza's Workshop "Object Detection OpenCV Python | Easy and Fast (2020)" and https://pypi.org/project/pyttsx3/ for the text-to-speech documentation for pyttsx3. I have attached the code below. I have tried running the program and I always keep getting errors with the text-to-speech code (the commented-out lines, for reference). I believe it is some looping error, but I just can't seem to get the program to run continuously. For instance, if I run the code without the TTS part, it works fine. Otherwise, it runs for perhaps 3-5 seconds and suddenly stops. I am a beginner but highly passionate about computer vision, and any help is appreciated!
import cv2
#import pyttsx3

cap = cv2.VideoCapture(0)
cap.set(3, 640)
cap.set(4, 480)

classNames = []
classFile = 'coco.names'
with open(classFile, 'rt') as f:
    classNames = [line.rstrip() for line in f]

configPath = 'ssd_mobilenet_v3_large_coco_2020_01_14.pbtxt'
weightsPath = 'frozen_inference_graph.pb'

net = cv2.dnn_DetectionModel(weightsPath, configPath)
net.setInputSize(320, 320)
net.setInputScale(1.0 / 127.5)
net.setInputMean((127.5, 127.5, 127.5))
net.setInputSwapRB(True)

while True:
    success, img = cap.read()
    classIds, confs, bbox = net.detect(img, confThreshold=0.45)
    if len(classIds) != 0:
        for classId, confidence, box in zip(classIds.flatten(), confs.flatten(), bbox):
            className = classNames[classId-1]
            #engine = pyttsx3.init()
            #str1 = str(className)
            #engine.say(str1 + "detected")
            #engine.runAndWait()
            cv2.rectangle(img, box, color=(0, 255, 0), thickness=2)
            cv2.putText(img, classNames[classId-1].upper(), (box[0]+10, box[1]+30),
                        cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)
            cv2.putText(img, str(round(confidence * 100, 2)), (box[0]+200, box[1]+30),
                        cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow('Output', img)
    cv2.waitKey(1)
Here is a screenshot of my code.
Here is a link to the download files needed to run the code, in case they are needed.
Here is the error: /Users/venuchannarayappa/PycharmProjects/ObjectDetector/venv/bin/python /Users/venuchannarayappa/PycharmProjects/ObjectDetector/main.py
Traceback (most recent call last): File "/Users/venuchannarayappa/PycharmProjects/ObjectDetector/main.py", line 24, in
classIds, confs, bbox = net.detect(img, confThreshold=0.45)
cv2.error: OpenCV(4.5.4) /Users/runner/work/opencv-python/opencv-python/opencv/modules/imgproc/src/resize.cpp:4051: error: (-215:Assertion failed) !ssize.empty() in function 'resize'
Process finished with exit code 1
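As an aside, this `!ssize.empty()` assertion is typically raised when `cap.read()` returns an empty frame (`success` is False and `img` is None) and that empty frame is passed to `net.detect()`. A minimal, hypothetical sketch of guarding against it, using a stand-in for `cv2.VideoCapture.read` so the snippet runs without a camera:

```python
# cap.read() returns (False, None) when the camera yields no frame;
# passing None on to net.detect() triggers the resize assertion above.
def safe_frames(read):
    """Yield only valid frames from a cap.read()-style callable."""
    while True:
        success, img = read()
        if not success or img is None:
            break  # on a live camera you might `continue` to retry instead
        yield img

# Stand-in for cv2.VideoCapture.read to keep the sketch self-contained:
frames = iter([(True, "frame1"), (True, "frame2"), (False, None)])
result = list(safe_frames(lambda: next(frames)))
print(result)  # prints ['frame1', 'frame2']
```

In the real loop, the equivalent is an `if not success: continue` check right after `cap.read()` and before `net.detect()`.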
Link to video output recorded through iphone: https://www.icloud.com/iclouddrive/03jGfqy7-A9DKfekcu3wjk0rA#IMG_4932
Sorry for such a long post! I was debugging my code for the past few hours and I think I got it to work. I changed only the main while loop; the rest of the code is the same. The program seems to run continuously for me. I would appreciate any comments if there are any difficulties in running it.
import pyttsx3  # now imported for real (it was commented out in the original version)

engine = pyttsx3.init()
while True:
    success, img = cap.read()
    #print(success)
    #print(img)
    #print(img.shape)
    classIds, confs, bbox = net.detect(img, confThreshold=0.45)
    if len(classIds) != 0:
        for classId, confidence, box in zip(classIds.flatten(), confs.flatten(), bbox):
            className = classNames[classId - 1]
            #print(len(classIds))
            str1 = str(className)
            #print(str1)
            engine.say(str1 + "detected")
            engine.runAndWait()
            cv2.rectangle(img, box, color=(0, 255, 0), thickness=2)
            cv2.putText(img, classNames[classId-1].upper(), (box[0]+10, box[1]+30),
                        cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)
            cv2.putText(img, str(round(confidence * 100, 2)), (box[0]+200, box[1]+30),
                        cv2.FONT_HERSHEY_COMPLEX, 1, (0, 255, 0), 2)
            continue
    cv2.imshow('Output', img)
    cv2.waitKey(1)
I am planning to run this code on the Raspberry Pi and to install OpenCV using this command: pip3 install opencv-python. However, I am not sure how to install pyttsx3, since I think I need to install it from source. Please let me know if there is a simple method to install pyttsx3.
Update: As of December 27th, I have installed all necessary packages and my code is now functional.
ANSWER
Answered 2021-Dec-28 at 16:46
I installed pyttsx3 using these two commands in the terminal on the Raspberry Pi:
- sudo apt update && sudo apt install espeak ffmpeg libespeak1
- pip install pyttsx3
I followed the video youtube.com/watch?v=AWhDDl-7Iis&ab_channel=AiPhile to install pyttsx3. My functional code is listed above. My question is resolved, but hopefully this is useful to anyone looking to write a similar program. I have made minor tweaks to my code.
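One caveat with the working code above: each `engine.runAndWait()` call blocks the capture loop for the length of the utterance, which matches the original "runs a few seconds then stops responding" symptom. A common pattern is to push detected labels onto a queue and speak them from a worker thread. Below is a standard-library-only sketch of that pattern; `speak` is a stand-in for the pyttsx3 calls:

```python
import queue
import threading

spoken = []

def speak(text):
    # Stand-in for: engine.say(text); engine.runAndWait()
    spoken.append(text)

q = queue.Queue()

def tts_worker():
    while True:
        label = q.get()
        if label is None:  # sentinel: shut the worker down
            break
        speak(label + " detected")

worker = threading.Thread(target=tts_worker, daemon=True)
worker.start()

# The detection loop would enqueue labels instead of speaking inline:
for label in ["person", "cup"]:
    q.put(label)
q.put(None)
worker.join()
print(spoken)  # prints ['person detected', 'cup detected']
```

This keeps `cap.read()` and `cv2.imshow()` running at frame rate while speech plays in the background; deduplicating repeated labels before enqueueing would stop the queue from growing unboundedly.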
QUESTION
Yielding values from consecutive parallel parse functions via meta in Scrapy
Asked 2021-Dec-20 at 07:53
In my scrapy code I'm trying to yield the following figures from parliament's website, where all the members of parliament (MPs) are listed. Opening the links for each MP, I'm making parallel requests to get the figures I'm trying to count. I intend to yield each of the three figures below together with the name and the party of the MP.
Here are the figures I'm trying to scrape
- How many bill proposals that each MP has their signature on
- How many question proposals that each MP has their signature on
- How many times each MP spoke in the parliament
In order to count and yield how many bills each member of parliament has their signature on, I'm trying to write a scraper on the members of parliament which works with 3 layers:
- Starting with the link where all MPs are listed
- From (1) accessing the individual page of each MP where the three information defined above is displayed
- 3a) Requesting the page with bill proposals and counting them with the len function
  3b) Requesting the page with question proposals and counting them with the len function
  3c) Requesting the page with speeches and counting them with the len function
What I want: I want to yield the results of 3a, 3b and 3c together with the name and the party of the MP in the same row.
Problem 1) When I export the output to csv, it only creates fields for the speech count, name and party. It doesn't show me the fields for bill proposals and question proposals.
Problem 2) There are two empty values for each MP, which I guess correspond to the values I described above in Problem 1.
Problem 3) What is a better way of restructuring my code to output the three values on the same line, rather than printing each MP three times, once for each value that I'm scraping?
from scrapy import Spider
from scrapy.http import Request

import logging


class MvSpider(Spider):
    name = 'mv2'
    allowed_domains = ['tbmm.gov.tr']
    start_urls = ['https://www.tbmm.gov.tr/Milletvekilleri/liste']

    def parse(self, response):
        mv_list = response.xpath("//ul[@class='list-group list-group-flush']")  # taking all MPs listed

        for mv in mv_list:
            name = mv.xpath("./li/div/div/a/text()").get()  # MP's name taken
            party = mv.xpath("./li/div/div[@class='col-md-4 text-right']/text()").get().strip()  # MP's party name taken
            partial_link = mv.xpath('.//div[@class="col-md-8"]/a/@href').get()
            full_link = response.urljoin(partial_link)

            yield Request(full_link, callback=self.mv_analysis, meta={
                'name': name,
                'party': party
            })

    def mv_analysis(self, response):
        name = response.meta.get('name')
        party = response.meta.get('party')

        billprop_link_path = response.xpath(".//a[contains(text(),'İmzası Bulunan Kanun Teklifleri')]/@href").get()
        billprop_link = response.urljoin(billprop_link_path)

        questionprop_link_path = response.xpath(".//a[contains(text(),'Sahibi Olduğu Yazılı Soru Önergeleri')]/@href").get()
        questionprop_link = response.urljoin(questionprop_link_path)

        speech_link_path = response.xpath(".//a[contains(text(),'Genel Kurul Konuşmaları')]/@href").get()
        speech_link = response.urljoin(speech_link_path)

        yield Request(billprop_link, callback=self.bill_prop_counter, meta={
            'name': name,
            'party': party
        })  # number of bill proposals to be requested

        yield Request(questionprop_link, callback=self.quest_prop_counter, meta={
            'name': name,
            'party': party
        })  # number of question proposals to be requested

        yield Request(speech_link, callback=self.speech_counter, meta={
            'name': name,
            'party': party
        })  # number of speeches to be requested

    # COUNTING FUNCTIONS

    def bill_prop_counter(self, response):
        name = response.meta.get('name')
        party = response.meta.get('party')

        billproposals = response.xpath("//tr[@valign='TOP']")

        yield {'bill_prop_count': len(billproposals),
               'name': name,
               'party': party}

    def quest_prop_counter(self, response):
        name = response.meta.get('name')
        party = response.meta.get('party')

        researchproposals = response.xpath("//tr[@valign='TOP']")

        yield {'res_prop_count': len(researchproposals),
               'name': name,
               'party': party}

    def speech_counter(self, response):
        name = response.meta.get('name')
        party = response.meta.get('party')

        speeches = response.xpath("//tr[@valign='TOP']")

        yield {'speech_count': len(speeches),
               'name': name,
               'party': party}
ANSWER
Answered 2021-Dec-18 at 06:26
This is happening because you are yielding plain dicts instead of item objects, so the spider engine has no declaration of which fields the output should contain by default.
To get bill_prop_count and res_prop_count into the CSV output, make the following changes in your code:
1 - Create a base item object with all the desired fields - you can create this in the items.py file of your scrapy project:
from scrapy import Item, Field


class MvItem(Item):
    name = Field()
    party = Field()
    bill_prop_count = Field()
    res_prop_count = Field()
    speech_count = Field()
2 - Import the item into the spider code and yield items populated from the dicts, instead of plain dicts:
from your_project.items import MvItem

...

# COUNTING FUNCTIONS
def bill_prop_counter(self, response):
    name = response.meta.get('name')
    party = response.meta.get('party')

    billproposals = response.xpath("//tr[@valign='TOP']")

    yield MvItem(**{'bill_prop_count': len(billproposals),
                    'name': name,
                    'party': party})

def quest_prop_counter(self, response):
    name = response.meta.get('name')
    party = response.meta.get('party')

    researchproposals = response.xpath("//tr[@valign='TOP']")

    yield MvItem(**{'res_prop_count': len(researchproposals),
                    'name': name,
                    'party': party})

def speech_counter(self, response):
    name = response.meta.get('name')
    party = response.meta.get('party')

    speeches = response.xpath("//tr[@valign='TOP']")

    yield MvItem(**{'speech_count': len(speeches),
                    'name': name,
                    'party': party})
The output CSV will contain all of the item's columns:
bill_prop_count,name,party,res_prop_count,speech_count
,Abdullah DOĞRU,AK Parti,,11
,Mehmet Şükrü ERDİNÇ,AK Parti,,3
,Muharrem VARLI,MHP,,13
,Muharrem VARLI,MHP,0,
,Jülide SARIEROĞLU,AK Parti,,3
,İbrahim Halil FIRAT,AK Parti,,7
20,Burhanettin BULUT,CHP,,
,Ünal DEMİRTAŞ,CHP,,22
...
Now, if you want all three counts in the same row, you'll have to change the design of your spider: run one counting function at a time, passing the partially filled item along in the request's meta attribute.
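A minimal, framework-free sketch of that chained design (the function `scrape_one_mp` and the stub row lists are hypothetical stand-ins for the spider's pages, not part of the original code):

```python
# Sketch of the chained design: each step adds its own count to a shared item
# dict and hands the item to the next step; only the last step emits the
# finished row. In a real Scrapy spider each hand-off would be a Request
# carrying the item in meta; here plain function calls stand in for requests,
# and the "rows" lists stand in for the //tr[@valign='TOP'] nodes.

def bill_prop_counter(item, rows):
    item['bill_prop_count'] = len(rows)   # first count; forward the item
    return item

def quest_prop_counter(item, rows):
    item['res_prop_count'] = len(rows)    # second count; forward the item
    return item

def speech_counter(item, rows):
    item['speech_count'] = len(rows)      # last count; item is now complete
    return item

def scrape_one_mp(name, party, pages):
    """Run the three counters in sequence for one MP and return one full row."""
    item = {'name': name, 'party': party}
    item = bill_prop_counter(item, pages['bills'])
    item = quest_prop_counter(item, pages['questions'])
    item = speech_counter(item, pages['speeches'])
    return item
```

With this shape, every emitted row carries all three counts at once, instead of three partial rows per MP.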
QUESTION
Rails. Puma stops working when instantiating a client of Google Cloud Text-to-Speech (Windows)
Asked 2021-Dec-15 at 22:07
I've upgraded my Ruby version from 2.5.x to 2.6.x (and uninstalled the 2.5.x version). Now the Puma server stops working when instantiating a client of Google Cloud Text-to-Speech:
client = Google::Cloud::TextToSpeech.text_to_speech
It just exits without an error (in Command Prompt), and there is a 'Segmentation fault' message in the bash terminal.
Puma config file:
max_threads_count = ENV.fetch("RAILS_MAX_THREADS") { 5 }
min_threads_count = ENV.fetch("RAILS_MIN_THREADS") { max_threads_count }
threads min_threads_count, max_threads_count
port ENV.fetch("PORT") { 3000 }
environment ENV.fetch("RAILS_ENV") { ENV['RACK_ENV'] || "development" }
pidfile ENV.fetch("PIDFILE") { "tmp/pids/server.pid" }
# workers ENV.fetch("WEB_CONCURRENCY") { 2 }
preload_app!
plugin :tmp_restart
The method that works with Google Cloud Text-to-Speech:
require "google/cloud/text_to_speech"

# Instantiates a client
client = Google::Cloud::TextToSpeech.text_to_speech

...
Gemfile:
gem 'rails', '6.0.1'
gem 'puma', '3.12.2'
gem 'google-cloud-text_to_speech', '1.1.1'
...
OS Windows 10.
I'm confused: I don't understand why this happens or how to fix it. I've tried the latest versions of the 'puma' and 'google-cloud-text_to_speech' gems and reinstalled the Google Cloud SDK, but it keeps happening.
Maybe something's wrong with my credentials? The configure method shows nil credentials when I run it before instantiating the client:
(byebug) Google::Cloud::TextToSpeech.configure
<Config: endpoint="texttospeech.googleapis.com" credentials=nil scope=nil lib_name=nil lib_version=nil interceptors=nil timeout=nil metadata=nil retry_policy=nil quota_project=nil>
Can somebody please help me?
ANSWER
Answered 2021-Dec-07 at 08:52
Try reinstalling ruby-debug:
sudo gem uninstall ruby-debug
sudo gem install ruby-debug
Also, could you expand your question to include your Gemfile and Gemfile.lock?
Another approach may be deleting Gemfile.lock and then running bundle install:
rm -rf Gemfile.lock
bundle install
QUESTION
R - Regular Expression to Extract Text Between Parentheses That Contain Keyword
Asked 2021-Nov-13 at 22:41
I need to extract the text from between parentheses when a keyword appears inside the parentheses.
So if I have a string that looks like this:
('one', 'CARDINAL'), ('Castro', 'PERSON'), ('Latin America', 'LOC'), ('Somoza', 'PERSON')
and my keyword is "LOC", I want to extract only ('Latin America', 'LOC'), not the others.
Help is appreciated!!
This is a sample of my data set, a csv file:
,speech_id,sentence,date,speaker,file,parsed_text,named_entities
0,950094636,Let me state that the one sure way we can make it easy for Castro to continue to gain converts in Latin America is if we continue to support regimes of the ilk of the Somoza family,19770623,Mr. OBEY,06231977.txt,Let me state that the one sure way we can make it easy for Castro to continue to gain converts in Latin America is if we continue to support regimes of the ilk of the Somoza family,"[('one', 'CARDINAL'), ('Castro', 'PERSON'), ('Latin America', 'LOC'), ('Somoza', 'PERSON')]"
1,950094636,That is how we encourage the growth of communism,19770623,Mr. OBEY,06231977.txt,That is how we encourage the growth of communism,[]
2,950094636,That is how we discourage the growth of democracy in Latin America,19770623,Mr. OBEY,06231977.txt,That is how we discourage the growth of democracy in Latin America,"[('Latin America', 'LOC')]"
3,950094636,Mr Chairman,19770623,Mr. OBEY,06231977.txt,Mr Chairman,[]
4,950094636,given the speeches I have made lately about the press,19770623,Mr. OBEY,06231977.txt,given the speeches I have made lately about the press,[]
5,950094636,I am not one,19770623,Mr. OBEY,06231977.txt,I am not one,[]
6,950094636,I suppose,19770623,Mr. OBEY,06231977.txt,I suppose,[]
I am trying to extract just parentheses with the word LOC:
regex <- "(?=\\().*? \'LOC.*?(?<=\\))"

filtered_df$clean_NE <- str_extract_all(filtered_df$named_entities, regex)
The above regular expression does not work. Thanks!
ANSWER
Answered 2021-Nov-13 at 22:41
You can use
str_extract_all(filtered_df$named_entities, "\\([^()]*'LOC'[^()]*\\)")
See the regex demo. Details:
\( - a ( char
[^()]* - zero or more chars other than ( and )
'LOC' - a 'LOC' string
[^()]* - zero or more chars other than ( and )
\) - a ) char
See the online R demo:
library(stringr)
x <- "[('one', 'CARDINAL'), ('Castro', 'PERSON'), ('Latin America', 'LOC'), ('Somoza', 'PERSON')]"
str_extract_all(x, "\\([^()]*'LOC'[^()]*\\)")
# => [1] "('Latin America', 'LOC')"
As a bonus solution, to get Latin America itself you can use
str_extract_all(x, "[^']+(?=',\\s*'LOC'\\))")
# => [1] "Latin America"
Here, [^']+(?=',\s*'LOC'\)) matches one or more chars other than ' that are followed by ',, zero or more whitespace chars, and then the 'LOC') string.
Community Discussions contain sources that include Stack Exchange Network