asgiref | ASGI specification and utilities
kandi X-RAY | asgiref Summary
- Start the event loop
- Periodic application checks
- Log an exception raised by application
asgiref Key Features
asgiref Examples and Code Snippets
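As a small illustrative example of the helpers most users reach for first (a sketch added for orientation, not one of the snippets indexed above), asgiref.sync bridges synchronous and asynchronous code:

import asyncio
from asgiref.sync import async_to_sync, sync_to_async

async def fetch_greeting(name):
    # Stand-in for real async I/O.
    await asyncio.sleep(0.1)
    return f"Hello, {name}"

def blocking_lookup(name):
    # Stand-in for a blocking call such as a classic database driver.
    return f"{name} found"

# Call an async function from synchronous code:
print(async_to_sync(fetch_greeting)("asgiref"))

# Call a blocking function from async code without blocking the event loop:
async def main():
    print(await sync_to_async(blocking_lookup)("asgiref"))

asyncio.run(main())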
Trending Discussions on asgiref
QUESTION
Deploying to an AWS EC2 Ubuntu server has been very difficult for me since I'm coming from a Windows background. I encounter an error while trying to bind my Django application to gunicorn. The command I'm running is sudo gunicorn --bind 0.0.0.0:8000 logistics.wsgi:application
And the error log is shown below:
(venv) ubuntu@ip-172-31-18-196:/var/www/html$ sudo gunicorn --bind 0.0.0.0:8000 logistics.wsgi:application
[2021-09-08 11:21:00 +0000] [29379] [INFO] Starting gunicorn 20.1.0
[2021-09-08 11:21:00 +0000] [29379] [INFO] Listening at: http://0.0.0.0:8000 (29379)
[2021-09-08 11:21:00 +0000] [29379] [INFO] Using worker: sync
[2021-09-08 11:21:00 +0000] [29382] [INFO] Booting worker with pid: 29382
[2021-09-08 11:21:00 +0000] [29382] [ERROR] Exception in worker process
Traceback (most recent call last):
File "/usr/local/lib/python3.5/dist-packages/gunicorn/arbiter.py", line 589, in spawn_worker
worker.init_process()
File "/usr/local/lib/python3.5/dist-packages/gunicorn/workers/base.py", line 134, in init_process
self.load_wsgi()
File "/usr/local/lib/python3.5/dist-packages/gunicorn/workers/base.py", line 146, in load_wsgi
self.wsgi = self.app.wsgi()
File "/usr/local/lib/python3.5/dist-packages/gunicorn/app/base.py", line 67, in wsgi
self.callable = self.load()
File "/usr/local/lib/python3.5/dist-packages/gunicorn/app/wsgiapp.py", line 58, in load
return self.load_wsgiapp()
File "/usr/local/lib/python3.5/dist-packages/gunicorn/app/wsgiapp.py", line 48, in load_wsgiapp
return util.import_app(self.app_uri)
File "/usr/local/lib/python3.5/dist-packages/gunicorn/util.py", line 359, in import_app
mod = importlib.import_module(module)
File "/usr/lib/python3.5/importlib/__init__.py", line 126, in import_module
return _bootstrap._gcd_import(name[level:], package, level)
File "", line 986, in _gcd_import
File "", line 969, in _find_and_load
File "", line 958, in _find_and_load_unlocked
File "", line 673, in _load_unlocked
File "", line 665, in exec_module
File "", line 222, in _call_with_frames_removed
File "/var/www/html/logistics/wsgi.py", line 12, in
from django.core.wsgi import get_wsgi_application
File "/usr/local/lib/python3.5/dist-packages/django/core/wsgi.py", line 2, in
from django.core.handlers.wsgi import WSGIHandler
File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/wsgi.py", line 3, in
from django.conf import settings
File "/usr/local/lib/python3.5/dist-packages/django/conf/__init__.py", line 19, in
from django.utils.deprecation import RemovedInDjango40Warning
File "/usr/local/lib/python3.5/dist-packages/django/utils/deprecation.py", line 5, in
from asgiref.sync import sync_to_async
File "/usr/local/lib/python3.5/dist-packages/asgiref/sync.py", line 115
launch_map: "Dict[asyncio.Task[object], threading.Thread]" = {}
^
SyntaxError: invalid syntax
[2021-09-08 11:21:00 +0000] [29382] [INFO] Worker exiting (pid: 29382)
[2021-09-08 11:21:00 +0000] [29379] [INFO] Shutting down: Master
[2021-09-08 11:21:00 +0000] [29379] [INFO] Reason: Worker failed to boot.
When I run gunicorn --bind 0.0.0.0:8000 logistics.wsgi:application (that is, without sudo) I get another error:
(venv) ubuntu@ip-172-31-18-196:/var/www/html$ gunicorn --bind 0.0.0.0:8000 logistics.wsgi:application
Traceback (most recent call last):
File "/home/ubuntu/.local/bin/gunicorn", line 7, in
from gunicorn.app.wsgiapp import run
ModuleNotFoundError: No module named 'gunicorn'
But I have already installed gunicorn with the command pip3 install gunicorn --user. The reason why I added --user at the end is that running pip3 install gunicorn within the activated virtual environment throws back a permission error, as shown below:
(venv) ubuntu@ip-172-31-18-196:/var/www/html$ pip3 install gunicorn
Collecting gunicorn
Using cached https://files.pythonhosted.org/packages/e4/dd/5b190393e6066286773a67dfcc2f9492058e9b57c4867a95f1ba5caf0a83/gunicorn-20.1.0-py3-none-any.whl
Requirement already satisfied: setuptools>=3.0 in ./venv/lib/python3.6/site-packages (from gunicorn) (40.6.2)
Installing collected packages: gunicorn
Could not install packages due to an EnvironmentError: [Errno 13] Permission denied: '/var/www/html/venv/lib/python3.6/site-packages/gunicorn-20.1.0.dist-info'
Consider using the `--user` option or check the permissions.
You are using pip version 18.1, however version 21.2.4 is available.
You should consider upgrading via the 'pip install --upgrade pip' command.
Again, I have already upgraded from python3.5 to python3.6, such that when I run python3 --version on the terminal I get the following output:
(venv) ubuntu@ip-172-31-18-196:/var/www/html$ python3 --version
Python 3.6.13
Yet, I don't know why the error log is making reference to python3.5 instead of python3.6, as shown here: File "/usr/local/lib/python3.5/dist-packages/gunicorn/app/wsgiapp.py", whenever I run sudo gunicorn --bind 0.0.0.0:8000 logistics.wsgi:application.
I would like to know why I'm getting that error.
ANSWER
Answered 2022-Mar-14 at 14:49
You have to log in as the ubuntu user and NOT as sudo su/root.
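The root cause of the SyntaxError in the question is worth spelling out: running under sudo picks up the system Python 3.5, and the asgiref line it chokes on is a PEP 526 variable annotation, which only Python 3.6+ can parse. A minimal reproduction (a sketch, not code taken from the deployment):

# annotation_check.py
# Run with python3.6 -> prints OK; run with python3.5 -> SyntaxError,
# just like the asgiref/sync.py traceback above.
launch_map: dict = {}  # PEP 526 variable annotation, supported from Python 3.6
print("OK, variable annotations are supported")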
Stage 1: Binding your Gunicorn to your Django app and checking if the upstream gunicorn is working fine. Please note the deployment is incomplete without other stages
sudo apt-get update
sudo apt-get upgrade
Optional - If it shows a popup/options then just select the pkg maintainer version.
sudo apt-get install python3-venv
python3 -m venv env
source env/bin/activate
pip3 install django
git clone
pip3 install gunicorn
sudo apt-get install -y nginx
cd to your project directory where settings.py, db.sqlite3 and all the other files of your project are stored.
pip3 install -r requirements.txt
gunicorn --bind 0.0.0.0:8000 <your project name>.wsgi:application
Note: your project name is the main app name which you created in the beginning with the django-admin startproject command.
You will see:
[2021-09-08 15:20:17 +0000] [12789] [INFO] Starting gunicorn 20.1.0
[2021-09-08 15:20:17 +0000] [12789] [INFO] Listening at: http://0.0.0.0:8000 (12789)
[2021-09-08 15:20:17 +0000] [12789] [INFO] Using worker: sync
[2021-09-08 15:20:17 +0000] [12791] [INFO] Booting worker with pid: 12791
which means you have successfully bound your gunicorn to your Django app, and your Django app is now ready to be linked with a webserver (NGINX in our case). This marks the completion of Stage 1. To test that Stage 1 succeeded you can type in your IP with port :8000 and see your application run (make sure your AWS security group allows port 8000, else you will see a 404), but the above Booting worker with pid confirms that it's working.
Stage 2: Setting up supervisor so that your Gunicorn autostarts your Django app on reboot and after the first boot.
sudo apt-get install -y supervisor
cd /etc/supervisor/conf.d/
sudo touch gunicorn.conf
sudo nano gunicorn.conf
This will open up an editor where you have to type in the script for gunicorn (the bind which we did in Stage 1, but now we are telling supervisor to bind gunicorn every time the server/instance starts).
[program:gunicorn]
directory = /home/ubuntu/<your project directory>
command = /home/ubuntu/env/bin/gunicorn --workers 3 --bind unix:/home/ubuntu/<your project directory>/app.sock <your project name>.wsgi:application
autostart = true
autorestart = true
stderr_logfile = /var/log/gunicorn/gunicorn.err.log
stdout_logfile = /var/log/gunicorn/gunicorn.out.log
[group:guni]
programs:gunicorn
sudo mkdir /var/log/gunicorn
Here we are creating the log folder for our gunicorn output and error logs.
sudo supervisorctl reread
If you see guni:available here, it means your supervisor is all set and you have properly done everything till now.
sudo supervisorctl update
If you see guni: added process group, this is another marker that you have properly done the steps.
sudo supervisorctl status
If you see guni:gunicorn RUNNING, this is the third and final indicator that your gunicorn is now properly set up. After all these steps it is confirmed that your gunicorn is communicating bi-directionally through the app.sock file, which is automatically created inside your project directory.
Stage 3: Final step to link your Gunicorn upstream server to your NGINX
cd /etc/nginx/sites-available
sudo touch django.conf
sudo nano django.conf
This will open up your nano editor where you have to type in these exact server settings.
server {
listen 80;
server_name <your IP or domain>;
#server_name 192.168.0.1 yourdomain.com your_alternate_domain.com; this is how you can add multiple hosts. Do not add any comma, just separate them with spaces
location / {
include proxy_params;
proxy_pass http://unix:/home/ubuntu/Appdir/app.sock;
}
}
sudo nginx -t
(This will test the syntax of your config file)
sudo ln -s /etc/nginx/sites-available/django.conf /etc/nginx/sites-enabled/
<---- this is a very crucial step; make sure there is no typing mistake here, as it creates the symlink that enables the site
sudo nginx -t
sudo service nginx restart
With this you have now linked your Nginx to your gunicorn upstream on app.sock, so head to your browser and type in the IP address of your instance and you will see your app live.
If you can see your website without any CSS then you have followed everything properly. I will re-edit my answer on how to serve static files of your Django app in Nginx once you confirm that everything works.
QUESTION
I'm trying to run the below Dockerfile using docker-compose. I searched around but I couldn't find a solution on how to install cffi with python:3.9-alpine.
I also read this post, which states that pip 21.2.4 or greater can be a possible solution, but it didn't work out for me:
https://www.pythonfixing.com/2021/09/fixed-why-i-getting-this-error-while.html
Dockerfile
FROM python:3.9-alpine
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY ./requirements.txt .
RUN apk add --update --no-cache postgresql-client
RUN apk add --update --no-cache --virtual .tmp-build-deps \
gcc libc-dev linux-headers postgresql-dev
RUN pip3 install --upgrade pip && pip3 install -r /requirements.txt
RUN apk del .tmp-build-deps
RUN mkdir /app
WORKDIR /app
COPY . /app
RUN adduser -D user
USER user
This is the requirements.txt file.
asgiref==3.5.0
backports.zoneinfo==0.2.1
certifi==2021.10.8
cffi==1.15.0
cfgv==3.3.1
...
Error message:
process-exited-with-error
#9 47.99
#9 47.99 × Running setup.py install for cffi did not run successfully.
#9 47.99 │ exit code: 1
#9 47.99 ╰─> [58 lines of output]
#9 47.99 Package libffi was not found in the pkg-config search path.
#9 47.99 Perhaps you should add the directory containing `libffi.pc'
#9 47.99 to the PKG_CONFIG_PATH environment variable
#9 47.99 Package 'libffi', required by 'virtual:world', not found
#9 47.99 Package libffi was not found in the pkg-config search path.
#9 47.99 Perhaps you should add the directory containing `libffi.pc'
#9 47.99 to the PKG_CONFIG_PATH environment variable
#9 47.99 Package 'libffi', required by 'virtual:world', not found
#9 47.99 Package libffi was not found in the pkg-config search path.
#9 47.99 Perhaps you should add the directory containing `libffi.pc'
#9 47.99 to the PKG_CONFIG_PATH environment variable
#9 47.99 Package 'libffi', required by 'virtual:world', not found
#9 47.99 Package libffi was not found in the pkg-config search path.
#9 47.99 Perhaps you should add the directory containing `libffi.pc'
#9 47.99 to the PKG_CONFIG_PATH environment variable
#9 47.99 Package 'libffi', required by 'virtual:world', not found
#9 47.99 Package libffi was not found in the pkg-config search path.
#9 47.99 Perhaps you should add the directory containing `libffi.pc'
#9 47.99 to the PKG_CONFIG_PATH environment variable
#9 47.99 Package 'libffi', required by 'virtual:world', not found
#9 47.99 running install
#9 47.99 running build
#9 47.99 running build_py
#9 47.99 creating build
#9 47.99 creating build/lib.linux-aarch64-3.9
#9 47.99 creating build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/__init__.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/cffi_opcode.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/commontypes.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/vengine_gen.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/vengine_cpy.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/backend_ctypes.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/api.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/ffiplatform.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/verifier.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/error.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/setuptools_ext.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/lock.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/recompiler.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/pkgconfig.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/cparser.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/model.py -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/_cffi_include.h -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/parse_c_type.h -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/_embedding.h -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 copying cffi/_cffi_errors.h -> build/lib.linux-aarch64-3.9/cffi
#9 47.99 warning: build_py: byte-compiling is disabled, skipping.
#9 47.99
#9 47.99 running build_ext
#9 47.99 building '_cffi_backend' extension
#9 47.99 creating build/temp.linux-aarch64-3.9
#9 47.99 creating build/temp.linux-aarch64-3.9/c
#9 47.99 gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -DTHREAD_STACK_SIZE=0x100000 -fPIC -DUSE__THREAD -DHAVE_SYNC_SYNCHRONIZE -I/usr/include/ffi -I/usr/include/libffi -I/usr/local/include/python3.9 -c c/_cffi_backend.c -o build/temp.linux-aarch64-3.9/c/_cffi_backend.o
#9 47.99 c/_cffi_backend.c:15:10: fatal error: ffi.h: No such file or directory
#9 47.99 15 | #include <ffi.h>
#9 47.99 | ^~~~~~~
#9 47.99 compilation terminated.
#9 47.99 error: command '/usr/bin/gcc' failed with exit code 1
#9 47.99 [end of output]
#9 47.99
#9 47.99 note: This error originates from a subprocess, and is likely not a problem with pip.
#9 47.99 error: legacy-install-failure
#9 47.99
#9 47.99 × Encountered error while trying to install package.
#9 47.99 ╰─> cffi
#9 47.99
#9 47.99 note: This is an issue with the package mentioned above, not pip.
#9 47.99 hint: See above for output from the failure.
ANSWER
Answered 2022-Mar-06 at 16:29
The libffi library is missing.
Add it to your dockerfile:
RUN apk add libffi-dev
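After adding libffi-dev and rebuilding the image, a quick sanity check that the cffi extension compiled and imports cleanly might look like this (a minimal sketch run inside the rebuilt container):

import cffi

ffi = cffi.FFI()
ffi.cdef("int printf(const char *format, ...);")  # declare a C signature to exercise the parser
print("cffi", cffi.__version__, "imported and usable")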
QUESTION
I get this error when I try to install pyodbc. I have already installed Visual Studio and I have Microsoft Visual C++ 12 and 15-19 on my machine, but it's still giving this error.
Running setup.py clean for pyodbc
Failed to build pyodbc
Installing collected packages: sqlparse, pytz, asgiref, pyodbc, Django, Pillow, mssql-django, django-crispy-forms
Running setup.py install for pyodbc ... error
ERROR: Command errored out with exit status 1:
command: 'C:\Users\Athar\Desktop\New folder\Project\HeatlhCare\venv\Scripts\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Athar\\AppData\\Local\\Temp\\pip-install-w0wwm18g\\pyodbc_61963e883a8543fea24a63b1c522bbea\\setup.py'"'"'; __file__='"'"'C:\\Users\\Athar\\AppData\\Local\\Temp\\pip-install-w0wwm18g\\pyodbc_61963e883a8543fea24a63b1c522bbea\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Athar\AppData\Local\Temp\pip-record-t1td50y6\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\Athar\Desktop\New folder\Project\HeatlhCare\venv\include\site\python3.10\pyodbc'
cwd: C:\Users\Athar\AppData\Local\Temp\pip-install-w0wwm18g\pyodbc_61963e883a8543fea24a63b1c522bbea\
Complete output (7 lines):
running install
C:\Users\Athar\Desktop\New folder\Project\HeatlhCare\venv\lib\site-packages\setuptools\command\install.py:34: SetuptoolsDeprecationWarning: setup.py install is deprecated. Use build and pip and other standards-based tools.
warnings.warn(
running build
running build_ext
building 'pyodbc' extension
error: Microsoft Visual C++ 14.0 or greater is required. Get it with "Microsoft C++ Build Tools": https://visualstudio.microsoft.com/visual-cpp-build-tools/
----------------------------------------
ERROR: Command errored out with exit status 1: 'C:\Users\Athar\Desktop\New folder\Project\HeatlhCare\venv\Scripts\python.exe' -u -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\\Users\\Athar\\AppData\\Local\\Temp\\pip-install-w0wwm18g\\pyodbc_61963e883a8543fea24a63b1c522bbea\\setup.py'"'"'; __file__='"'"'C:\\Users\\Athar\\AppData\\Local\\Temp\\pip-install-w0wwm18g\\pyodbc_61963e883a8543fea24a63b1c522bbea\\setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record 'C:\Users\Athar\AppData\Local\Temp\pip-record-t1td50y6\install-record.txt' --single-version-externally-managed --compile --install-headers 'C:\Users\Athar\Desktop\New folder\Project\HeatlhCare\venv\include\site\python3.10\pyodbc' Check the logs for full command output.
ANSWER
Answered 2021-Nov-12 at 13:38
The current release of pyodbc (4.0.32) does not have pre-built wheel files for Python 3.10. The easiest way to get it installed at the moment is to download the appropriate wheel from
https://www.lfd.uci.edu/~gohlke/pythonlibs/#pyodbc
and then install it. For example, if you are running 64-bit Python then you would download the 64-bit wheel and use
pip install pyodbc-4.0.32-cp310-cp310-win_amd64.whl
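Once the wheel is installed, a short check that the module loads and can see your ODBC drivers might look like this (a sketch; the connection string is hypothetical and shown only as a comment):

import pyodbc

print(pyodbc.version)    # installed pyodbc version
print(pyodbc.drivers())  # ODBC drivers visible on this machine

# Hypothetical connection string, for illustration only:
# conn = pyodbc.connect(
#     "DRIVER={ODBC Driver 17 for SQL Server};SERVER=localhost;DATABASE=test;Trusted_Connection=yes;"
# )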
QUESTION
I'm taking over a project. 5 engineers worked on this for several years, but they are all gone. I've been tasked with trying to revive this project and keep it going. It's a big Python project with several complicated install scripts which, nowadays, have many version errors, because the stuff that worked 3 or 4 years ago is all long since deprecated and possibly discontinued.
Buried deep in one of the many install scripts (they all call each other multiple times, in a spaghetti that I cannot figure out) there is probably an instruction that sets up a virtual environment, but I can't find the line and I don't care. This software is going onto a clean install of an EC2 (with Centos 7) that I control completely. And this piece of software is the only software that will ever run on this EC2 instance, so I'm happy to install everything globally.
The install script was unable to find Python 3.6 so I manually did this:
pip3 install astroid
pip3 install cycler
pip3 install decorator
pip3 install fancycompleter
pip3 install ipython
pip3 install isort
pip3 install kiwisolver
pip3 install lazy-object-proxy
pip3 install matplotlib
pip3 install numpy
pip3 install pillow
pip3 install platformdirs
pip3 install pluggy
pip3 install prompt-toolkit
pip3 install pygments
pip3 install pyparsing
pip3 install pytest
pip3 install tomli
pip3 install typed-ast
pip3 install typing-extensions
pip3 install asgiref
pip3 install charset-normalizer
pip3 install click
pip3 install django
pip3 install django-timezone-field
pip3 install idna
pip3 install markdown
pip3 install markupsafe
pip3 install pyodbc
pip3 install python-nmap
pip3 install pyyaml
pip3 install sqlparse
pip3 install uritemplate
If I go into the Python shell and run help('modules') I can see that Django is installed.
But when I run the final install script:
/home/centos/blueflow/blueflow/bin/blueflow-django-manage compilejsx -o ./blueflow/jsx/JSXRegistry.jsx
Part of the error reads:
Couldn't import Django. Are you sure it's installed
What would be the fastest way to get the code to see the right path to Django? Is there an obvious way I can break out of all virtual environments and force the code to use the global environment for everything (which I'm thinking would be the simplest approach)?
Courtesy Notice: Pipenv found itself running within a virtual environment, so it will automatically use that environment, instead of creating its own for any project. You can set PIPENV_IGNORE_VIRTUALENVS=1 to force pipenv to ignore that environment and create its own instead. You can set PIPENV_VERBOSITY=-1 to suppress this warning.
Loading .env environment variables...
Warning: There was an unexpected error while activating your virtualenv. Continuing anyway...
Traceback (most recent call last):
File "/home/centos/blueflow/blueflow/app/manage.py", line 20, in
"Couldn't import Django. Are you sure it's installed and "
ImportError: Couldn't import Django. Are you sure it's installed and available on your PYTHONPATH environment variable? Did you forget to activate a virtual environment?
[[ Update ]]
Someone asked about the contents of:
/home/centos/blueflow/blueflow/bin/blueflow-django-manage
This is the whole file:
#!/bin/bash
#
# blueflow-django-manage
#
# Shim for Django's "python manage.py" that supports different plaforms
# Copyright (C) 2018 Virta Laboratories, Inc. All rights reserved.
# Stop on errors
set -e
# Path to BlueFlow installation
BLUEFLOW_HOME=${BLUEFLOW_HOME:=`cd $(dirname $0)/.. && pwd -P`}
# Defaults
PIPENV=${BLUEFLOW_HOME}/bin/blueflow-pipenv
DJANGO_MANAGE="${PIPENV} run python ${BLUEFLOW_HOME}/app/manage.py"
# Platform-specific section
OS=`${BLUEFLOW_HOME}/bin/blueflow-os-detect`
case ${OS} in
"Ubuntu-16.04")
DJANGO_MANAGE="sudo -Eu blueflow ${DJANGO_MANAGE}"
;;
"RHEL7")
if [[ "${DJANGO_SETTINGS_MODULE}" != "app.settings.development" ]]; then
DJANGO_MANAGE="sudo -Eu blueflow ${DJANGO_MANAGE}"
else
DJANGO_MANAGE="sudo -Eu $(id -un) ${DJANGO_MANAGE}"
fi
;;
"RHEL8")
if [[ "${DJANGO_SETTINGS_MODULE}" != "app.settings.development" ]]; then
DJANGO_MANAGE="sudo -Eu blueflow ${DJANGO_MANAGE}"
else
DJANGO_MANAGE="sudo -Eu $(id -un) ${DJANGO_MANAGE}"
fi
;;
esac
# Replace this subshell with command
exec ${DJANGO_MANAGE} "$@"
ANSWER
Answered 2022-Feb-23 at 11:32
You can add any path like this:
import sys
sys.path.append("my/path/to/django")
Note that this is not the recommended way, but rather a fallback option.
You could also do print(sys.path) and print(sys.executable) (taken from here) to find out which python you are actually executing.
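Putting those suggestions together, a short diagnostic you could paste into the Python shell might look like this (the appended path is hypothetical; substitute wherever Django actually lives on your machine):

import sys

print(sys.executable)  # which interpreter is actually running
print(sys.path)        # where it will look for packages

# Hypothetical location of the globally installed Django; adjust to your system:
sys.path.append("/usr/local/lib/python3.6/site-packages")

import django
print(django.get_version())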
QUESTION
C:\Users\super\Desktop\Work\Charges2> uvicorn main:app --reload
Console Status:
PS C:\Users\super\Desktop\Work\Charges2> uvicorn main:app --reload
INFO: Will watch for changes in these directories: ['C:\\Users\\super\\Desktop\\Work\\Charges2']
INFO: Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO: Started reloader process [69628] using statreload
WARNING: The --reload flag should not be used in production on Windows.
INFO: Started server process [72184]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: 127.0.0.1:61648 - "GET / HTTP/1.1" 500 Internal Server Error
ERROR: Exception in ASGI application
Traceback (most recent call last):
File "C:\Users\super\Desktop\Work\Charges2\venv\lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 364, in run_asgi
result = await app(self.scope, self.receive, self.send)
File "C:\Users\super\Desktop\Work\Charges2\venv\lib\site-packages\uvicorn\middleware\proxy_headers.py", line 75, in __call__
return await self.app(scope, receive, send)
File "C:\Users\super\Desktop\Work\Charges2\venv\lib\site-packages\fastapi\applications.py", line 212, in __call__
await super().__call__(scope, receive, send)
File "C:\Users\super\Desktop\Work\Charges2\venv\lib\site-packages\starlette\applications.py", line 119, in __call__
await self.middleware_stack(scope, receive, send)
File "C:\Users\super\Desktop\Work\Charges2\venv\lib\site-packages\starlette\middleware\errors.py", line 181, in __call__
raise exc
File "C:\Users\super\Desktop\Work\Charges2\venv\lib\site-packages\starlette\middleware\errors.py", line 159, in __call__
await self.app(scope, receive, _send)
File "C:\Users\super\Desktop\Work\Charges2\venv\lib\site-packages\starlette\exceptions.py", line 87, in __call__
raise exc
File "C:\Users\super\Desktop\Work\Charges2\venv\lib\site-packages\starlette\exceptions.py", line 76, in __call__
await self.app(scope, receive, sender)
File "C:\Users\super\Desktop\Work\Charges2\venv\lib\site-packages\starlette\routing.py", line 659, in __call__
await route.handle(scope, receive, send)
File "C:\Users\super\Desktop\Work\Charges2\venv\lib\site-packages\starlette\responses.py", line 50, in __init__
self.init_headers(headers)
File "C:\Users\super\Desktop\Work\Charges2\venv\lib\site-packages\starlette\responses.py", line 77, in init_headers
and not (self.status_code < 200 or self.status_code in (204, 304))
TypeError: '<' not supported between instances of 'NoneType' and 'int'
Code (main.py):
from fastapi import FastAPI
app = FastAPI()
@app.get("/")
async def root():
return {"message": "Hello World"}
requirements.txt:
anyio==3.5.0
asgiref==3.5.0
click==8.0.3
colorama==0.4.4
fastapi==0.73.0
h11==0.13.0
idna==3.3
pydantic==1.9.0
sniffio==1.2.0
starlette==0.18.0
typing_extensions==4.1.1
uvicorn==0.17.4
Using Python 3.10.2
ANSWER
Answered 2022-Feb-16 at 18:02
I get this error when trying to install your packages with pip install -r requirements.txt:
ERROR: Cannot install -r requirements.txt (line 5) and starlette==0.18.0
because these package versions have conflicting dependencies.
The conflict is caused by:
The user requested starlette==0.18.0
fastapi 0.73.0 depends on starlette==0.17.1
There must be some conflict between your dependencies: fastapi 0.73.0 pins starlette==0.17.1, so either remove your explicit starlette==0.18.0 pin or change it to 0.17.1, then try a clean install of FastAPI and your requirements from scratch.
If you want to prevent such conflicts in the future, a popular solution is to use a dependency management tool, such as pipenv or poetry.
QUESTION
I need the following from async tasks in my Celery setup:
- Possibility to run asyncio coroutines.
- Correct celery behavior on exceptions and task retries.
- Possibility to use aioredis lock.
So, how to run async tasks properly to achieve the goal?
What is RuntimeError: await wasn't used with future (below), and how can I fix it?
I have already tried:
1. asgiref async_to_sync (from asgiref, https://pypi.org/project/asgiref/).
This option makes it possible to run asyncio coroutines, but the retry functionality doesn't work.
2. celery-pool-asyncio (https://pypi.org/project/celery-pool-asyncio/)
Same problem as in asgiref. (This option makes it possible to run asyncio coroutines, but the retry functionality doesn't work.)
3. Writing my own async-to-sync decorator
I tried to create my own decorator, like async_to_sync, that runs coroutines thread-safely (asyncio.run_coroutine_threadsafe), but I get the behavior described above.
I have also tried asyncio.run() or asyncio.get_event_loop().run_until_complete() (and self.retry(...)) inside the celery task. This works well, tasks run, retries work, but the coroutine execution is incorrect: inside the async function I cannot use aioredis.
Implementation notes:
- start celery command:
celery -A celery_test.celery_app worker -l info -n worker1 -P gevent --concurrency=10 --without-gossip --without-mingle
- celery app:
transport = f"redis://localhost/9"
celery_app = Celery("worker", broker=transport, backend=transport,
include=['tasks'])
celery_app.conf.broker_transport_options = {
'visibility_timeout': 60 * 60 * 24,
'fanout_prefix': True,
'fanout_patterns': True
}
- utils:
@contextmanager
def temp_asyncio_loop():
# asyncio.get_event_loop() automatically creates event loop only for main thread
try:
prev_loop = asyncio.get_event_loop()
except RuntimeError:
prev_loop = None
loop = asyncio.new_event_loop()
asyncio.set_event_loop(loop)
try:
yield loop
finally:
loop.stop()
loop.close()
del loop
asyncio.set_event_loop(prev_loop)
def with_temp_asyncio_loop(f):
@functools.wraps(f)
def wrapper(*args, **kwargs):
with temp_asyncio_loop() as t_loop:
return f(*args, loop=t_loop, **kwargs)
return wrapper
def await_(coro):
return asyncio.get_event_loop().run_until_complete(coro)
- tasks:
@celery_app.task(bind=True, max_retries=30, default_retry_delay=0)
@with_temp_asyncio_loop
def debug(self, **kwargs):
try:
await_(debug_async())
except Exception as exc:
self.retry(exc=exc)
async def debug_async():
async with RedisLock(f'redis_lock_{datetime.now()}'):
pass
- redis lock
class RedisLockException(Exception):
pass
class RedisLock(AsyncContextManager):
"""
Redis Lock class
:param lock_id: string (unique key)
:param value: dummy value
:param expire: int (time in seconds that key will storing)
:param expire_on_delete: int (time in seconds, set pause before deleting)
Usage:
try:
with RedisLock('123_lock', 5 * 60):
# do something
except RedisLockException:
"""
def __init__(self, lock_id: str, value='1', expire: int = 4, expire_on_delete: int = None):
self.lock_id = lock_id
self.expire = expire
self.value = value
self.expire_on_delete = expire_on_delete
async def acquire_lock(self):
return await redis.setnx(self.lock_id, self.value)
async def release_lock(self):
if self.expire_on_delete is None:
return await redis.delete(self.lock_id)
else:
await redis.expire(self.lock_id, self.expire_on_delete)
async def __aenter__(self, *args, **kwargs):
if not await self.acquire_lock():
raise RedisLockException({
'redis_lock': 'The process: {} still run, try again later'.format(await redis.get(self.lock_id))
})
await redis.expire(self.lock_id, self.expire)
async def __aexit__(self, exc_type, exc_value, traceback):
await self.release_lock()
On my Windows machine, await redis.setnx(...) blocks the celery worker; it stops producing logs and Ctrl+C doesn't work.
Inside the docker container, I receive an error. There is part of traceback:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/aioredis/connection.py", line 854, in read_response
response = await self._parser.read_response()
File "/usr/local/lib/python3.9/site-packages/aioredis/connection.py", line 366, in read_response
raise ConnectionError(SERVER_CLOSED_CONNECTION_ERROR)
aioredis.exceptions.ConnectionError: Connection closed by server.
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 451, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/celery/app/trace.py", line 734, in __protected_call__
return self.run(*args, **kwargs)
File "/usr/local/lib/python3.9/site-packages/celery/app/autoretry.py", line 54, in run
ret = task.retry(exc=exc, **retry_kwargs)
File "/usr/local/lib/python3.9/site-packages/celery/app/task.py", line 717, in retry
raise_with_context(exc)
File "/usr/local/lib/python3.9/site-packages/celery/app/autoretry.py", line 34, in run
return task._orig_run(*args, **kwargs)
File "/app/celery_tasks/tasks.py", line 69, in wrapper
return f(*args, **kwargs) # <--- inside with_temp_asyncio_loop from utils
...
File "/usr/local/lib/python3.9/contextlib.py", line 575, in enter_async_context
result = await _cm_type.__aenter__(cm)
File "/app/db/redis.py", line 50, in __aenter__
if not await self.acquire_lock():
File "/app/db/redis.py", line 41, in acquire_lock
return await redis.setnx(self.lock_id, self.value)
File "/usr/local/lib/python3.9/site-packages/aioredis/client.py", line 1064, in execute_command
return await self.parse_response(conn, command_name, **options)
File "/usr/local/lib/python3.9/site-packages/aioredis/client.py", line 1080, in parse_response
response = await connection.read_response()
File "/usr/local/lib/python3.9/site-packages/aioredis/connection.py", line 859, in read_response
await self.disconnect()
File "/usr/local/lib/python3.9/site-packages/aioredis/connection.py", line 762, in disconnect
await self._writer.wait_closed()
File "/usr/local/lib/python3.9/asyncio/streams.py", line 359, in wait_closed
await self._protocol._get_close_waiter(self)
RuntimeError: await wasn't used with future
- library versions:
celery==5.2.1
aioredis==2.0.0
ANSWER
Answered 2022-Feb-04 at 07:59
Maybe this helps: https://github.com/aio-libs/aioredis-py/issues/1273
The main point is:
replace all the calls to get_event_loop with get_running_loop, which would remove that Runtime exception when a future is attached to a different loop.
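As a minimal, self-contained illustration of that advice (it does not reproduce the Celery/aioredis setup from the question): inside a coroutine, ask for the loop that is actually running rather than whatever get_event_loop happens to return, and drive coroutines from synchronous code with asyncio.run:

import asyncio

async def do_work():
    # get_running_loop() raises if no loop is running, instead of silently
    # returning (or creating) a different loop the way get_event_loop() can.
    loop = asyncio.get_running_loop()
    await asyncio.sleep(0)
    return loop

loop_used = asyncio.run(do_work())  # creates, runs and closes its own loop
print(loop_used.is_closed())        # True: asyncio.run() closed the loop on exit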
QUESTION
I read a ton of articles, but still can't figure out what I'm missing. I'm running a django website from a virtualenv. Here's my config file. The website address has been replaced by a placeholder; I can't use the real one here.
Config
ServerAdmin sidharth@collaboration-management
ServerName
ServerAlias
DocumentRoot /home/sidharth/Downloads/gmcweb
ErrorLog ${APACHE_LOG_DIR}/error.log
CustomLog ${APACHE_LOG_DIR}/access.log combined
Alias /static /home/sidharth/Downloads/gmcweb/static
Require all granted
Require all granted
WSGIDaemonProcess gmcweb python-home=/home/sidharth/Downloads/gmcwebenvlin python-path=/home/sidharth/Downloads/gmcweb
WSGIProcessGroup gmcweb
WSGIScriptAlias / /home/sidharth/Downloads/gmcweb/gmcweb/wsgi.py
Here's my wsgi.py file; I didn't change anything, I never had to earlier.
import os
from django.core.wsgi import get_wsgi_application
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'gmcweb.settings')
application = get_wsgi_application()
Python Versions
My virtualenv Python version is 3.9.5. The default Google VM Python version is 3.6.9.
Python Installed Libraries
Package Version
------------------------ ---------
asgiref 3.4.0
attrs 21.2.0
autopep8 1.5.7
beautifulsoup4 4.9.3
certifi 2021.5.30
cffi 1.14.5
chardet 4.0.0
cryptography 3.4.7
defusedxml 0.7.1
Django 3.2.4
django-allauth 0.44.0
django-livereload-server 0.3.2
idna 2.10
jsonschema 3.2.0
oauthlib 3.1.1
pip 21.2.3
pycodestyle 2.7.0
pycparser 2.20
PyJWT 2.1.0
pyrsistent 0.18.0
python3-openid 3.2.0
pytz 2021.1
requests 2.25.1
requests-oauthlib 1.3.0
setuptools 57.4.0
six 1.16.0
soupsieve 2.2.1
sqlparse 0.4.1
toml 0.10.2
tornado 6.1
urllib3 1.26.6
I installed Apache mod_wsgi as well: sudo apt-get install python3-pip apache2 libapache2-mod-wsgi-py3
Error Log File
[Thu Sep 23 15:05:06.554545 2021] [mpm_event:notice] [pid 32077:tid 140392561593280] AH00489: Apache/2.4.29 (Ubuntu) mod_wsgi/4.5.17 Python/3.6 configured -- resuming normal operations
[Thu Sep 23 15:05:06.554594 2021] [core:notice] [pid 32077:tid 140392561593280] AH00094: Command line: '/usr/sbin/apache2'
[Thu Sep 23 15:05:19.081581 2021] [wsgi:error] [pid 32617:tid 140392409851648] [remote 103.206.177.13:49604] mod_wsgi (pid=32617): Target WSGI script '/home/sidharth/Downloads/gmcweb/gmcweb/wsgi.py' c$
[Thu Sep 23 15:05:19.081638 2021] [wsgi:error] [pid 32617:tid 140392409851648] [remote 103.206.177.13:49604] mod_wsgi (pid=32617): Exception occurred processing WSGI script '/home/sidharth/Downloads/g$
[Thu Sep 23 15:05:19.081828 2021] [wsgi:error] [pid 32617:tid 140392409851648] [remote 103.206.177.13:49604] Traceback (most recent call last):
[Thu Sep 23 15:05:19.081849 2021] [wsgi:error] [pid 32617:tid 140392409851648] [remote 103.206.177.13:49604] File "/home/sidharth/Downloads/gmcweb/gmcweb/wsgi.py", line 12, in <module>
[Thu Sep 23 15:05:19.081853 2021] [wsgi:error] [pid 32617:tid 140392409851648] [remote 103.206.177.13:49604] from django.core.wsgi import get_wsgi_application
[Thu Sep 23 15:05:19.081867 2021] [wsgi:error] [pid 32617:tid 140392409851648] [remote 103.206.177.13:49604] ModuleNotFoundError: No module named 'django'
[Thu Sep 23 15:05:32.244779 2021] [wsgi:error] [pid 32617:tid 140392325842688] [remote 103.206.177.13:52916] mod_wsgi (pid=32617): Target WSGI script '/home/sidharth/Downloads/gmcweb/gmcweb/wsgi.py' c$
[Thu Sep 23 15:05:32.244845 2021] [wsgi:error] [pid 32617:tid 140392325842688] [remote 103.206.177.13:52916] mod_wsgi (pid=32617): Exception occurred processing WSGI script '/home/sidharth/Downloads/g$
[Thu Sep 23 15:05:32.244924 2021] [wsgi:error] [pid 32617:tid 140392325842688] [remote 103.206.177.13:52916] Traceback (most recent call last):
[Thu Sep 23 15:05:32.244946 2021] [wsgi:error] [pid 32617:tid 140392325842688] [remote 103.206.177.13:52916] File "/home/sidharth/Downloads/gmcweb/gmcweb/wsgi.py", line 12, in <module>
[Thu Sep 23 15:05:32.244951 2021] [wsgi:error] [pid 32617:tid 140392325842688] [remote 103.206.177.13:52916] from django.core.wsgi import get_wsgi_application
[Thu Sep 23 15:05:32.244966 2021] [wsgi:error] [pid 32617:tid 140392325842688] [remote 103.206.177.13:52916] ModuleNotFoundError: No module named 'django'
ANSWER
Answered 2021-Sep-23 at 15:28
The error says that either you haven't got Django installed or you didn't activate the virtual environment in which Django was installed. Make sure that you check the list of installed packages and find Django in there, via:
$ pip list
QUESTION
I have tried the solutions to similar problems on here, but none seem to work. It seems that I get a memory error when installing tensorflow from requirements.txt. Does anyone know of a workaround? I believe that installing with --no-cache-dir would fix it, but I can't figure out how to get EB to do that. Thank you.
Logs:
----------------------------------------
Collecting tensorflow==2.8.0
Downloading tensorflow-2.8.0-cp38-cp38-manylinux2010_x86_64.whl (497.6 MB)
2022/02/05 22:08:17.264961 [ERROR] An error occurred during execution of command [app-deploy] - [InstallDependency]. Stop running the command. Error: fail to install dependencies with requirements.txt file with error Command /bin/sh -c /var/app/venv/staging-LQM1lest/bin/pip install -r requirements.txt failed with error exit status 2. Stderr:ERROR: Exception:
Traceback (most recent call last):
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/cli/base_command.py", line 164, in exc_logging_wrapper
status = run_func(*args)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/cli/req_command.py", line 205, in wrapper
return func(self, options, args)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/commands/install.py", line 338, in run
requirement_set = resolver.resolve(
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 92, in resolve
result = self._result = resolver.resolve(
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 482, in resolve
state = resolution.resolve(requirements, max_rounds=max_rounds)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 349, in resolve
self._add_to_criteria(self.state.criteria, r, parent=None)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/resolvelib/resolvers.py", line 173, in _add_to_criteria
if not criterion.candidates:
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/resolvelib/structs.py", line 151, in __bool__
return bool(self._sequence)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 155, in __bool__
return any(self)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in
return (c for c in iterator if id(c) not in self._incompatible_ids)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 47, in _iter_built
candidate = func()
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 201, in _make_candidate_from_link
self._link_candidate_cache[link] = LinkCandidate(
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 281, in __init__
super().__init__(
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 156, in __init__
self.dist = self._prepare()
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 225, in _prepare
dist = self._prepare_distribution()
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 292, in _prepare_distribution
return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/operations/prepare.py", line 482, in prepare_linked_requirement
return self._prepare_linked_requirement(req, parallel_builds)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/operations/prepare.py", line 527, in _prepare_linked_requirement
local_file = unpack_url(
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/operations/prepare.py", line 213, in unpack_url
file = get_http_url(
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/operations/prepare.py", line 94, in get_http_url
from_path, content_type = download(link, temp_dir.path)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/network/download.py", line 145, in __call__
for chunk in chunks:
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/cli/progress_bars.py", line 144, in iter
for x in it:
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_internal/network/utils.py", line 63, in response_chunks
for chunk in response.raw.stream(
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/urllib3/response.py", line 576, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/urllib3/response.py", line 519, in read
data = self._fp.read(amt) if not fp_closed else b""
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/cachecontrol/filewrapper.py", line 65, in read
self._close()
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/cachecontrol/filewrapper.py", line 52, in _close
self.__callback(self.__buf.getvalue())
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/cachecontrol/controller.py", line 309, in cache_response
cache_url, self.serializer.dumps(request, response, body=body)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/cachecontrol/serialize.py", line 72, in dumps
return b",".join([b"cc=4", msgpack.dumps(data, use_bin_type=True)])
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/msgpack/__init__.py", line 35, in packb
return Packer(**kwargs).pack(o)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/msgpack/fallback.py", line 960, in pack
self._pack(obj)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/msgpack/fallback.py", line 943, in _pack
return self._pack_map_pairs(
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/msgpack/fallback.py", line 1045, in _pack_map_pairs
self._pack(v, nest_limit - 1)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/msgpack/fallback.py", line 943, in _pack
return self._pack_map_pairs(
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/msgpack/fallback.py", line 1045, in _pack_map_pairs
self._pack(v, nest_limit - 1)
File "/var/app/venv/staging-LQM1lest/lib/python3.8/site-packages/pip/_vendor/msgpack/fallback.py", line 889, in _pack
return self._buffer.write(obj)
MemoryError
2022/02/05 22:08:17.264976 [INFO] Executing cleanup logic
2022/02/05 22:08:17.265065 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed to install application dependencies. The deployment failed.","timestamp":1644098897,"severity":"ERROR"},{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1644098897,"severity":"ERROR"}]}]}
Requirements.txt:
absl-py==1.0.0
asgiref==3.5.0
astunparse==1.6.3
awsebcli==3.20.3
backports.zoneinfo==0.2.1
botocore==1.23.49
cachetools==5.0.0
cement==2.8.2
certifi==2021.10.8
charset-normalizer==2.0.11
colorama==0.4.3
cycler==0.11.0
Django==4.0.2
django-crispy-forms==1.14.0
django-environ==0.8.1
flatbuffers==2.0
fonttools==4.29.1
future==0.16.0
gast==0.5.3
google-auth==2.6.0
google-auth-oauthlib==0.4.6
google-pasta==0.2.0
grpcio==1.43.0
h5py==3.6.0
idna==3.3
importlib-metadata==4.10.1
imutils==0.5.4
jmespath==0.10.0
keras==2.8.0
Keras-Preprocessing==1.1.2
kiwisolver==1.3.2
libclang==13.0.0
Markdown==3.3.6
matplotlib==3.5.1
numpy==1.22.2
oauthlib==3.2.0
opencv-python==4.5.5.62
opt-einsum==3.3.0
packaging==21.3
pathspec==0.9.0
Pillow==9.0.1
protobuf==3.19.4
psycopg2-binary==2.9.3
pyasn1==0.4.8
pyasn1-modules==0.2.8
pyparsing==3.0.7
python-dateutil==2.8.2
PyYAML==5.4.1
requests==2.26.0
requests-oauthlib==1.3.1
rsa==4.8
semantic-version==2.8.5
six==1.14.0
sqlparse==0.4.2
tensorboard==2.8.0
tensorboard-data-server==0.6.1
tensorboard-plugin-wit==1.8.1
tensorflow==2.8.0
tensorflow-io-gcs-filesystem==0.24.0
termcolor==1.1.0
tf-estimator-nightly==2.8.0.dev2021122109
typing_extensions==4.0.1
tzdata==2021.5
urllib3==1.26.8
wcwidth==0.1.9
Werkzeug==2.0.2
wrapt==1.13.3
zipp==3.7.0
ANSWER
Answered 2022-Feb-05 at 22:37
The error says MemoryError. You must upgrade your EC2 instance to something with more memory; tensorflow is a very memory-hungry application.
QUESTION
I created a django project, set up a virtual environment, and added django with poetry add.
Inside pyproject.toml:
[tool.poetry.dependencies]
python = "^3.9"
psycopg2-binary = "^2.9.3"
Django = "^4.0.1"
Inside the venv I run poetry show:
asgiref 3.5.0 ASGI specs, helper code, and adapters
django 4.0.1 A high-level Python web framework that encourages rapid development and clean, pragmatic design.
psycopg2-binary 2.9.3 psycopg2 - Python-PostgreSQL Database Adapter
sqlparse 0.4.2 A non-validating SQL parser.
When I run the command to create an app:
p manage.py startapp users apps/users
I get this error:
(base) ┌──(venv)─(tesla㉿kali)-[~/Documents/projects/graphql/graphenee]
└─$ p 1 ⨯
Python 3.9.7 (default, Sep 16 2021, 13:09:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'django'
>>>
The venv is set up and activated and django is installed, but I am still getting this error. Inside the virtual environment I start the python shell and import django:
Python 3.9.7 (default, Sep 16 2021, 13:09:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
Traceback (most recent call last):
File "", line 1, in
ModuleNotFoundError: No module named 'django'
Django is also globally installed, and when I start the python shell in the global environment, I can import django:
Python 3.9.7 (default, Sep 16 2021, 13:09:58)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import django
>>>
ANSWER
Answered 2022-Jan-31 at 06:29
It seems that you have manually created a virtual env in the project directory by e.g. python -m venv venv. So now you have one in /home/tesla/Documents/projects/graphql/graphenee/venv/.
After that you added some packages with poetry. However, by default poetry will only look for a .venv directory (note the starting dot) in the project directory. Since poetry did not find a .venv, it created a new virtual env in /home/tesla/.cache/pypoetry/virtualenvs/graphenee-CXeG5cZ_-py3.9 and installed the packages you added via poetry add there.
The problem is that you are trying to use the "empty" virtual env in the project directory instead of the one created by poetry. Fortunately, with poetry it is very easy to run commands even without activating the venv; just use poetry run in the project directory.
To check Django installation:
poetry run python
# Executes: /home/tesla/.cache/pypoetry/virtualenvs/graphenee-CXeG5cZ_-py3.9/bin/python
>>> import django
To run Django management commands:
poetry run ./manage.py startapp users apps/users
It will use the virtual env in /home/tesla/.cache/pypoetry/virtualenvs/graphenee-CXeG5cZ_-py3.9. You can delete venv in the project directory.
Note: if you would rather use a virtual env in the project directory, then delete /home/tesla/.cache/pypoetry/virtualenvs/graphenee-CXeG5cZ_-py3.9 and create one in the project directory with:
python -m venv .venv
After that install packages with poetry:
poetry install
Now poetry will use the local virtual env in /home/tesla/Documents/projects/graphql/graphenee/.venv when you run a command via poetry run [cmd].
QUESTION
I'm trying to use the Gmail API in Python to send email, but I can't get past importing the Google module despite using "pip install --upgrade google-api-python-client" or "pip install google".
However pip freeze shows:
asgiref==3.3.4
beautifulsoup4==4.10.0
cachetools==4.2.2
certifi==2021.5.30
cffi==1.14.6
charset-normalizer==2.0.6
dj-database-url==0.5.0
Django==3.2.3
django-ckeditor==6.1.0
django-ckeditor-5==0.0.14
django-heroku==0.3.1
django-js-asset==1.2.2
django-phone-field==1.8.1
django-tawkto==0.3
**google==3.0.0**
google-api-core==2.0.1
google-api-python-client==2.21.0
google-auth==2.1.0
google-auth-httplib2==0.1.0
google-auth-oauthlib==0.4.6
google-cloud==0.34.0
google-cloud-bigquery==2.26.0
google-cloud-core==2.0.0
google-cloud-storage==1.42.2
google-cloud-vision==2.4.2
google-crc32c==1.1.2
google-resumable-media==2.0.2
googleapis-common-protos==1.53.0
grpcio==1.40.0
gunicorn==20.1.0
httplib2==0.19.1
idna==3.2
oauthlib==3.1.1
packaging==21.0
Pillow==8.2.0
proto-plus==1.19.0
protobuf==3.18.0
psycopg2==2.8.6
pyasn1==0.4.8
pyasn1-modules==0.2.8
pycparser==2.20
pyparsing==2.4.7
python-decouple==3.4
pytz==2021.1
requests==2.26.0
requests-oauthlib==1.3.0
rsa==4.7.2
six==1.16.0
soupsieve==2.2.1
sqlparse==0.4.1
uritemplate==3.0.1
urllib3==1.26.6
whitenoise==5.2.0
my code:
from Google import Create_Service
import base64
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText
CLIENT_SECRET_FILE = 'client_secret.json'
API_NAME = 'gmail'
API_VERSION = 'v1'
SCOPES = ['https://mail.google.com/']
service = Create_Service(CLIENT_SECRET_FILE, API_NAME, API_VERSION, SCOPES)
Any help would be greatly appreciated
ANSWER
Answered 2021-Sep-20 at 10:55
Implicit relative imports are no longer supported, as documented:
There is no longer any implicit import machinery
So if Google.py is in the same directory as the code you pasted, you have to reference its relative location explicitly.
from .Google import Create_Service # Notice the dot (.)
Or it can also be an absolute path. Assuming this is a Django project, then it would be something like:
from my_proj.Google import Create_Service # This assumes that your file is in my_proj/my_proj/Google.py
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install asgiref
You can use asgiref like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
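For a sense of what the library standardises, here is a minimal ASGI application of the shape the specification describes (a sketch for illustration, not code shipped with asgiref); any ASGI server such as Daphne or Uvicorn can serve a callable like this:

# app.py - minimal ASGI HTTP application
async def application(scope, receive, send):
    # scope describes the connection; this sketch only handles plain HTTP.
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({
        "type": "http.response.body",
        "body": b"Hello from a minimal ASGI app",
    })

For example, daphne app:application would serve it locally (assuming Daphne is installed).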