uwsgi-docs | Official uWSGI docs , examples , tutorials , tips and tricks | Learning library
kandi X-RAY | uwsgi-docs Summary
Official uWSGI docs, examples, tutorials, tips and tricks
Top functions reviewed by kandi - BETA
- Translate a file into rst format
- Parse text
- Generate rst file
- Given a sequence of tokens, coalesce them together
- Change indent level
- Write string to stream
- Rewrite a wiki string
- Begins indentation
- End indentation
- Register a rule
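The function names above suggest a small wiki-to-reST conversion pipeline: parse text, register rewrite rules, apply them to wiki strings, and manage indentation while generating .rst output. As a rough illustration only, here is a minimal, hypothetical Python sketch of the rule-registration and rewrite idea; the names and rules are illustrative and not the repository's actual API.

import re

# Each rule maps a compiled wiki-markup pattern to a reST replacement.
_RULES = []

def register_rule(pattern, replacement):
    # Register a wiki-to-reST rewrite rule (applied in registration order).
    _RULES.append((re.compile(pattern), replacement))

def rewrite(text):
    # Apply every registered rule to a wiki string and return reST text.
    for pattern, replacement in _RULES:
        text = pattern.sub(replacement, text)
    return text

# Illustrative rules: '''bold''' -> **bold**, ''italic'' -> *italic*
register_rule(r"'''(.+?)'''", r"**\1**")
register_rule(r"''(.+?)''", r"*\1*")

print(rewrite("This is '''important''' and ''emphasized''."))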
uwsgi-docs Key Features
uwsgi-docs Examples and Code Snippets
Community Discussions
Trending Discussions on uwsgi-docs
QUESTION
I use supervisor and uwsgi to start my Django. Here is the conf in supervisor.
ANSWER
Answered 2022-Jan-14 at 07:22
Good day!
You are right, logformat handles only the logging of uwsgi itself.
If you want your Django app to log events, you will need to add two things:
- Update settings.py:
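A minimal sketch of what such a LOGGING section in settings.py often looks like; the handler name, log path, and levels below are assumptions for illustration, not the original answer's exact configuration.

# settings.py - illustrative LOGGING block (paths and names are assumptions)
LOGGING = {
    "version": 1,
    "disable_existing_loggers": False,
    "handlers": {
        "file": {
            "class": "logging.FileHandler",
            "filename": "/var/log/myapp/django.log",  # hypothetical path
        },
    },
    "loggers": {
        "django": {
            "handlers": ["file"],
            "level": "INFO",
            "propagate": True,
        },
    },
}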
QUESTION
So, I started working with uWSGI for my python application just two days ago and I'm trying to understand the various parameters we specify in an .ini file. This is what my app.ini file currently looks like:
ANSWER
Answered 2021-Oct-07 at 13:27
First of all, the number of cores is not necessarily the number of processors. In the early days of computing it was roughly one-to-one, but with modern improvements one processor can offer more than one core. (Check this: https://www.tomshardware.com/news/cpu-core-definition,37658.html). So, if it detected 12 cores, you can use that as the basis for your calculations.
WSGI processes
The number of processes means how many different parallel instances of your web application will be running on that server. uWSGI first creates a master process that will coordinate things. Then it bootstraps your application and creates N clones of it (fork). These forked child processes are isolated; they don't share resources. If one process gets unhealthy for any reason (e.g. I/O problems), it can terminate or even be voluntarily killed by the master process while the rest of the clones keep working, so your application is still up and running. When a process is terminated or killed, the master process can create another fresh clone to replace it (re-spawn).
It is OK to set the number of processes as a ratio of the available cores, but there's no benefit in increasing it too much. So, you definitely shouldn't set it to the limit (2784). Remember that the operating system will round-robin across all processes to give each one a chance to have some instructions processed. So, if it offers 12 cores and you create something like 1000 different processes, you are just putting stress on the system and you'll end up getting the same throughput (or even worse throughput, since there's so much chaos).
Number of threads inside a process
Then we move on to the number of threads. For the sake of simplicity, let's just say that the number of threads means the number of parallel requests each of these child processes can handle. While one thread is waiting for a database response to answer a request, another thread could be doing something else to answer another request.
You may say: why do I need multiple threads if I already have multiple processes?
A process is an expensive thing, but threads are just the way you can parallelize the workload a single process can handle. Imagine that a process is a Coffee Shop, and threads are the number of attendants you have inside. You can spread 10 different Coffee Shop units around the city. If one of them closes, there are still another 9 somewhere else the customer can go to. But each shop needs a number of attendants to serve people the best way possible.
How to set these numbers correctly
If you set just a single process with 100 threads, that means 100 is your concurrency limit. If at some point there are 101 concurrent requests to your application, that last one will have to wait for one of the first 100 to be finished. That's when you start to see increasing response times for some users. The more the requests are queued, the worse it gets (queuing theory).
Besides that, since you have a single process, if it crashes, all of those 100 requests will fail with a server error (500). So it's wiser to have more processes, let's say 4 processes handling 25 threads each. You still have the 100 concurrency limit, but your application is more resilient.
It's up to you to get to know your application's expected load so you can adjust these numbers properly. When you have external integrations like databases, you have to consider their limitations as well. Say you have a PostgreSQL server that can handle 100 simultaneous connections. If you have 10 WSGI processes with 40 threads each (and a connection pool of size 40 as well), then there's the possibility that you stress the database with 400 connections, and then you have a big problem, but that's not your case!
So, just use the suggested number of processes (12 * 2 = 24) and set as many threads as needed to offer the desired level of concurrency.
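As a rough illustration of the arithmetic above, a minimal Python sketch; the 2x multiplier and the thread count are tunable assumptions, not fixed rules.

import multiprocessing

cores = multiprocessing.cpu_count()   # e.g. 12 on the machine in question
processes = cores * 2                 # suggested starting point: 12 * 2 = 24
threads_per_process = 4               # pick based on the concurrency you need

# Total concurrent requests the pool can handle before requests start queuing.
max_concurrency = processes * threads_per_process
print(f"processes={processes}, threads={threads_per_process}, "
      f"max concurrent requests={max_concurrency}")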
If you don't know the expected load, I suggest you run some sort of performance test that can simulate requests to your application, so you can experiment with different loads and settings and check their side effects.
Extra: Containers
If you are running your application in a container orchestration platform, like Kubernetes, then you can probably have multiple load-balanced containers serving the same application. You can even make it dynamic, so that the number of containers increases if memory or processing goes beyond a threshold. That means that on top of all that WSGI fine-tuning for a single server, there are also other modern layers of configuration that can help you face peaks and high-load scenarios.
QUESTION
After reading uWSGI's documentation on reloading, my understanding was that, for an app that uses lazy-apps, writing w to uWSGI's master FIFO should trigger a restart of all workers (and hence activate changes in the Python code).
However, that doesn't seem to work for me. I need to restart the systemd service (systemctl restart myservice) for code changes to take effect. Am I misunderstanding the documentation, or is there an issue with my setup?
My myservice.service file looks like this:
ANSWER
Answered 2021-May-03 at 11:27
All I know about uWSGI is that it exists, but I noticed a mistake here:
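As background on the reload mechanism discussed in the question, writing the w command to the master FIFO (the path set by uWSGI's master-fifo option) can also be scripted; a minimal Python sketch, where the FIFO path is an assumption:

# Hypothetical FIFO path; use whatever your master-fifo option points at.
FIFO_PATH = "/run/myapp/master-fifo"

# Writing the single character "w" asks the uWSGI master to reload its workers.
with open(FIFO_PATH, "w") as fifo:
    fifo.write("w")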
QUESTION
I tried to follow the nginx document here: https://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html, but got stuck at "Configure nginx for your site". I restarted nginx and it said: nginx: [emerg] open() "/home/hanys/oligoweb/uwsgi_params" failed (13: Permission denied) in /etc/nginx/sites-enabled/oligoweb.conf:19
My site.ini:
ANSWER
Answered 2020-Dec-23 at 14:22
In general, uwsgi_params is already shipped with your Nginx, so all you need is include uwsgi_params (so that it refers to /etc/nginx/uwsgi_params or similar).
If that is not the case, you will likely also need to give Nginx enough permissions to read the directory structure that file is in, not just the file itself.
QUESTION
I have a Flask application that I run with uWSGI. My clients have access to the server where the app runs.
How can I protect or hide my source code?
Edit: I found that you can embed an app in uWSGI by building it from source, but that seems far-fetched.
...ANSWER
Answered 2020-Oct-30 at 13:45
True - if someone wants it badly enough, the only way to truly secure your algorithms is not to hand them out. But in reality, code is hard to understand anyway. Often, just not documenting code is enough to discourage people. There are some techniques, however, and your effort varies with how secure they are. Some approaches that come to mind:
Compile to bytecode: I've seen it done in the wild; there was a company that made a Python email client for Linux / Outlook. I recall that it was obfuscated through a compiled distribution. You'd have to research the proper tool.
Obfuscate at a per-script level: Check out the pyminifier tool. It can make your scripts pretty near impossible to read (but it can be reversed with reasonable effort)
Use an advanced obfuscator: Look at pyarmor. It is a lot more complex and will be harder to implement -- but it looks like it would get the job done.
Open source it. Seems counterintuitive, but algorithms are rarely the most valuable aspect of code. Having the skill, time, and resources to understand and maintain it is. It is very likely that it doesn't matter if anyone sees your code. If you are giving good service to your customers, they generally have much better things to do than take on your code base. There are plenty of enterprise companies making a living from open source software (e.g. Starburst, 2ndQuadrant).
(Example code obfuscated using pyminifier)
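For the compile-to-bytecode route, the standard library can handle the compilation step itself; a minimal sketch, with a hypothetical source directory, and keeping in mind that .pyc files can still be decompiled with some effort.

import compileall
import pathlib

SRC = pathlib.Path("myapp")  # hypothetical package directory

# Compile every module under SRC to .pyc files (placed in __pycache__ by default).
compileall.compile_dir(SRC, force=True, quiet=1)

# Distribute the compiled files and omit the .py sources.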
QUESTION
Versions I use:
uWSGI: 2.0.19.1 (64bit)
os: Linux-3.10.0-1062.4.1.el7.x86_64
I currently want to set up my vassal app with the uWSGI cheaper subsystem to handle the workers etc.
I decided to use the "spare2" algorithm, as explained in the uWSGI docs:
https://uwsgi-docs.readthedocs.io/en/latest/Cheaper.html?highlight=spare2#spare2-cheaper-algorithm
However, I get this message in my app log:
...ANSWER
Answered 2020-Sep-09 at 21:38
Yeah I ran into the same problem, debugging for hours why spare2 was behaving exactly like spare would, without noticing the log line saying that spare2 was unavailable.
Anyway, yes, the PyPI version of uwsgi is 2.0.x while the documentation and github code in master are 2.1.x. From what I'm reading, this difference has been around for quite some time.
The author of spare2 kindly backported the plugin to 2.0.x: https://github.com/KLab/uwsgi-cheaper-spare2.
I'm inclined to use the built-in busyness, but then, in 2.1.x the situation will reverse: spare2 is built-in and busyness is a plug-in.
QUESTION
Note: I am new to Django and its deployment.
I deployed Django through uwsgi and nginx according to the steps mentioned in this guide, except the emperor-vassal configuration and without any virtual environment.
Side note: The site comes up using
python3 manage.py 0.0.0.0:8800
But it seems that nginx is facing permission issues with the socket and giving a 502 Bad Gateway error in the browser.
The nginx error log shows the following error:
2020/07/08 21:05:40 [crit] 3943#3943: *3 connect() to unix:///home/ubuntu/deploymenttst/MySite/MySite.sock failed (13: Permission denied) while connecting to upstream, client: 192.168.12.12, server: 192.168.12.12, request: "GET / HTTP/1.1", upstream: "uwsgi://unix:///home/ubuntu/deploymenttst/MySite/MySite.sock:", host: "192.168.12.12:8400"
The configurations are as follows:
In the settings.py file of the project, the configurations are set as follows (apart from the default wsgi):
...
ANSWER
Answered 2020-Jul-09 at 10:20
Solved the issue by placing the project in the /tmp directory.
nginx, being run as the www-data user, was not able to access the internal directory MySite and thus the socket or the files placed there, despite these being assigned to the user www-data.
Now, my other question is regarding the cause of the permission issue for nginx: despite setting the uid and gid of the directory to www-data, what could have been the issue?
Note: My user named ubuntu is a sudoer.
QUESTION
I am aware of this question: Django Uploaded images not displayed in development. I have done everything that is described there, but still can't find a solution. I have also used these for reference: GeeksForGeeks and Uploaded Files and Upload Handlers - Django documentation; however, none of them solved my problem.
I have deployed a Django app on an Ubuntu server for the first time using Nginx and gunicorn. Before deployment, I used port 8000 to test if everything runs as it is supposed to, and all was fine. Since I allowed 'Nginx Full', my database images have not been showing up.
This is my django project structure:
My virtual environment folder and my main project folder are both in the same directory. I have separated them.
...ANSWER
Answered 2020-Jun-21 at 05:16
You may need to run the command
python manage.py collectstatic
from the shell of your platform.
If you are using Heroku, here is the command:
QUESTION
Can uWSGI be a web-server and application server at the same time?
For example, stand-alone WSGI containers: https://flask.palletsprojects.com/en/1.1.x/deploying/wsgi-standalone/. But again, it recommends using an HTTP server. Why? Can't uWSGI handle HTTP requests?
I have read different articles about deploying a Flask application. They say I'd need uWSGI and nginx, which is one popular option.
https://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html
https://flask.palletsprojects.com/en/1.1.x/deploying/uwsgi/#uwsgi
My Flask application, app_service.py:
...ANSWER
Answered 2020-Jun-03 at 19:11
For running Flask, you do not need nginx, just a web server of your choice, but life with nginx is just easier. If you are using Apache, you will want to consider using WSGI (e.g. mod_wsgi).
I remember reading somewhere in the Flask documentation what is stated by an answer to "Are a WSGI server and HTTP server required to serve a Flask app?" as
The answer is similar for "should I use a web server". WSGI servers happen to have HTTP servers but they will not be as good as a dedicated production HTTP server (Nginx, Apache, etc.).
The main idea behind this is the architectural principle of splitting layers to ease debugging and increase security, similar to the concept of splitting content and structure (HTML & CSS, UI vs. API):
- For the lower layers, see e.g. https://en.wikipedia.org/wiki/Transport_layer. Having a dedicated HTTP server allows you to do packet filtering etc. at that level.
- WSGI is the interface layer between the web server and the web framework.
I have seen clients running only a WSGI server alone, with integrated HTTP support. Using an additional web server and/or proxy is just good practice, but IMHO not strictly necessary.
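To illustrate the point that a WSGI server can serve HTTP on its own, here is a minimal, hypothetical Flask app (module and route names are assumptions, not the poster's actual app_service.py); uWSGI would typically be pointed at the app callable, for example via a module = app_service:app setting, with or without nginx in front.

# app_service.py - illustrative minimal Flask app (not the original poster's file)
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    # A trivial endpoint so the app has something to serve.
    return "Hello from Flask behind uWSGI"

if __name__ == "__main__":
    # Development only: Flask's built-in server, not for production use.
    app.run(host="127.0.0.1", port=5000)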
References
- https://flask.palletsprojects.com/en/1.1.x/deploying/mod_wsgi/ describes the Apache way for Flask
- https://flask.palletsprojects.com/en/1.1.x/tutorial/deploy/ elaborates on what a production environment should look like
- Deploying Python web app (Flask) in Windows Server (IIS) using FastCGI
- Debugging a Flask app running in Gunicorn
- Flask at first run: Do not use the development server in a production environment
QUESTION
I am trying to troubleshoot my uwsgi app not holding up under load. Keep in mind I'm quite new to app development, not to mention uwsgi itself.
I found a lot of examples of useful things to check (e.g. here or here), and uwsgi --help | grep "relevant-option-name" gave me good info on how to get the desired behaviour.
However, I couldn't find the default values used by uwsgi for options like --reload-on-rss or --max-requests. Where can I find them?
ANSWER
Answered 2020-Mar-26 at 00:16
Turns out default values are indicated in the help output when they exist, and can be accessed through uwsgi --help | grep 'default', which yields a list with e.g. things like
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install uwsgi-docs
You can use uwsgi-docs like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.