What makes Python an ideal choice for developing applications? It offers higher-level functions and data types than many other programming languages, along with easy, efficient ways to access and manipulate data. Python is used regularly in mainstream domains such as AI, data science, networking, gaming, and more.

Popular New Releases in Python

  • youtube-dl - youtube-dl 2021.12.17
  • models - TensorFlow Official Models 2.7.1
  • thefuck
  • transformers - v4.18.0: Checkpoint sharding, vision models
  • flask

Popular Libraries in Python

public-apis
by public-apis | Python | 184682 stars | MIT
A collective list of free APIs

system-design-primer
by donnemartin | Python | 143449 stars | NOASSERTION
Learn how to design large-scale systems. Prep for the system design interview. Includes Anki flashcards.

Python
by TheAlgorithms | Python | 117097 stars | MIT
All Algorithms implemented in Python

Python-100-Days
by jackfrued | Python | 114192 stars

Python - 100 Days from Novice to Master

youtube-dl
by ytdl-org | Python | 108335 stars | Unlicense
Command-line program to download videos from YouTube.com and other video sites

awesome-python
by vinta | Python | 102379 stars | NOASSERTION
A curated list of awesome Python frameworks, libraries, software and resources

models
by tensorflow | Python | 73392 stars | NOASSERTION
Models and examples built with TensorFlow

thefuck
by nvbn | Python | 65678 stars | MIT
Magnificent app which corrects your previous console command.

django
by django | Python | 63447 stars | NOASSERTION
The Web framework for perfectionists with deadlines.

Trending New libraries in Python

yolov5
by ultralytics | Python | 25236 stars | GPL-3.0
YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite

yt-dlp
by yt-dlp | Python | 22499 stars | Unlicense
A youtube-dl fork with additional features and fixes

MockingBird
by babysor | Python | 20425 stars | NOASSERTION

🚀 AI voice cloning: Clone a voice in 5 seconds to generate arbitrary speech in real-time

Depix
by beurtschipper | Python | 19784 stars | NOASSERTION
Recovers passwords from pixelized screenshots

PaddleOCR
by PaddlePaddle | Python | 19581 stars | Apache-2.0
Awesome multilingual OCR toolkits based on PaddlePaddle (practical ultra-lightweight OCR system, supporting recognition of 80+ languages, providing data annotation and synthesis tools, and supporting training and deployment on server, mobile, embedded and IoT devices)

GFPGAN
by TencentARC | Python | 17269 stars | NOASSERTION
GFPGAN aims at developing Practical Algorithms for Real-world Face Restoration.

copilot-docs
by github | Python | 16816 stars | CC-BY-4.0
Documentation for GitHub Copilot

diagrams
by mingrammer | Python | 16552 stars | MIT
:art: Diagram as Code for prototyping cloud system architectures

jina
by jina-ai | Python | 14316 stars | Apache-2.0
Cloud-native neural search framework for any kind of data

Top Authors in Python

1. crowdbotics-apps: 4429 Libraries, 28 stars
2. hail-ci-test: 3923 Libraries, 0 stars
3. 2232 Libraries, 604 stars
4. collective: 1263 Libraries, 3038 stars
5. crowdbotics-dev: 723 Libraries, 0 stars
6. biomodels: 690 Libraries, 1 star
7. codelivespeed: 628 Libraries, 0 stars
8. aws-samples: 627 Libraries, 23550 stars
9. openstack: 550 Libraries, 33409 stars
10. PacktPublishing: 512 Libraries, 20636 stars


Trending Kits in Python


Python is an object-oriented programming language that can do almost anything other languages can do, at comparable speeds. This kit has some simple exercises in Python to help someone new to programming learn Python and get started on their journey.

For a detailed tutorial on installing & executing the solution as well as learning resources including training & certification opportunities, please visit the OpenWeaver Community

Python Repositories with Basic Example Exercises

Basic Python CLI programs as examples. This list has programs useful for beginners as well as for those ready to move to an advanced level.

Some Python Games for Practice

Support

If you need help using this kit, you may reach us at the OpenWeaver Community.

kandi 1-Click Install


The AI fake news detector identifies fake news through binary classification, helping to control the flow of disinformation. It is built on top of several powerful machine learning libraries. The tool works by training a neural network to spot fake articles based on their text content. When you run your own data through the tool, it returns a list of articles that it thinks are likely to be fake; you can then train the model further or decide whether those results are acceptable. In addition to identifying fake news, the model can also be trained to identify real news, which lets you compare the model's performance across different domains (e.g., politics vs. sports).

The following installer and deployment instructions walk you through creating an AI fake news detector using fakenews-detection, Jupyter, VSCode, and pandas. We use fake news detection libraries (with fully modifiable source code) to customize and build a simple classifier that can detect fake news articles. The kandi kit provides a fully deployable AI Fake News Detector, with source code included so you can customize it for your requirements.

With this kit, you can

1. Use a pre-trained model for detecting fake news.

2. Train the model on your custom dataset.

3. Expose the fake news detection as an API


Add-on examples are also included, as given below:

1. Use web scraper to automatically make your training dataset.

2. Visualise training and prediction data for useful insights.

Instructions to Run

Follow the below instructions to run the solution.


1. Locate and open the FakeNewsDetection-starter.ipynb notebook from the Jupyter Notebook browser window.

2. Execute cells in the notebook by selecting Cell --> Run All from the Menu bar

3. Once all the cells of the notebook are executed, the prediction result will be written to the file 'fake_news_test_output.csv'


Training with your dataset:

1. Add news articles to a csv file under a column name 'news_text'.

2. Add corresponding labels as 'real' or 'fake' denoting whether a news article is real or not.

3. You can refer to the file 'fake_news_train.csv' for an example.

4. Set the variable for training file in the notebook under Variables section.


Testing with your dataset:

1. Add news articles to a csv file under a column name 'news_text'.

2. You can refer to the file 'fake_news_test.csv' for an example.

3. Set the variable for testing file in the notebook under Variables section.
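As a rough illustration of what happens inside the notebook (not the kit's actual model), here is a minimal sketch of a text classifier built with pandas and scikit-learn. The 'label' column name is an assumption, since the kit only specifies the 'news_text' column:

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load the training data; 'label' is an assumed column name holding 'real'/'fake'
df = pd.read_csv('fake_news_train.csv')
X_train, X_val, y_train, y_val = train_test_split(
    df['news_text'], df['label'], test_size=0.2, random_state=42)

# TF-IDF features plus a simple linear classifier
vectorizer = TfidfVectorizer(stop_words='english', max_df=0.7)
clf = LogisticRegression(max_iter=1000)
clf.fit(vectorizer.fit_transform(X_train), y_train)
print('validation accuracy:', accuracy_score(y_val, clf.predict(vectorizer.transform(X_val))))

# Mirror the kit's output step: predict on the test file and write a CSV
test_df = pd.read_csv('fake_news_test.csv')
test_df['prediction'] = clf.predict(vectorizer.transform(test_df['news_text']))
test_df.to_csv('fake_news_test_output.csv', index=False)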


You can execute the cells of the notebook by selecting Cell from the menu bar.


For any support, you can reach us at FAQ & Support

Libraries useful for this solution

Development Environment

VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web based interactive environment often used for experiments, whereas VSCode is used to get a typical experience of IDE for developers. Jupyter Notebook is used for our development.

Exploratory Data Analysis

For extensive analysis and exploration of data, and to deal with arrays, these libraries are used. They are also used for performing scientific computation and data manipulation.

Text mining

Libraries in this group are used for analysis and processing of unstructured natural language. The data isn't used in its original form; it has to go through a processing pipeline to become suitable for applying machine learning techniques and algorithms.

Machine Learning

Machine learning libraries and frameworks here are helpful in providing state-of-the-art solutions using Machine learning.

Data Visualization

Patterns and relationships are identified by representing data visually; the libraries below are used for generating visual plots of the data.

Troubleshooting

  1. If you encounter any error related to MS Visual C++, please install MS Visual Build Tools.
  2. While running the batch file, if you encounter a Windows protection alert, select More info --> Run anyway.
  3. During kit installation, if you encounter a Windows security alert, click Allow.
  4. If you encounter a Memory Error, check that the available memory is sufficient and proportional to the size of the data being used. For our dataset, the minimum required memory is 8GB.


If your computer doesn't support standard commands from Windows 10, you can follow the instructions below to finish the kit installation.

  1. Install Python
  2. Download the repository
  3. Extract the zip file and navigate to the directory 'fakenews-detection-main'
  4. Open a terminal in the extracted directory 'fakenews-detection-main'
  5. Install dependencies by executing the command 'pip install -r requirements.txt'
  6. Run the command 'jupyter notebook' and select the notebook 'FakeNewsDetection-starter.ipynb' on the browser window.

Support

For any support, you can reach us at FAQ & Support

kandi 1-Click Install


Deepfake detection is identifying manipulated or synthetic media content using machine learning algorithms and computer vision techniques. It detects anomalies in facial and body movements, and other visual artifacts.


In this kit, we build a Deepfake Detection Engine using the popular facenet_pytorch, a Python library that provides implementations of deep learning models for face recognition tasks. It includes pre-trained models such as:

  1. MTCNN (Multi-Task Cascaded Convolutional Networks) for face detection and alignment, and
  2. InceptionResnetV1, used here to classify whether a face image is fake or real.


We use these two models to detect and recognize faces in images with high accuracy. The library is built on top of PyTorch, a popular open-source machine learning framework, and provides an easy-to-use API for face recognition tasks.
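As a rough sketch of how the two models fit together (not the kit's actual code): the classification head below is an assumption and would need fine-tuning on real/fake data before the probabilities mean anything.

import torch
from PIL import Image
from facenet_pytorch import MTCNN, InceptionResnetV1

mtcnn = MTCNN(image_size=160)  # face detection and alignment
# classify=True swaps the embedding head for a classification head;
# with num_classes=2 it must first be fine-tuned on real/fake examples
model = InceptionResnetV1(pretrained='vggface2', classify=True, num_classes=2).eval()

img = Image.open('suspect_frame.jpg')  # hypothetical input image
face = mtcnn(img)                      # cropped, aligned face tensor, or None
if face is not None:
    with torch.no_grad():
        probs = torch.softmax(model(face.unsqueeze(0)), dim=1)
    print('class probabilities (real vs. fake):', probs.squeeze().tolist())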

Libraries used in this solution


Development Environment


VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web based interactive environment often used for experiments, whereas VSCode is used to get a typical experience of IDE for developers.


Jupyter Notebook is used for our development.

Machine Learning


Machine learning libraries and frameworks here are helpful in providing state-of-the-art solutions using Machine learning

Kit Solution Source


API Integration

Support


For any support, you can reach us at OpenWeaver Community Support

kandi 1-Click Install



Generative artificial intelligence (AI) describes algorithms that help in creating/generating new content, including audio, code, images, text and videos. 

 

In this kit, we build a real-time Voice-to-Image Generator using the concept of Generative AI. It is carried out in two steps:

 

  • Voice-to-text conversion: The speech is captured in real time through the microphone and converted to text using OpenAI's state-of-the-art open-source Whisper models.

 

  • Text-to-image generation: The converted text is provided as input to state-of-the-art image generation models like DALL·E 2, which generate the image.
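As a rough sketch of the two steps: microphone capture is omitted (we assume the speech was already saved to speech.wav), and the image call uses the classic openai<1.0 interface, so details may differ from the kit.

import openai
import whisper

openai.api_key = 'YOUR_API_KEY'  # DALL·E 2 access requires an OpenAI key

# Step 1: voice to text (assumes microphone audio was saved to speech.wav)
stt = whisper.load_model('base')
text = stt.transcribe('speech.wav')['text']
print('Transcribed prompt:', text)

# Step 2: text to image with DALL·E 2 (classic openai<1.0 Image API)
response = openai.Image.create(prompt=text, n=1, size='512x512')
print('Generated image URL:', response['data'][0]['url'])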

Libraries used in this solution


Development Environment


VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web based interactive environment often used for experiments, whereas VSCode is used to get a typical experience of IDE for developers.


Jupyter Notebook is used for our development.

Machine Learning


Machine learning libraries and frameworks here are helpful in providing state-of-the-art solutions using Machine learning

Kit Solution Source

UI App Integration

Support


For any support, you can reach us at OpenWeaver Community Support

kandi 1-Click Install


This Predictive Analytics kit provides an analytical view of students' performance in mathematics and predicts the grade each student will score in the final test.


The key features of this solution are:


  • Analysis of grades of students
  • Visualisation of patterns
  • Prediction of grade in the final test
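As a rough illustration of the prediction step (not the kit's actual code), here is a minimal sketch using pandas and scikit-learn; the CSV file and column names are hypothetical:

import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# 'student_grades.csv' and its column names are hypothetical
df = pd.read_csv('student_grades.csv')
X = df[['test1_score', 'test2_score']]   # earlier math test scores
y = df['final_score']                    # grade in the final test

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LinearRegression().fit(X_train, y_train)
print('R^2 on held-out students:', model.score(X_test, y_test))
print('Predicted final grade:',
      model.predict(pd.DataFrame([[72, 85]], columns=X.columns))[0])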

For a detailed tutorial on installing & executing the solution as well as learning resources including training & certification opportunities, please visit the OpenWeaver Community

Development Environment

VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web based interactive environment often used for experiments, whereas VSCode is used to get a typical experience of IDE for developers. Jupyter Notebook is used for our development.

Data Mining

Our solution integrates data from various sources, and we have used the libraries below for exploring patterns in the data and understanding correlations between the features.

Data Visualisation

Patterns and relationships are identified by representing data visually; the libraries below are used for that.

Machine learning

The libraries and model collections below help create the machine learning models for the core prediction use case in our solution.

Support

If you need help using this kit, you may reach us at the OpenWeaver Community.

kandi 1-Click Install

The AI Course Recommender System provides personalized recommendations based on a user's interests, the courses they can take, and their current knowledge. The recommended courses draw on the user's profile, an analysis of students' grades, visualization of patterns, prediction of the grade in the final test, and rules set by the instructor; for example, an analytical view of a student's performance in mathematics indicates whether that student should consider math for higher education. Using machine learning algorithms, we can train a model on a set of data and then predict ratings for new items. This is all done in Python using NumPy, pandas, Matplotlib, scikit-learn, and seaborn. The kandi kit provides a fully deployable AI Course Recommender System, with source code included so you can customize it for your requirements.

Development Environment

VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web based interactive environment often used for experiments, whereas VSCode is used to get a typical experience of IDE for developers.

Data Mining

Our solution integrates data from various sources, and we have used the libraries below for exploring patterns in the data and understanding correlations between the features.

Data Visualisation

Patterns and relationships are identified by representing data visually; the libraries below are used for that.

Machine learning

The libraries and model collections below help create the machine learning models for the core prediction use case in our solution.


Federated Learning can train machine learning models on data from different hospitals, banks and autonomous vehicles without sharing sensitive data. But how do you create a Federated learning application? The answer is the kandi 1-click solution kit for Credit-risk-federated-learning.


Federated Learning can be applied in the credit risk scenario to improve credit risk models' accuracy without compromising customer data privacy.


Banks collect and centralize customer data to train their credit risk models in the traditional approach. However, this approach can be challenging due to regulatory compliance, data privacy, and security concerns. Federated Learning addresses these challenges by allowing banks to train their credit risk models on customer data without transferring it to a centralized location.


This fully editable source code builds your Credit risk federated learning in minutes. The entire solution is available as a package to download from the source code repository.


Federated Learning in credit risk scenarios can have several benefits, including:


  • Improved accuracy: Federated Learning allows banks to train models on a larger and more diverse dataset, leading to better accuracy.
  • Data privacy: Federated Learning ensures that sensitive customer data is kept private and secure, which is critical in the context of credit risk.
  • Regulatory compliance: Federated Learning can help banks comply with regulations around data privacy and security.

Troubleshooting


  1. Install the Microsoft Visual C++ Redistributable for Visual Studio 2022 in case the kit doesn't run successfully on your Windows system.
  2. If step 1 doesn't solve your issue, set up Microsoft Build Tools.

For a detailed tutorial on installing & executing the solution as well as learning resources including training & certification opportunities, please visit the OpenWeaver Community

Development Environment

VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web based interactive environment often used for experiments, whereas VSCode is used to get a typical experience of IDE for developers. Jupyter Notebook is used for our development.

Data Pre-processing

Numpy and Pandas are powerful tools for data preprocessing in machine learning. They provide tools for handling missing data, feature scaling, one-hot encoding, data normalization, and transformation.

These tools can help you to prepare your data for machine learning and improve the performance of your models.

Machine learning

Scikit-learn is a powerful and versatile machine learning library in Python that provides a wide range of tools and algorithms for building and training machine learning models. It is widely used in academia and industry for various machine learning applications.

Federated Learning Framework

Flower is an open-source framework for Federated Learning that provides tools and APIs to simplify the development and deployment of Federated Learning models. Flower is designed to make it easier for developers to implement Federated Learning in their applications by providing a flexible and scalable platform for building and training models.
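As a rough sketch of what a participating bank's Flower client could look like: the model here is a placeholder weight vector rather than a real credit-risk model, and the exact flwr API varies slightly between versions.

import flwr as fl
import numpy as np

class CreditRiskClient(fl.client.NumPyClient):
    """One participating bank; its raw customer data never leaves this process."""

    def __init__(self):
        self.weights = np.zeros(10)  # placeholder model parameters

    def get_parameters(self, config):
        return [self.weights]

    def fit(self, parameters, config):
        self.weights = parameters[0]
        # ...train on the bank's local credit data here...
        return [self.weights], 100, {}        # updated params, #examples, metrics

    def evaluate(self, parameters, config):
        return 0.5, 100, {"accuracy": 0.8}    # loss, #examples, metrics

# Each bank runs a client; a separate process runs the Flower server
fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=CreditRiskClient())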

Kit Solution Source

Support

If you need help using this kit, you may reach us at the OpenWeaver Community.

kandi 1-Click Install


Large Language Models are foundation models that utilize deep learning for natural language processing and natural language generation tasks. Typically these models have billions of parameters and are trained on a huge corpus of data.


GPT4All provides an ecosystem of open-source chatbots trained on a massive collection of clean assistant data, including code, stories, and dialogue. GPT4All is a 7B-parameter LLM fine-tuned with a Low-Rank Adaptation (LoRA) method on a vast curated corpus of over 800k high-quality assistant interactions, yielding 430k post-processed instances.


In this kit, we will use GPT4All to create a content generator, similar to ChatGPT, without needing API keys or an Internet connection to create content.
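As a rough sketch of what that looks like with the gpt4all Python bindings: the model filename below is an assumption (any model from the GPT4All catalog works), and the weights are downloaded once, after which generation runs fully offline.

from gpt4all import GPT4All

# Hypothetical model file; downloaded on first use, then runs locally
model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')
with model.chat_session():
    print(model.generate('Write a short blog intro about open-source LLMs.',
                         max_tokens=200))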

Libraries used in this solution

Development Environment


VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web based interactive environment often used for experiments, whereas VSCode is used to get a typical experience of IDE for developers.


Jupyter Notebook is used for our development.

Machine Learning


Machine learning libraries and frameworks here are helpful in providing state-of-the-art solutions using Machine learning

Kit Solution Source


API Integration


Support


For any support, you can reach us at OpenWeaver Community Support

kandi 1-Click Install


Angry Birds is a Finnish action-based media franchise created by Rovio Entertainment.


The game series focuses on a flock of birds of the same name who try to save their eggs from green pigs. This Angry Birds game is written in Python using Pygame and Pymunk, open-source modules specifically intended to help you make games and other multimedia applications. Pygame can load background images, sounds, and buttons, which makes UI interactions more efficient. Pymunk is best when you need 2D physics from Python, for demos or simulations; it is built on top of the 2D physics library Chipmunk.
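For a feel of how the two libraries divide the work (Pymunk steps the physics, Pygame draws), here is a minimal, self-contained bouncing-ball sketch; it is an illustration, not taken from the kit's source.

import pygame
import pymunk

pygame.init()
screen = pygame.display.set_mode((640, 480))
clock = pygame.time.Clock()

space = pymunk.Space()
space.gravity = (0, 900)  # pixels/s^2, downward in screen coordinates

# A dynamic ball
body = pymunk.Body(1, pymunk.moment_for_circle(1, 0, 15))
body.position = (100, 50)
ball = pymunk.Circle(body, 15)
ball.elasticity = 0.8
space.add(body, ball)

# A static floor to bounce on
floor = pymunk.Segment(space.static_body, (0, 460), (640, 460), 5)
floor.elasticity = 0.8
space.add(floor)

running = True
while running:
    for event in pygame.event.get():
        if event.type == pygame.QUIT:
            running = False
    space.step(1 / 60)           # advance the physics simulation
    screen.fill((255, 255, 255))
    pygame.draw.circle(screen, (200, 0, 0),
                       (int(body.position.x), int(body.position.y)), 15)
    pygame.display.flip()
    clock.tick(60)
pygame.quit()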

Troubleshooting

  1. While running the batch file, if you encounter a Windows protection alert, select More info --> Run anyway
  2. During kit installation, if you encounter a Windows security alert, click Allow

For a detailed tutorial on installing & executing the solution as well as learning resources including training & certification opportunities, please visit the OpenWeaver Community

Development Environment

VSCode is used for development and debugging, and provides a typical IDE experience for developers.

Gaming Libraries

Pygame helps in providing computer graphics and audio libraries.

Pymunk is an easy-to-use pythonic 2d physics library that can be used whenever you need 2d rigid body physics from Python.

Support

If you need help using this kit, you may reach us at the OpenWeaver Community.

kandi 1-Click Install


A Tower Defense Game between Humans and Aliens. Kill as many aliens as you can to upgrade and unlock new humans.

Development Environment

VSCode is used for development and debugging, and provides a typical IDE experience for developers.

Gaming Libraries

Pygame helps in providing computer graphics and audio libraries.


Pymunk is an easy-to-use pythonic 2d physics library that can be used whenever you need 2d rigid body physics from Python.

Troubleshooting

  1. While running the batch file, if you encounter a Windows protection alert, select More info --> Run anyway
  2. During kit installation, if you encounter a Windows security alert, click Allow

Support

If you need help using this kit, you can email us at kandi.support@openweaver.com or direct message us on Twitter @OpenWeaverInc.

kandi 1-Click Install


Bias is prevalent in every aspect of our lives. Our brains are hardwired to categorize things we encounter in order to make sense of the complicated world around us. However, biases can cause us to form prejudices against others, which allows for egregious inequalities to form between different demographics.


While bias comes in many forms, biased words in writing are one of them. Implicit bias in letter writing or evaluations negatively affects individuals at every stage of their career.


In this challenge, we invite you to build a solution for detecting bias, with respect to gender and race, in writings such as letters of recommendation, job descriptions, etc., to promote equity. You can choose any topic of your choice. The sample solution kit helps detect gender bias.

Instructions to Run

Follow the instructions below to run the solution.

  1. Locate and open the gender-bias.ipynb notebook from the Jupyter Notebook browser window.
  2. Execute cells in the notebook by selecting Cell --> Run All from the Menu bar


For running it with your text:

  1. Open the letterofRecW file under data/input (relative to the gender-bias.ipynb location).
  2. Update the text in the letterofRecW file.
  3. Execute cells in the notebook by selecting Cell --> Run All from the Menu bar.
  4. Output will be stored in the file gender-biased-words.txt under data/output, in JSON format with the fields: name - the detector name (e.g., "Terms biased towards women"); summary - a summary of the detected bias; flags - the detected biased words (e.g., "leader"). You can additionally create your own detectors for race, add dictionary datasets, and build other enhancements for additional score.
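For illustration, here is a minimal, hypothetical sketch of the wordlist-based flagging such a detector performs; the term lists below are made up, and the kit's actual detectors are more elaborate.

import json

# Hypothetical term lists; the kit ships its own, larger dictionaries
MALE_CODED = {'leader', 'ambitious', 'assertive'}

def detect_bias(text):
    words = {w.strip('.,;').lower() for w in text.split()}
    found = sorted(words & MALE_CODED)
    return {
        'name': 'Terms biased towards men',
        'summary': f'{len(found)} male-coded term(s) found',
        'flags': found,
    }

with open('data/input/letterofRecW') as f:
    result = detect_bias(f.read())
with open('data/output/gender-biased-words.txt', 'w') as f:
    json.dump(result, f, indent=2)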

Troubleshooting

  1. While running the batch file, if you encounter a Windows protection alert, select More info --> Run anyway
  2. During kit installation, if you encounter a Windows security alert, click Allow

For a detailed tutorial on installing & executing the solution as well as learning resources including training & certification opportunities, please visit the OpenWeaver Community

Development Environment

VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web based interactive environment often used for experiments, whereas VSCode is used to get a typical experience of IDE for developers. Jupyter Notebook is used for our development.

Text Mining

Libraries in this group are used for analysis and processing of unstructured natural language. The data isn't used in its original form; it has to go through a processing pipeline to become suitable for applying machine learning techniques and algorithms.

Testing

The libraries listed here can be used for unit testing as well as integration testing

Support

If you need help using this kit, you may reach us at the OpenWeaver Community.

kandi 1-Click Install


The next word predictor is an exciting feature that helps you type faster on your mobile phone. It predicts the next word in the context you want to type. It is a very useful tool for people who type often and make mistakes while typing. It can be leveraged for auto-suggestion features in messenger and search engine apps.


The next word predictor makes it easier for readers to understand exactly what you are trying to convey.

  • Next word predictor is a very useful feature as it increases the readability of your content as well as makes it more understandable for readers.
  • Saves time by reducing the number of typos and grammatical errors in your content.
  • Modify source code to customize as per your requirements.

Instructions to Run

Follow the below instructions to run the solution.

  1. Locate and open the 'Next Word Predictor.ipynb' notebook from the Jupyter Notebook browser window.
  2. Execute cells in the notebook by selecting Cell --> Run All from the Menu bar.
  3. Once all the cells of the notebook are executed, the last interactive cell (Customisation) will be active; there you can provide the input data, or set the input text in the variable 'text_seq' under the Variables section.


Input

text_seq = "I'm gonna make him an offer he can't"


Output

['refuse', 'resist', 'take', 'deny', 'get']
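The kit's exact model isn't specified, but as a rough sketch, top next-word candidates like these can be produced with a pre-trained GPT-2 from Hugging Face transformers (the library family listed under Machine Learning below):

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2').eval()

text_seq = "I'm gonna make him an offer he can't"
inputs = tokenizer(text_seq, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits

# Top 5 candidate tokens for the next position
top5 = torch.topk(logits[0, -1], 5).indices
print([tokenizer.decode([int(t)]).strip() for t in top5])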

Troubleshooting

  1. If you encounter any error related to MS Visual C++, please install MS Visual Build tools
  2. While running the batch file, if you encounter a Windows protection alert, select More info --> Run anyway.
  3. During kit installation, if you encounter a Windows security alert, click Allow.
  4. If you encounter a Memory Error, check that the available memory is sufficient and proportional to the size of the data being used. For our dataset, the minimum required memory is 8GB.


If your computer doesn't support standard commands from windows 10, you can follow the instructions below to finish the kit installation.

  1. Install python
  2. Download the repository
  3. Extract the zip file and navigate to the directory 'next-word-prediction-main'
  4. Open terminal in the extracted directory 'next-word-prediction-main'
  5. Install dependencies by executing the command 'pip install -r requirements.txt'
  6. Run the command ‘jupyter notebook’ and select the notebook ‘Next Word Predictor.ipynb’ on the browser window.

For a detailed tutorial on installing & executing the solution as well as learning resources including training & certification opportunities, please visit the OpenWeaver Community

Development Environment

VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web-based interactive environment often used for experiments, whereas VSCode is used to get a typical experience of IDE for developers. Jupyter Notebook is used for our development.

Exploratory Data Analysis

For extensive analysis and exploration of data, and to deal with arrays, these libraries are used. They are also used for performing scientific computation and data manipulation.

Text Mining

Libraries in this group are used for analysis and processing of unstructured natural language.

Machine Learning

The library offers state-of-the-art pre-trained models for Natural Language Processing (NLP).

Support

If you need help using this kit, you may reach us at the OpenWeaver Community.

kandi 1-Click Install


AI-powered emoji detectors can help businesses increase engagement with their customers and build strong relationships with them. An emoji detector helps you analyze your audience and their preferences so that you can deliver the right content. You can also use the technology to provide customer support through customized answers.


One of the most important aspects of the AI-Powered Emoji Detector is that it detects emotions and expressions on your face, or hand gestures, from a web camera. It detects whether you are happy, sad, or angry, and so on, and can predict different kinds of expressions like happiness, fear, and sadness.

For a detailed tutorial on installing & executing the solution as well as learning resources including training & certification opportunities, please visit the OpenWeaver Community

Development Environment

VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web based interactive environment often used for experiments, whereas VSCode is used to get a typical experience of IDE for developers. Jupyter Notebook is used for our development.

Image Preparation and Processing

These libraries help in preparing data by annotating and labelling images, and in processing images for running machine learning algorithms. We use the OpenCV library to capture frames from a live streaming webcam.
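As a minimal sketch of the OpenCV capture loop such a detector is built around (the expression/gesture model itself is omitted):

import cv2

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # ...run the expression/gesture model on `frame` here...
    cv2.imshow('Emoji Detector', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()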

Data Analysis/Manipulation

These libraries help in analyzing data and doing data manipulations.

Machine Learning

The libraries and model collections below help create the machine learning models for the core recognition use cases in our solution.

Utilities

The utility library below helps store huge amounts of numerical data and manipulate that data easily from NumPy.

Support

If you need help using this kit, you may reach us at the OpenWeaver Community.

kandi 1-Click Install


A real-time object tracking system is a technology used to track objects in real time, for security or commercial purposes. Tracking can be done on video files and live webcam streams.


The real-time object tracking system has many applications, such as in retail stores, airports, stadiums and other places where security is important. The system can be used to monitor customer activity in stores, track inventory and detect shoplifting. It can also be used to increase safety in public places by monitoring the movements of pedestrians or vehicles.

For a detailed tutorial on installing & executing the solution as well as learning resources including training & certification opportunities, please visit the OpenWeaver Community

Development Environment

VSCode and Jupyter Notebook can be used for development and debugging. Jupyter Notebook is a web-based interactive environment often used for experiments, whereas VSCode is used to get a typical experience of IDE for developers.

Object Detection and Tracking

The following libraries have a set of pre-trained models which could be used to identify objects and track them from live streaming videos.
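As a rough sketch of single-object tracking with OpenCV: the CSRT tracker requires the opencv-contrib-python package, and the video filename is hypothetical.

import cv2

cap = cv2.VideoCapture('traffic.mp4')  # hypothetical video; use 0 for a webcam
ok, frame = cap.read()
bbox = cv2.selectROI('Select object', frame)  # draw a box around the object
tracker = cv2.TrackerCSRT_create()            # requires opencv-contrib-python
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, bbox = tracker.update(frame)
    if ok:
        x, y, w, h = [int(v) for v in bbox]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('Tracking', frame)
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()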

Machine Learning Libraries

The following libraries could be used to create machine learning models which focus on vision, data extraction, image processing, and more, making them handy for users.

Support

If you need help using this kit, you may reach us at the OpenWeaver Community.

kandi 1-Click Install


Disease predictor is a way to assess a patient's health by applying data mining and machine learning techniques to patient treatment history.


It uses symptoms and diagnoses for personalized healthcare services from a predictive-analytics perspective. The pandas library is used in this kandi kit to predict the probability of disease: pandas to load datasets and visualize the data, NumPy to implement the algorithm, and sklearn-pandas to build the model.


In this project we will use pandas and scikit-learn to create a model that predicts whether or not a patient has a disease based on their demographics and lab results. We will also use Jupyter Notebook to write code interactively, so that we can see how the model performs as we change parameters such as the number of features, the amount of training data, etc.


kandi kit provides you with a fully deployable Disease Predictor. Source code included so that you can customize it for your requirement.
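As a rough illustration of that modeling step (not the kit's actual code), a minimal sketch assuming a hypothetical patients.csv with a has_disease label column:

import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# 'patients.csv' and its columns are hypothetical
df = pd.read_csv('patients.csv')
X = df.drop(columns=['has_disease'])   # demographics and lab results
y = df['has_disease']

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print('held-out accuracy:', model.score(X_test, y_test))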

Development Environment

VSCode and Jupyter Notebook are used for development and debugging. Jupyter Notebook is a web based interactive environment often used for experiments, whereas VSCode is used to get a typical experience of IDE for developers. Jupyter Notebook is used for our development.

Exploratory Data Analysis

For extensive analysis and exploration of data, and to deal with arrays, these libraries are used. They are also used for performing scientific computation and data manipulation.

Data Visualization

Patterns and relationships are identified by representing data visually; the libraries below are used for generating visual plots of the data.

Support

If you need help using this kit, you can email us at kandi.support@openweaver.com or direct message us on Twitter @OpenWeaverInc.

kandi 1-Click Install


Super Mario is frequently cited as one of the greatest video games of all time. This kit is a classic remake of Super Mario Bros. developed in python. This kit includes only level 1-1.

Development Environment

VSCode is used for development and debugging, and provides a typical IDE experience for developers.

Gaming Libraries

Pygame helps in providing computer graphics and audio libraries. PyTMX helps to load maps for games.

Support

If you need help using this kit, you can email us at kandi.support@openweaver.com or direct message us on Twitter @OpenWeaverInc.

kandi 1-Click Install


Dilbert was dropped from hundreds of newspapers over Scott Adams’ racist comments. Multiple researchers have documented over the past few months how ChatGPT can be prompted to provide racist responses.


A three-decade globally famous comic strip has been canceled because of the creator’s racist comments in his YouTube show. ChatGPT, Bing Bot, and many such AI Bots are conversing with millions of users daily and have been documented to provide misleading, inaccurate, and biased responses. How can we hold AI to the same high standards we expect from society, especially when AI is now generative and scaled for global consumer use?



While no silver bullet exists, multiple aspects can make AI more responsible. Having open AI models is a great start. Hugging Face, EleutherAI, and many others are championing an open approach to AI. Openness and collaboration can bring in diverse contributions, reviews, and rigorous testing of AI models and help reduce bias.


NIST's recently released AI risk management guidelines provide a comprehensive view across the AI lifecycle: collecting and processing data and input, building and validating the AI model, and deploying and monitoring it in the context of usage. Acknowledging the possibility of bias, eliminating data-capture biases or unconscious biases when generating synthetic data, designing for counterfactual fairness, and human-in-the-loop designs can all reduce the risk of bias.

Use the below tools for assessment and to improve the fairness and robustness of your models.



Use the below tools for Explainability, Interpretability, and Monitoring.



Google provides toolkits on TensorFlow for Privacy, Federated Learning, and Explainability.


Using Python animation libraries, we can create almost any animation by choosing a suitable library, or combination of libraries, for the task. For developers looking for options with less complex code and maximum customization, these libraries let users tailor their plots and designs to their preferences.


Choosing a suitable library plays a key role in any machine learning or data science project, and doing it properly avoids related issues later. Some libraries offer interactive plots that invite you to play with the graph and visualize data in unique ways; others let you edit videos, create animations, or build map and geological animations for analyzing geospatial data.


Here is a list of 18 handpicked best Python animation libraries in 2023, which will help you with your animation requirements:

manim - 3b1b

  • Is a Python library for creating mathematical animations and educational videos, developed by Grant Sanderson.
  • Is an open source library that allows users to create high-quality animations which visualize mathematical concepts like animations of graphs, functions, fractals, and more.   
  • Uses Python code for creating animations which we can export to animated GIFs or video files. 

PythonRobotics

  • Is a Python library for implementing different robotics simulations, visualizations, and algorithms. 
  • Offers various resources and tools for robotics developers, like algorithms for path planning, localization, motion control, mapping, and many more.   
  • Includes simulation environments like 2D and 3D simulators, that will allow developers to test their algorithms in virtual environments before deploying them on real robots.   

matplotlib

  • Is a comprehensive library for creating static, animated, and interactive visualizations in Python.
  • Produces publication-quality figures in different interactive environments and hardcopy formats across platforms. 
  • Can be used in Python Scripts, web application servers, various graphical user interface toolkits, and Python/IPython shells. 

manim - ManimCommunity

  • Is an animation engine for explanatory math videos used for programmatically creating precise animations. 
  • Includes various tools for creating animations like support for vector graphics, 3D objects, and complex mathematical equations. 
  • Also includes features for creating animations with custom fonts, styles, and colors. 

plotly.py 

  • Is a Python library used to create interactive data visualizations, built on the plotly JavaScript library that allows developers to create various interactive plots.   
  • Is designed to be easy to use and includes different resources and tools for creating high-quality visualizations. 
  • Includes support for complex data structures like pandas DataFrames and offers various customization options for fonts, styles, and colors. 

seaborn

  • Is a Python data visualization library based on Matplotlib, offering a high-level interface to create attractive and informative statistical graphics.  
  • Offers various plotting functions for visualizing various data types like continuous data, data distribution, and categorial data.  
  • Its main strength is the ability to create visually appealing plots with minimal effort, and it supports customization of plot elements like axes, titles, legends, and labels.

moviepy

  • Is a Python library for video editing, concatenations, cutting, video composting, title insertions, creation of custom effects, and video processing. 
  • Has the ability to add audio to video clips easily and offers various filters and audio effects like changing pitch and speed, adding sound effects, and adjusting volume.  
  • Includes support for creating animations like moving text, images, and shapes and allows users to export their video clips to different file formats. 

termtosvg 

  • Is a Python library that allows users to record terminal sessions and save them as SVG animations.  
  • Produces clean-looking, lightweight animations or still frames embeddable on the project page.
  • Includes support for recording multiple terminal sessions, allowing users to control the size and speed of the resulting animation.   

altair

  • Is a declarative statistical visualization library that can help you spend more time understanding your data and its meaning. 
  • Offers a simple syntax for creating different visualizations, like line charts, histograms, scatterplots, and bar charts. 
  • Its declarative syntax lets users express visualizations as a series of high-level mappings between data and visual properties like color, size, and position.

PathPlanning

  • Is a Python library used for path and motion planning applications designed to be accessible to beginners and experts with a straightforward API. 
  • Offers various algorithms for computing collision-free paths for drones, mobile robots, and manipulators in 2D and 3D environments. 
  • Also offers tools for trajectory generation, motion control, and obstacle avoidance and supports simulation and visualization of robot motion. 

alive-progress 

  • Is a Python library for displaying spinners and progress bars in command-line applications designed to offer a customizable way of showing progress indicators for long-running processes or tasks. 
  • Supports pausing and resuming progress indicators, nested spinners, and progress bars.
  • Designed to be intuitive and simple with various default settings and a straightforward API for customizing the behavior and appearance of spinners and progress bars. 

asciimatics 

  • Is a package for helping people create full-screen text UIs on any platform and offers a single cross-platform Python class to do all the low-level console functions. 
  • Includes cursor positioning, mouse input, screen scraping, colored/styled text, detecting and handling console resizes, and keyboard input with Unicode support.
  • Is a Python library for creating text-based animations and user interfaces in the terminal. 

pygal 

  • Is a Python library for creating interactive Scalable Vector Graphics (SVG) graphs and charts. 
  • Offers various tools for generating customizable, high-quality charts and graphs for use in presentations, reports, and web applications.
  • Includes built-in support for data/time axis labeling, responsive design, and integration with web frameworks and interactive charts elements.   

GANimation 

  • Is a Python implementation of the GANimation research project, which offers various tools for generating animations from still images using Generative Adversarial Networks (GANs).  
  • Includes tools for augmenting and preprocessing input data, customizable GAN training parameters and architecture, and support for evaluating and visualizing GAN models.  
  • Offers various tools for fine-tuning GAN models and generating high-quality animations for various applications. 

deep-motion-editing 

  • Offers advanced and fundamental functions to work with 3D character animations in deep learning with Pytorch. 
  • Is a Python implementation of the research project of the same name, which offers tools for editing the motion of human characters in video sequences using deep learning methods.  
  • Can generate realistic, high-quality animations for various applications, and offers tools for fine-tuning the deep learning model and editing the generated motions to achieve the desired results.

geoplotlib 

  • Is a Python library for creating geographical maps and visualizations and offers an easy-to-use interface for creating maps with different data types, like polygons, heatmaps, lines, and points. 
  • Includes support for different tile providers and map projections, customizable styling options for data layers like size, transparency, and color. 
  • Designed for creating interactive maps and visualizations and is suitable for various applications like data analysis, presentation, and exploration. 

Linux-Fake-Background-Webcam 

  • Is a Python library that will allow users to replace their webcam background with a custom video or image on Linux systems.  
  • Works by creating a virtual webcam device that can be selected as the input source in video conferencing applications, allowing users to appear as if they are in various environments and locations.   
  • Includes the ability to control the position and size of the custom background video or image, and support for replacing the webcam background with a custom video or image.

celluloid 

  • Is a Python library that offers a simple interface for creating visualizations and animations in Matplotlib   
  • Designed to make it easy for users to create animations without having to deal with low-level details or write complex code.
  • Includes a simple interface for adding and updating data in the animation, the ability to save the animation as an MP4 or GIF video file, and support for customizing the animation style and appearance. 

FAQ

What are the best data visualizations for Python animation libraries?  

The Python Animation libraries create amazing visuals that can move and change. Here are the best data visualization libraries:  

  • Matplotlib  
  • Bokeh  
  • Plotly  
  • Pygal  
  • Plotnine  
  • Seaborn  
  • Holoviews  

  

Which animation library is most used by Python coders today?  

Matplotlib is a powerful 2D plotting library. It supports various visualizations. It is used in the scientific and data analysis communities. The 'FuncAnimation' class provides its animation capabilities. It allows coders to create dynamic and interactive visualizations. Its popularity is due to its advanced development, clear documentation, and reliability. Other higher-level visualization libraries use it as the backend.  

  

How can I create explanatory math videos using a Python animation library?  

You can use a Python Animation library to make math videos that show concepts visually. You should follow the below steps:  

  • Choose a Python animation library  
  • Plan your content  
  • Write the Python code  
  • Animate with time  
  • Narrate or annotate  
  • Export the video  
  • Edit and visualize  
  • Share your video  

  

What does the code for a basic Python animation look like?  

You can make a simple animation in Python with different libraries. Many people like using the matplotlib library. Matplotlib, a strong Python library, can make plots and do basic animations.   


Here's an example of a basic Python animation using matplotlib:   

import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Function to update the plot in each animation frame
def update(frame):
    # Clear the previous plot
    plt.cla()
    # Generate some data points for the animation
    x = np.linspace(0, 2*np.pi, 100)
    y = np.sin(x + 2*np.pi*frame/100)
    # Plot the data
    plt.plot(x, y)
    plt.xlabel('X')
    plt.ylabel('Y')
    plt.title('Basic Python Animation')
    plt.grid(True)

# Create a blank figure
fig, ax = plt.subplots()

# Create the animation with the update function, 100 frames, and 100ms delay between frames
animation = FuncAnimation(fig, update, frames=100, interval=100)

# If you want to save the animation as a video file, uncomment the following line:
# animation.save('basic_animation.mp4', writer='ffmpeg', fps=30)

# Display the animation
plt.show()

  

This code creates a simple animation that displays a sine wave. The update function makes new data points and updates the plot in each animation frame. The FuncAnimation class controls the animation. It calls the update function many times with different frame values.  

In this example, the animation has 100 frames with a delay of 100 milliseconds between frames.


To save the animation as a video file:

  1. Uncomment the animation.save(...) line in the code above.
  2. Make sure you have ffmpeg installed.
  3. Before running the code, ensure you have installed matplotlib in your Python setup. You can install it using pip install matplotlib.

 

Using an animation library, how can you make line charts with various colors in Python?  

You can use different libraries in Python to make line charts with colors and animations. I will teach you how to use Matplotlib's FuncAnimation to animate graphs.   

  

Here's a step-by-step guide:   

# Install the required libraries (if you haven't already):
#   pip install matplotlib

# Import the necessary modules
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

# Generate your data: create multiple datasets with different colors.
# For this example, let's consider two datasets, data1 and data2
x = np.linspace(0, 10, 100)
data1 = np.sin(x)
data2 = np.cos(x)

# Create a figure and an axis to plot the data
fig, ax = plt.subplots()

# Fix the axis limits up front so the lines stay visible with blit=True
ax.set_xlim(0, 10)
ax.set_ylim(-1.1, 1.1)

# Define the line objects for each dataset and set their properties
line1, = ax.plot([], [], color='red', label='Data 1')
line2, = ax.plot([], [], color='blue', label='Data 2')

# Define the initialization function for the animation
def init():
    line1.set_data([], [])
    line2.set_data([], [])
    return line1, line2

# Define the update function for the animation
def update(frame):
    line1.set_data(x[:frame], data1[:frame])
    line2.set_data(x[:frame], data2[:frame])
    return line1, line2

# Create the animation using FuncAnimation
frames = len(x)
animation = FuncAnimation(fig, update, frames=frames, init_func=init, blit=True)

# Display the animation or save it to a file (optional)
plt.legend()
plt.xlabel('X-axis')
plt.ylabel('Y-axis')
plt.title('Animated Line Chart with Different Colors')
plt.show()

   

This code makes a line chart that shows two datasets using different colors. You can customize the colors, data, and other properties per your requirements. To add more datasets, make new line objects and update their data in the update function.   

  

Can FuncAnimation be used to animate 3D objects and 2D shapes in Python?  

Yes, 'FuncAnimation' can animate both 3D objects and 2D shapes. 'FuncAnimation' is part of the Matplotlib library, which is primarily known for 2D plotting, but Matplotlib's 3D plotting toolkit (mplot3d) brings 3D objects and visualizations to life.

  

Are there any tips or tricks to improve creating animations with Python libraries?  

To improve your animations, follow these helpful tips and tricks for efficient engineering. Here are some valuable tips to help you with it:  

  • Plan your animation  
  • Keep it simple  
  • Use subplots  
  • Choose the right library  
  • Optimize data processing  
  • Minimize redrawing  
  • Control animation speed  
  • Add labels and annotations  
  • Use color thoughtfully  
  • Consider interactivity  
  • Test on a smaller subset  

  

Can I find open-source projects to practice coding with a Python animation library?  

Yes, there are many open-source projects available that use Python animation libraries. These resources help you practice animation libraries before starting your own project. Here are some places where you can find such projects:  

  • Matplotlib Examples Gallery  
  • GitHub Repositories  
  • Plotly Examples Gallery  
  • Kaggle Notebooks  
  • Bokeh Examples Gallery  
  • Data Science Blogs  
  • YouTube Tutorials 

Python dashboard libraries offer graphs, maps, charts, and tables. Dashboards can be made interactive by adding sliders, drop-down lists, and buttons.


They can update visualizations dynamically. These libraries often offer options for customizing the dashboard's layout, styles, and colors to match specific design requirements. Dashboards can be deployed locally or on the web using a cloud-based platform or a built-in server. They can integrate with different data sources like APIs, spreadsheets, and databases, making it easier to update data in real time. Different users can share and access dashboards through password-protected logins or public URLs. These libraries often come with extensive documentation and community support, making it easier to get started and troubleshoot issues.
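As a quick taste of what these libraries enable, here is a minimal sketch of an interactive dashboard using panel (one of the libraries listed below); the sine-wave example is hypothetical.

import numpy as np
import panel as pn
import matplotlib
matplotlib.use('agg')  # render off-screen; panel displays the figure
import matplotlib.pyplot as plt

pn.extension()

freq = pn.widgets.FloatSlider(name='Frequency', start=0.5, end=5.0, value=1.0)

def sine_plot(f):
    fig, ax = plt.subplots(figsize=(6, 3))
    x = np.linspace(0, 2 * np.pi, 200)
    ax.plot(x, np.sin(f * x))
    ax.set_title(f'sin({f:.1f}x)')
    plt.close(fig)  # prevent duplicate rendering in notebooks
    return fig

# The plot re-renders whenever the slider moves
dashboard = pn.Column(freq, pn.bind(sine_plot, freq))
dashboard.servable()  # run with: panel serve this_script.py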


Here is a list of the top 17 Python dashboard libraries, handpicked to help developers:

redash: 

  • Is an open source visualization and dashboard platform which will allow users to connect and visualize the data from different sources, like APIs, third-party services, and databases.  
  • Is a web-based platform that can be accessed through a browser and is built using JavaScript and Python.  
  • Offers a simple and intuitive interface to create and share data visualization, which can be customized to be suitable for individual requirements.   

plotly.py: 

  • Is a Python data visualization library that can be used for creating interactive, publication-quality graphs and charts.
  • Allows the creation of interactive visualizations with hover, zoom, and click events, making it easy to explore and analyze data in real time.  
  • Allows customization of each aspect of a chart, like fonts, titles, colors, and axis labels.   

flask_jsondash: 

  • Is a flask extension to create dashboards and visualizations in Python designed to be customizable, allowing developers to create their own dashboard layouts and widgets.  
  • Create custom widgets that interact with the data in real-time, like drop-down lists, buttons, and sliders.  
  • Is a good choice for developers creating simple, lightweight dashboards, and visualizations in Python, without learning a more complex framework.   

wave: 

  • Is a Python library to build and deploy interactive, web-based dashboards for data exploration and visualization.   
  • Integrates seamlessly with H2O.ai's machine learning platform, allowing users to visualize and explore machine learning models.  
  • Offers features for sharing and collaboration, like the ability to share dashboards with others and collaborate on projects. 

psdash: 

  • Is a Python-based web dashboard for real-time monitoring of process statistics, system resource utilization, and other system-related information.   
  • Can be used for identifying and troubleshooting issues, optimizing system performance, and performance bottlenecks.  
  • Offers real-time updates of process and system statistics with the ability to refresh data at a customizable interval.   

panel: 

  • Is a Python library to create interactive web dashboards and applications and offers a high-level API.  
  • Supports different backends like Matplotlib, Holoviews, Bokeh, and Plotly, allowing developers to use their preferred plotting library.  
  • Offers reactive widgets that can update in real-time based on user input, allowing interactive and dynamic applications to be created. 

stashboard: 

  • Offers a user-friendly interface to monitor system health, uptime, and other key metrics, which can be used to notify users of system issues in real-time.  
  • Can be used for monitoring APIs, web services, and other software systems with support for SOAP, REST, and other protocols.  
  • Offers custom metrics support, allowing users to monitor system performance using their analytics tools and metrics.   

pygraphistry: 

  • Is a Python-based library to visualize large and complex datasets in interactive and visually appealing ways.
  • Offers a graph-based visualization of data which is useful for visualizing connections and relationships between data points.  
  • Can be deployed to the cloud, allowing users to access their visualizations from anywhere.   

grafanalib: 

  • Is a Python library for programmatically creating dashboards in Grafana, an open source platform for monitoring and analytics.   
  • Allows developers to create and manage dashboards using Python code which can be version-controlled and automated.  
  • Supports macros and templates, allowing developers to create reusable components for their dashboards (see the sketch below).
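A minimal sketch using grafanalib's classic rows/Graph layout (the panel details are illustrative, not a complete production dashboard):

from grafanalib.core import Dashboard, Graph, Row, Target

dashboard = Dashboard(
    title="Service overview",
    rows=[
        Row(panels=[
            Graph(
                title="Requests per second",
                targets=[Target(expr='rate(http_requests_total[5m])')],
            ),
        ]),
    ],
).auto_panel_ids()

# Saved as e.g. service.dashboard.py, this can be rendered to Grafana JSON
# with the generate-dashboard tool that ships with grafanalib.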

flow-dashboard: 

  • Is designed to be used with the Flow framework, a web-based platform to build and deploy machine learning models.  
  • Can display data from various sources like APIs, streaming services, and databases.  
  • Offers built-in user management features allowing administrators to control access to data and dashboards. 

horizon: 

  • Is a Python library to build real-time monitoring systems and scalable dashboards.  
  • Offers real-time data processing capabilities, allowing users to filter, collect, and process data in real-time.   
  • Is designed to be highly scalable with support for distributed processing and horizontal scaling.   

graph-explorer: 

  • Is a Python-based library for building a dashboard that displays data from different sources, such as Prometheus, Elasticsearch, and Graphite.  
  • Allows users to create customizable dashboards with support for various data sources and visualizations.  
  • Offers advanced querying capabilities, allowing users to filter and search data.   

django-controlcenter: 

  • Is a Python-based library for building reusable and customizable dashboards in Django-based web applications.  
  • Allows developers to create dashboards that display data from APIs, Django models, and other sources.  
  • Offers integration with Django models, allowing developers to display data from their database in their dashboards (a rough sketch follows).
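As a rough sketch of the library's class-based style (the app, model, and field names below are hypothetical; consult the project's README for the exact settings wiring):

# dashboards.py in a hypothetical Django project
from controlcenter import Dashboard, widgets

from myapp.models import Order  # hypothetical model


class RecentOrders(widgets.ItemList):
    # A widget that lists rows straight from a Django model.
    model = Order
    list_display = ('pk', 'created_at', 'total')


class ShopDashboard(Dashboard):
    widgets = (RecentOrders,)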

changes: 

  • Is a Python library that offers an easy-to-use interface to monitor file system events like creation, editing, and deletion.  
  • Allows developers to create applications that monitor directories and respond to real-time changes.  
  • Ships with various integrations and features for real-time file system monitoring, making it suited to a wide range of use cases. 

bowtie: 

  • Is a Python library for building interactive dashboards in pure Python (not the bioinformatics sequence aligner of the same name).   
  • Allows developers to easily create dashboards that display data from different sources like SQL databases, APIs, and CSV files.  
  • Offers support for interactive visualizations, like graphs, maps, and charts.   

socialsentiment: 

  • Is designed for sentiment analysis of social media data, such as comments or tweets on online platforms. 
  • Uses machine learning algorithms to classify text as positive, negative, or neutral based on the sentiment expressed. 
  • Offers a pre-trained sentiment analysis model that can be further trained on a larger corpus of social media data.  

dashboard-api-python: 

  • Is a Python library for the Cisco Meraki Dashboard API, which allows developers to access and retrieve Meraki dashboard data programmatically using Python. 
  • Is designed to make it easier for developers to query and manipulate dashboard data without knowing the underlying API details or how to construct API calls by hand. 
  • Includes functions for querying data, managing data sources, and creating and updating dashboards (a brief sketch follows).
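For instance, a brief sketch with the meraki package (assuming the Cisco Meraki library; the API key is a placeholder):

import meraki

# Placeholder API key; real keys are generated in the Meraki dashboard.
dashboard = meraki.DashboardAPI(api_key="YOUR_API_KEY")

# List the organizations this key can access.
for org in dashboard.organizations.getOrganizations():
    print(org["id"], org["name"])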

Trending Discussions on Python

Python/Docker ImportError: cannot import name 'json' from itsdangerous

Why is it faster to compare strings that match than strings that do not?

Why is `np.sum(range(N))` very slow?

Error while downloading the requirements using pip install (setup command: use_2to3 is invalid.)

Repeatedly removing the maximum average subarray

WARNING: Running pip as the 'root' user

How do I calculate square root in Python?

pip-compile raising AssertionError on its logging handler

ImportError: cannot import name 'url' from 'django.conf.urls' after upgrading to Django 4.0

How did print(*a, a.pop(0)) change?

QUESTION

Python/Docker ImportError: cannot import name 'json' from itsdangerous

Asked 2022-Mar-31 at 12:49

I am trying to get a Flask and Docker application to work, but when I run it with docker-compose up in my Visual Studio terminal it fails with ImportError: cannot import name 'json' from itsdangerous. I have looked for possible solutions, but there are not many available right now, here or anywhere else. The only two I could find suggest changing the installed versions of MarkupSafe and itsdangerous: https://serverfault.com/questions/1094062/from-itsdangerous-import-json-as-json-importerror-cannot-import-name-json-fr and https://github.com/aws/aws-sam-cli/issues/3661. I have also tried creating a virtual environment named veganetworkscriptenv to install the packages in, but that failed as well. I am currently using Flask 2.0.0 and Docker 5.0.0, and the error occurs on line eight of vegamain.py.

Here is the full ImportError that I get when I try and run the program:

veganetworkscript-backend-1  | Traceback (most recent call last):
veganetworkscript-backend-1  |   File "/app/vegamain.py", line 8, in <module>
veganetworkscript-backend-1  |     from flask import Flask
veganetworkscript-backend-1  |   File "/usr/local/lib/python3.9/site-packages/flask/__init__.py", line 19, in <module>
veganetworkscript-backend-1  |     from . import json
veganetworkscript-backend-1  |   File "/usr/local/lib/python3.9/site-packages/flask/json/__init__.py", line 15, in <module>
veganetworkscript-backend-1  |     from itsdangerous import json as _json
veganetworkscript-backend-1  | ImportError: cannot import name 'json' from 'itsdangerous' (/usr/local/lib/python3.9/site-packages/itsdangerous/__init__.py)
veganetworkscript-backend-1 exited with code 1

Here are my requirements.txt, vegamain.py, Dockerfile, and docker-compose.yml files:

requirements.txt:

Flask==2.0.0
Flask-SQLAlchemy==2.4.4
SQLAlchemy==1.3.20
Flask-Migrate==2.5.3
Flask-Script==2.0.6
Flask-Cors==3.0.9
requests==2.25.0
mysqlclient==2.0.1
pika==1.1.0
wolframalpha==4.3.0

vegamain.py:

# Veganetwork (C) TetraSystemSolutions 2022
# all rights are reserved.
#
# Author: Trevor R. Blanchard Feb-19-2022-Jul-30-2022
#

# get our imports in order first
from flask import Flask # <-- error occurs here!!!

# start the application through flask.
app = Flask(__name__)

# if set to true will return only a "Hello World" string.
Debug = True

# start a route to the index part of the app in flask.
@app.route('/')
def index():
    if (Debug == True):
        return 'Hello World!'
    else:
        pass

# start the flask app here --->
if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')

Dockerfile:

FROM python:3.9
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
COPY . /app

docker-compose.yml:

version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python vegamain.py'
    ports:
      - 8004:5000
    volumes:
      - .:/app
    depends_on:
      - db

#  queue:
#    build:
#      context: .
#      dockerfile: Dockerfile
#    command: 'python -u consumer.py'
#    depends_on:
#      - db

  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33069:3306

How exactly can I fix this? Thank you!

ANSWER

Answered 2022-Feb-20 at 12:31

I was facing the same issue while running Docker containers with Flask.

I downgraded Flask to 1.1.4 and MarkupSafe to 2.0.1, which solved my issue.

Check this for reference.
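In requirements.txt terms, that downgrade is just a matter of pinning (a sketch: replace the existing Flask==2.0.0 pin and keep the rest of the file unchanged):

Flask==1.1.4
markupsafe==2.0.1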

Source https://stackoverflow.com/questions/71189819

QUESTION

Why is it faster to compare strings that match than strings that do not?

Asked 2022-Mar-30 at 11:58

Here are two measurements:

timeit.timeit('"toto"=="1234"', number=100000000)
1.8320042459999968
timeit.timeit('"toto"=="toto"', number=100000000)
1.4517491540000265

As you can see, comparing two strings that match is faster than comparing two strings of the same length that do not match. This is quite disturbing: during a string comparison, I believed that Python tested strings character by character, so "toto"=="toto" should take longer to test than "toto"=="1234", as it requires four tests against one for the non-matching comparison. Maybe the comparison is hash-based, but in that case the timings should be the same for both comparisons.

Why?

ANSWER

Answered 2022-Mar-30 at 11:57

Combining my comment and the comment by @khelwood:

TL;DR:
When analysing the bytecode for the two comparisons, it turns out that the two 'time' literals are bound to the same object. An up-front identity check (at the C level) is therefore the reason for the increased comparison speed.

The reason for the shared object is that, as an implementation detail, CPython interns strings which contain only 'name characters' (letters, digits, and underscores). This is what enables the object identity check.
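As a quick, hedged illustration of interning (exact behaviour is a CPython implementation detail and can vary by version and execution context):

import sys

# Identifier-like literals are interned, so equal literals share one object.
a = "time"
b = "time"
print(a is b)          # True

# Build equal strings at runtime so the compiler cannot merge them.
c = "".join(["time", "!"])
d = "".join(["time", "!"])
print(c == d)          # True  - equal contents
print(c is d)          # False - distinct objects, no identity shortcut
print(sys.intern(c) is sys.intern(d))  # True after explicit interning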


Bytecode:

import dis

In [24]: dis.dis("'time'=='time'")
  1           0 LOAD_CONST               0 ('time')  # <-- same object (0)
              2 LOAD_CONST               0 ('time')  # <-- same object (0)
              4 COMPARE_OP               2 (==)
              6 RETURN_VALUE

In [25]: dis.dis("'time'=='1234'")
  1           0 LOAD_CONST               0 ('time')  # <-- different object (0)
              2 LOAD_CONST               1 ('1234')  # <-- different object (1)
              4 COMPARE_OP               2 (==)
              6 RETURN_VALUE

Assignment Timing:

The 'speed-up' can also be seen when using assignment for the timing tests. Assigning (and comparing) two variables bound to the same string is faster than assigning (and comparing) two variables bound to different strings, further supporting the hypothesis that the underlying logic performs an object-identity comparison. This is confirmed in the next section.

In [26]: timeit.timeit("x='time'; y='time'; x==y", number=1000000)
Out[26]: 0.0745926329982467

In [27]: timeit.timeit("x='time'; y='1234'; x==y", number=1000000)
Out[27]: 0.10328884399496019

Python source code:

As helpfully provided by @mkrieger1 and @Masklinn in their comments, the source code for unicodeobject.c performs a pointer comparison first and, if the pointers are equal, returns immediately.

int
_PyUnicode_Equal(PyObject *str1, PyObject *str2)
{
    assert(PyUnicode_CheckExact(str1));
    assert(PyUnicode_CheckExact(str2));
    if (str1 == str2) {                  // <-- Here
        return 1;
    }
    if (PyUnicode_READY(str1) || PyUnicode_READY(str2)) {
        return -1;
    }
    return unicode_compare_eq(str1, str2);
}

Appendix:

  • Reference answer nicely illustrating how to read the disassembled bytecode output. Courtesy of @Delgan
  • Reference answer which nicely describes CPython's string interning. Courtesy of @ShadowRanger

Source https://stackoverflow.com/questions/71644405

QUESTION

Why is `np.sum(range(N))` very slow?

Asked 2022-Mar-29 at 14:31

I saw a video about speed of loops in python, where it was explained that doing sum(range(N)) is much faster than manually looping through range and adding the variables together, since the former runs in C due to built-in functions being used, while in the latter the summation is done in (slow) python. I was curious what happens when adding numpy to the mix. As I expected np.sum(np.arange(N)) is the fastest, but sum(np.arange(N)) and np.sum(range(N)) are even slower than doing the naive for loop.

Why is this?

Here's the script I used to test, with some comments about the supposed causes of the slowdowns where I know them (taken mostly from the video), and the results I got on my machine (Python 3.10.0, NumPy 1.21.2):

updated script:

import numpy as np
from timeit import timeit

N = 10_000_000
repetition = 10

def sum0(N = N):
    s = 0
    i = 0
    while i < N: # condition is checked in python
        s += i
        i += 1 # both additions are done in python
    return s

def sum1(N = N):
    s = 0
    for i in range(N): # increment in C
        s += i # addition in python
    return s

def sum2(N = N):
    return sum(range(N)) # everything in C

def sum3(N = N):
    return sum(list(range(N)))

def sum4(N = N):
    return np.sum(range(N)) # very slow np.array conversion

def sum5(N = N):
    # much faster np.array conversion
    return np.sum(np.fromiter(range(N),dtype = int))

def sum5v2_(N = N):
    # much faster np.array conversion
    return np.sum(np.fromiter(range(N),dtype = np.int_))

def sum6(N = N):
    # possibly slow conversion to Py_long from np.int
    return sum(np.arange(N))

def sum7(N = N):
    # list returns a list of np.int-s
    return sum(list(np.arange(N)))

def sum7v2(N = N):
    # tolist conversion to python int seems faster than the implicit conversion
    # in sum(list()) (tolist returns a list of python int-s)
    return sum(np.arange(N).tolist())

def sum8(N = N):
    return np.sum(np.arange(N)) # everything in numpy (fortran libblas?)

def sum9(N = N):
    return np.arange(N).sum() # remove dispatch overhead

def array_basic(N = N):
    return np.array(range(N))

def array_dtype(N = N):
    return np.array(range(N),dtype = np.int_)

def array_iter(N = N):
    # np.sum's source code mentions to use fromiter to convert from generators
    return np.fromiter(range(N),dtype = np.int_)

print(f"while loop:         {timeit(sum0, number = repetition)}")
print(f"for loop:           {timeit(sum1, number = repetition)}")
print(f"sum_range:          {timeit(sum2, number = repetition)}")
print(f"sum_rangelist:      {timeit(sum3, number = repetition)}")
print(f"npsum_range:        {timeit(sum4, number = repetition)}")
print(f"npsum_iterrange:    {timeit(sum5, number = repetition)}")
print(f"npsum_iterrangev2:  {timeit(sum5v2_, number = repetition)}")
print(f"sum_arange:         {timeit(sum6, number = repetition)}")
print(f"sum_list_arange:    {timeit(sum7, number = repetition)}")
print(f"sum_arange_tolist:  {timeit(sum7v2, number = repetition)}")
print(f"npsum_arange:       {timeit(sum8, number = repetition)}")
print(f"nparangenpsum:      {timeit(sum9, number = repetition)}")
print(f"array_basic:        {timeit(array_basic, number = repetition)}")
print(f"array_dtype:        {timeit(array_dtype, number = repetition)}")
print(f"array_iter:         {timeit(array_iter,  number = repetition)}")

print(f"npsumarangeREP:     {timeit(lambda : sum8(N/1000), number = 100000*repetition)}")
print(f"npsumarangeREP:     {timeit(lambda : sum9(N/1000), number = 100000*repetition)}")

# Example output:
#
# while loop:         11.493371912998555
# for loop:           7.385945574002108
# sum_range:          2.4605720699983067
# sum_rangelist:      4.509678105998319
# npsum_range:        11.85120212900074
# npsum_iterrange:    4.464334709002287
# npsum_iterrangev2:  4.498494338993623
# sum_arange:         9.537815956995473
# sum_list_arange:    13.290120724996086
# sum_arange_tolist:  5.231948580003518
# npsum_arange:       0.241889145996538
# nparangenpsum:      0.21876695199898677
# array_basic:        11.736577274998126
# array_dtype:        8.71628468400013
# array_iter:         4.303306431000237
# npsumarangeREP:     21.240833958996518
# npsumarangeREP:     16.690092379001726

ANSWER

Answered 2021-Oct-16 at 17:42

From the CPython source code for sum: sum initially attempts a fast path that assumes all inputs are the same type. If that assumption fails, it falls back to the more general iterating routine:

/* Fast addition by keeping temporary sums in C instead of new Python objects.
   Assumes all inputs are the same type.  If the assumption fails, default
   to the more general routine.
*/

I'm not entirely certain what is happening under the hood, but it is likely the repeated creation/conversion of C types to Python objects that is causing these slow-downs. It's worth noting that both sum and range are implemented in C.


This next bit is not really an answer to the question, but I wondered if we could speed up sum for python ranges as range is quite a smart object.

To do this I've used functools.singledispatch to override the built-in sum function specifically for the range type; then implemented a small function to calculate the sum of an arithmetic progression.

from functools import singledispatch

def sum_range(range_, /, start=0):
    """Overloaded `sum` for range, compute arithmetic sum"""
    n = len(range_)
    if not n:
        return start
    return int(start + (n * (range_[0] + range_[-1]) / 2))

sum = singledispatch(sum)
sum.register(range, sum_range)

def test():
    """
    >>> sum(range(0, 100))
    4950
    >>> sum(range(0, 10, 2))
    20
    >>> sum(range(0, 9, 2))
    20
    >>> sum(range(0, -10, -1))
    -45
    >>> sum(range(-10, 10))
    -10
    >>> sum(range(-1, -100, -2))
    -2500
    >>> sum(range(0, 10, 100))
    0
    >>> sum(range(0, 0))
    0
    >>> sum(range(0, 100), 50)
    5000
    >>> sum(range(0, 0), 10)
    10
    """

if __name__ == "__main__":
    import doctest
    doctest.testmod()

I'm not sure if this is complete, but it's definitely faster than looping.

Source https://stackoverflow.com/questions/69584027

QUESTION

Error while downloading the requirements using pip install (setup command: use_2to3 is invalid.)

Asked 2022-Mar-05 at 07:13

Versions: pip 21.2.4, Python 3.6.

The command:

pip install -r requirements.txt

The content of my requirements.txt:

mongoengine==0.19.1
numpy==1.16.2
pylint
pandas==1.1.5
fawkes

The command fails with this error:

ERROR: Command errored out with exit status 1:
     command: /Users/*/Desktop/ml/*/venv/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/setup.py'"'"'; __file__='"'"'/private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-pip-egg-info-97994d6e
         cwd: /private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/
    Complete output (1 lines):
    error in mongoengine setup command: use_2to3 is invalid.
    ----------------------------------------
WARNING: Discarding https://*/pypi/packages/mongoengine-0.19.1.tar.gz#md5=68e613009f6466239158821a102ac084 (from https://*/pypi/simple/mongoengine/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement mongoengine==0.19.1 (from versions: 0.15.0, 0.19.1)
ERROR: No matching distribution found for mongoengine==0.19.1

ANSWER

Answered 2021-Nov-19 at 13:30

It looks like setuptools>=58 breaks support for use_2to3:

setuptools changelog for v58

So you should pin setuptools to a version below 58 (setuptools<58) or avoid using packages that rely on use_2to3 in their setup parameters (one way to apply the pin is sketched below).
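For instance, a minimal sketch of the workaround inside the affected virtual environment (adjust to your setup):

pip install --upgrade "setuptools<58"
pip install -r requirements.txt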

I was having the same problem, pip==19.3.1

Source https://stackoverflow.com/questions/69100275

QUESTION

Repeatedly removing the maximum average subarray

Asked 2022-Feb-28 at 18:19

I have an array of positive integers. For example:

[1, 7, 8, 4, 2, 1, 4]

A "reduction operation" finds the array prefix with the highest average, and deletes it. Here, an array prefix means a contiguous subarray whose left end is the start of the array, such as [1] or [1, 7] or [1, 7, 8] above. Ties are broken by taking the longer prefix.

Original array:  [  1,   7,   8,   4,   2,   1,   4]

Prefix averages: [1.0, 4.0, 5.3, 5.0, 4.4, 3.8, 3.9]

-> Delete [1, 7, 8], with maximum average 5.3
-> New array -> [4, 2, 1, 4]

I will repeat the reduction operation until the array is empty:

[1, 7, 8, 4, 2, 1, 4]
^       ^
[4, 2, 1, 4]
^ ^
[2, 1, 4]
^       ^
[]

Now, actually performing these array modifications isn't necessary; I'm only looking for the list of lengths of prefixes that would be deleted by this process, for example, [3, 1, 3] above.

What is an efficient algorithm for computing these prefix lengths?


The naive approach is to recompute all sums and averages from scratch in every iteration for an O(n^2) algorithm-- I've attached Python code for this below. I'm looking for any improvement on this approach-- most preferably, any solution below O(n^2), but an algorithm with the same complexity but better constant factors would also be helpful.

Here are a few of the things I've tried (without success):

  1. Dynamically maintaining prefix sums, for example with a Binary Indexed Tree. While I can easily update prefix sums or find a maximum prefix sum in O(log n) time, I haven't found any data structure which can update the average, as the denominator in the average is changing.
  2. Reusing the previous 'rankings' of prefix averages-- these rankings can change, e.g. in some array, the prefix ending at index 5 may have a larger average than the prefix ending at index 6, but after removing the first 3 elements, now the prefix ending at index 2 may have a smaller average than the one ending at 3.
  3. Looking for patterns in where prefixes end; for example, the rightmost element of any max average prefix is always a local maximum in the array, but it's not clear how much this helps.

This is a working Python implementation of the naive, quadratic method:

import math
from fractions import Fraction
from typing import List, Tuple

def find_array_reductions(nums: List[int]) -> List[int]:
    """Return list of lengths of max average prefix reductions."""

    def max_prefix_avg(arr: List[int]) -> Tuple[float, int]:
        """Return value and length of max average prefix in arr."""
        if len(arr) == 0:
            return (-math.inf, 0)

        best_length = 1
        best_average = Fraction(0, 1)
        running_sum = 0

        for i, x in enumerate(arr, 1):
            running_sum += x
            new_average = Fraction(running_sum, i)
            if new_average >= best_average:
                best_average = new_average
                best_length = i

        return (float(best_average), best_length)

    removed_lengths = []
    total_removed = 0

    while total_removed < len(nums):
        _, new_removal = max_prefix_avg(nums[total_removed:])
        removed_lengths.append(new_removal)
        total_removed += new_removal

    return removed_lengths

Edit: The originally published code had a rare error with large inputs from using Python's math.isclose() with default parameters for floating point comparison, rather than proper fraction comparison. This has been fixed in the current code. An example of the error can be found at this Try it online link, along with a foreword explaining exactly what causes this bug, if you're curious.

ANSWER

Answered 2022-Feb-27 at 22:44

This problem has a fun O(n) solution.

If you draw a graph of cumulative sum vs index, then:

The average value in the subarray between any two indexes is the slope of the line between those points on the graph.

The first highest-average-prefix will end at the point that makes the highest angle from 0. The next highest-average-prefix must then have a smaller average, and it will end at the point that makes the highest angle from the first ending. Continuing to the end of the array, we find that...

These segments of highest average are exactly the segments in the upper convex hull of the cumulative sum graph.

Find these segments using the monotone chain algorithm. Since the points are already sorted, it takes O(n) time.

# Lengths of the segments in the upper convex hull
# of the cumulative sum graph
def upperSumHullLengths(arr):
    if len(arr) < 2:
        if len(arr) < 1:
            return []
        else:
            return [1]

    hull = [(0, 0),(1, arr[0])]
    for x in range(2, len(arr)+1):
        # this has x coordinate x-1
        prevPoint = hull[len(hull) - 1]
        # next point in cumulative sum
        point = (x, prevPoint[1] + arr[x-1])
        # remove points not on the convex hull
        while len(hull) >= 2:
            p0 = hull[len(hull)-2]
            dx0 = prevPoint[0] - p0[0]
            dy0 = prevPoint[1] - p0[1]
            dx1 = x - prevPoint[0]
            dy1 = point[1] - prevPoint[1]
            if dy1*dx0 < dy0*dx1:
                break
            hull.pop()
            prevPoint = p0
        hull.append(point)

    return [hull[i+1][0] - hull[i][0] for i in range(0, len(hull)-1)]


print(upperSumHullLengths([  1,   7,   8,   4,   2,   1,   4]))

prints:

[3, 1, 3]

Source https://stackoverflow.com/questions/71287550

QUESTION

WARNING: Running pip as the 'root' user

Asked 2022-Feb-24 at 01:59

I am making a simple image of my Python Django app in Docker, but at the end of building the container it throws the following warning (I am building on Ubuntu 20.04):

WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead

Why does it throw this warning if I am installing Python requirements inside my image? I am building my image using:

sudo docker build -t my_app:1 .

Should I be worried about the warning that pip throws, since I know it can break my system?

Here is my Dockerfile:

FROM python:3.8-slim-buster

WORKDIR /app

COPY requirements.txt requirements.txt

RUN pip install -r requirements.txt

COPY . .

CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

ANSWER

Answered 2021-Aug-29 at 08:12

The way your container is built doesn't add a user, so everything is done as root.

You could create a user and install to that user's home directory by doing something like this:

FROM python:3.8.3-alpine

RUN pip install --upgrade pip

RUN adduser -D myuser
USER myuser
WORKDIR /home/myuser

COPY --chown=myuser:myuser requirements.txt requirements.txt
RUN pip install --user -r requirements.txt

ENV PATH="/home/myuser/.local/bin:${PATH}"

COPY --chown=myuser:myuser . .

CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]

Source https://stackoverflow.com/questions/68673221

QUESTION

How do I calculate square root in Python?

Asked 2022-Feb-17 at 03:40

I need to calculate the square root of some numbers, for example √9 = 3 and √2 = 1.4142. How can I do it in Python?

The inputs will probably be all positive integers, and relatively small (say less than a billion), but just in case they're not, is there anything that might break?


Related

Note: This is an attempt at a canonical question after a discussion on Meta about an existing question with the same title.

ANSWER

Answered 2022-Feb-04 at 19:44
Option 1: math.sqrt()

The math module from the standard library has a sqrt function to calculate the square root of a number. It takes any type that can be converted to float (which includes int) as an argument and returns a float.

>>> import math
>>> math.sqrt(9)
3.0
Option 2: Fractional exponent

The power operator (**) or the built-in pow() function can also be used to calculate a square root. Mathematically speaking, the square root of a equals a to the power of 1/2.

The power operator requires numeric types and matches the conversion rules for binary arithmetic operators, so in this case it will return either a float or a complex number.

>>> 9 ** (1/2)
3.0
>>> 9 ** .5  # Same thing
3.0
>>> 2 ** .5
1.4142135623730951

(Note: in Python 2, 1/2 is truncated to 0, so you have to force floating point arithmetic with 1.0/2 or similar. See Why does Python give the "wrong" answer for square root?)

This method can be generalized to nth root, though fractions that can't be exactly represented as a float (like 1/3 or any denominator that's not a power of 2) may cause some inaccuracy:

>>> 8 ** (1/3)
2.0
>>> 125 ** (1/3)
4.999999999999999
Edge cases

Negative and complex

Exponentiation works with negative numbers and complex numbers, though the results have some slight inaccuracy:

>>> (-25) ** .5  # Should be 5j
(3.061616997868383e-16+5j)
>>> 8j ** .5  # Should be 2+2j
(2.0000000000000004+2j)

Note the parentheses on -25! Otherwise it's parsed as -(25**.5), because exponentiation binds more tightly than unary negation.

Meanwhile, math is only built for floats, so for x<0, math.sqrt() will raise ValueError: math domain error and for complex x, it'll raise TypeError: can't convert complex to float. Instead, you can use cmath.sqrt(), which is more accurate than exponentiation (and will likely be faster too):

>>> import cmath
>>> cmath.sqrt(-25)
5j
>>> cmath.sqrt(8j)
(2+2j)
Precision

Both options involve an implicit conversion to float, so floating point precision is a factor. For example:

>>> n = 10**30
>>> square = n**2
>>> x = square**.5
>>> x == n
False
>>> x - n  # how far off are they?
0.0
>>> int(x) - n  # how far off is the float from the int?
19884624838656

Very large numbers might not even fit in a float and you'll get OverflowError: int too large to convert to float. See Python sqrt limit for very large numbers?

Other types

Let's look at Decimal for example:

Exponentiation fails unless the exponent is also Decimal:

>>> import decimal
>>> decimal.Decimal('9') ** .5
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for ** or pow(): 'decimal.Decimal' and 'float'
>>> decimal.Decimal('9') ** decimal.Decimal('.5')
Decimal('3.000000000000000000000000000')

Meanwhile, math and cmath will silently convert their arguments to float and complex respectively, which could mean loss of precision.

decimal also has its own .sqrt(). See also calculating n-th roots using Python 3's decimal module

Source https://stackoverflow.com/questions/70793490

QUESTION

pip-compile raising AssertionError on its logging handler

Asked 2022-Feb-13 at 12:37

I have a Dockerfile that currently only installs pip-tools:

FROM python:3.9

RUN pip install --upgrade pip && \
    pip install pip-tools

COPY ./ /root/project

WORKDIR /root/project

ENTRYPOINT ["tail", "-f", "/dev/null"]

I build and open a shell in the container using the following commands:

docker build -t brunoapi_image .
docker run --rm -ti --name brunoapi_container --entrypoint bash brunoapi_image

Then, when I try to run pip-compile inside the container I get this very weird error (full traceback):

root@727f1f38f095:~/project# pip-compile
Traceback (most recent call last):
  File "/usr/local/bin/pip-compile", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/piptools/scripts/compile.py", line 342, in cli
    repository = PyPIRepository(pip_args, cache_dir=cache_dir)
  File "/usr/local/lib/python3.9/site-packages/piptools/repositories/pypi.py", line 106, in __init__
    self._setup_logging()
  File "/usr/local/lib/python3.9/site-packages/piptools/repositories/pypi.py", line 455, in _setup_logging
    assert isinstance(handler, logging.StreamHandler)
AssertionError

I have no clue what's going on and I've never seen this error before. Can anyone shed some light into this?

Running on macOS Monterey

ANSWER

Answered 2022-Feb-05 at 16:30

It is a bug; you can work around it by downgrading pip:

pip install "pip<22"

https://github.com/jazzband/pip-tools/issues/1558

Source https://stackoverflow.com/questions/70946286

QUESTION

ImportError: cannot import name 'url' from 'django.conf.urls' after upgrading to Django 4.0

Asked 2022-Feb-10 at 21:14

After upgrading to Django 4.0, I get the following error when running python manage.py runserver

  ...
  File "/path/to/myproject/myproject/urls.py", line 16, in <module>
    from django.conf.urls import url
ImportError: cannot import name 'url' from 'django.conf.urls' (/path/to/my/venv/lib/python3.9/site-packages/django/conf/urls/__init__.py)

My urls.py is as follows:

from django.conf.urls import url
from django.urls import include

from myapp.views import home

urlpatterns = [
    url(r'^$', home, name="home"),
    url(r'^myapp/', include('myapp.urls')),
]

ANSWER

Answered 2022-Jan-10 at 21:38

django.conf.urls.url() was deprecated in Django 3.1 and removed in Django 4.0.

The easiest fix is to replace url() with re_path(). re_path uses regexes like url, so you only have to update the import and replace url with re_path.

from django.urls import include, re_path

from myapp.views import home

urlpatterns = [
    re_path(r'^$', home, name='home'),
    re_path(r'^myapp/', include('myapp.urls')),
]

Alternatively, you could switch to path(). path() does not use regexes, so you'll have to rewrite your URL patterns if you switch:

from django.urls import include, path

from myapp.views import home

urlpatterns = [
    path('', home, name='home'),
    path('myapp/', include('myapp.urls')),
]

If you have a large project with many URL patterns to update, you may find the django-upgrade library useful for updating your urls.py files automatically (for example, django-upgrade --target-version 4.0 myproject/urls.py).

Source https://stackoverflow.com/questions/70319606

QUESTION

How did print(*a, a.pop(0)) change?

Asked 2022-Feb-04 at 21:21

This code:

a = [1, 2, 3]
print(*a, a.pop(0))

Python 3.8 prints 2 3 1 (does the pop before unpacking).
Python 3.9 prints 1 2 3 1 (does the pop after unpacking).

What caused the change? I didn't find it in the changelog.

Edit: Not just in function calls but also for example in a list display:

a = [1, 2, 3]
b = [*a, a.pop(0)]
print(b)

Prints [2, 3, 1] vs [1, 2, 3, 1]. And Expression lists says "The expressions are evaluated from left to right" (that link is to the Python 3.8 documentation), so I'd expect the unpacking expression to be evaluated first.

ANSWER

Answered 2022-Feb-04 at 21:21

I suspect this may have been an accident, though I prefer the new behavior.

The new behavior is a consequence of a change to how the bytecode for * arguments works. The change is in the changelog under Python 3.9.0 alpha 3:

bpo-39320: Replace four complex bytecodes for building sequences with three simpler ones.

The following four bytecodes have been removed:

  • BUILD_LIST_UNPACK
  • BUILD_TUPLE_UNPACK
  • BUILD_SET_UNPACK
  • BUILD_TUPLE_UNPACK_WITH_CALL

The following three bytecodes have been added:

  • LIST_TO_TUPLE
  • LIST_EXTEND
  • SET_UPDATE

On Python 3.8, the bytecode for f(*a, a.pop()) looks like this:

  1           0 LOAD_NAME                0 (f)
              2 LOAD_NAME                1 (a)
              4 LOAD_NAME                1 (a)
              6 LOAD_METHOD              2 (pop)
              8 CALL_METHOD              0
             10 BUILD_TUPLE              1
             12 BUILD_TUPLE_UNPACK_WITH_CALL     2
             14 CALL_FUNCTION_EX         0
             16 RETURN_VALUE

while on 3.9, it looks like this:

  1           0 LOAD_NAME                0 (f)
              2 BUILD_LIST               0
              4 LOAD_NAME                1 (a)
              6 LIST_EXTEND              1
              8 LOAD_NAME                1 (a)
             10 LOAD_METHOD              2 (pop)
             12 CALL_METHOD              0
             14 LIST_APPEND              1
             16 LIST_TO_TUPLE
             18 CALL_FUNCTION_EX         0
             20 RETURN_VALUE

In the old bytecode, the code pushes a and (a.pop(),) onto the stack, then unpacks those two iterables into a tuple. In the new bytecode, the code pushes a list onto the stack, then does l.extend(a) and l.append(a.pop()), then calls tuple(l).
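
As a rough pure-Python paraphrase of the two strategies (the names ref, tail, and l are hypothetical, chosen only to illustrate the ordering):

# Python 3.8 strategy: evaluate every argument first, unpack at the end
a = [1, 2, 3]
ref = a                      # LOAD_NAME a pushes only a reference
tail = (a.pop(0),)           # the pop runs before anything is unpacked
print(tuple(ref) + tail)     # (2, 3, 1)

# Python 3.9 strategy: extend with a immediately, append the pop later
a = [1, 2, 3]
l = []
l.extend(a)                  # LIST_EXTEND: a is unpacked here, before the pop
l.append(a.pop(0))           # LIST_APPEND
print(tuple(l))              # (1, 2, 3, 1)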

This change has the effect of shifting the unpacking of a to before the pop call, but this doesn't seem to have been deliberate. Looking at bpo-39320, the intent was to simplify the bytecode instructions, not to change the behavior, and the bpo thread has no discussion of behavior changes.

Source https://stackoverflow.com/questions/70404485

Community Discussions include sources from the Stack Exchange Network.
