# Python | All Algorithms implemented in Python | Learning library


- Decrypt a caesar ciphertext using the provided ciphertext
- Convert word to upper case
- Join a separated list of strings
- Convert word to lowercase

- Merge a collection of items into a sorted list
- Append a new node to the list
- Returns True if the queue is empty

- Return True if the given number is a prime number
- Rabin-Miller probabilistic primality test

- Removes the minimum value from the heap
- Perform similarity search
- Calculate the fulladder
- Convert an image using scipy
- Implementation of prism algorithm
- Train the model
- Test for ness
- Pollard's rho algorithm
- Encrypts the text using the ciphertext
- Convert date_input to zeller format
- Generate a random population
- Convert from_type to to_type
- Grammar search algorithm
- Generate report for clustering
- The Jacobi iteration method
- Convert coordinates to a polynomial
- Generate a power solution
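To give a concrete flavour of the helpers summarized above, here is a minimal Caesar-cipher decryption sketch; the function name and signature are hypothetical, not the repository's exact implementation:

```python
def caesar_decrypt(ciphertext: str, shift: int) -> str:
    """Shift each letter back by `shift`, leaving other characters untouched."""
    result = []
    for ch in ciphertext:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            result.append(chr((ord(ch) - base - shift) % 26 + base))
        else:
            result.append(ch)
    return "".join(result)

print(caesar_decrypt("Khoor, Zruog!", 3))  # Hello, World!
```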

## Trending Discussions on Python

QUESTION

I am trying to get a Flask and Docker application to work, but when I run it with my `docker-compose up` command in my Visual Studio terminal, it gives me `ImportError: cannot import name 'json' from itsdangerous`. I have tried to look for possible solutions to this problem, but as of right now there are not many, here or anywhere else. The only two solutions I could find suggest changing the installed versions of MarkupSafe and itsdangerous: https://serverfault.com/questions/1094062/from-itsdangerous-import-json-as-json-importerror-cannot-import-name-json-fr and https://github.com/aws/aws-sam-cli/issues/3661. I have also tried installing the packages in a virtual environment named `veganetworkscriptenv`, but that failed as well. I am currently using Flask 2.0.0 and Docker 5.0.0, and the error occurs on line eight of vegamain.py.

Here is the full ImportError that I get when I try and run the program:

```
veganetworkscript-backend-1 | Traceback (most recent call last):
veganetworkscript-backend-1 |   File "/app/vegamain.py", line 8, in <module>
veganetworkscript-backend-1 |     from flask import Flask
veganetworkscript-backend-1 |   File "/usr/local/lib/python3.9/site-packages/flask/__init__.py", line 19, in <module>
veganetworkscript-backend-1 |     from . import json
veganetworkscript-backend-1 |   File "/usr/local/lib/python3.9/site-packages/flask/json/__init__.py", line 15, in <module>
veganetworkscript-backend-1 |     from itsdangerous import json as _json
veganetworkscript-backend-1 | ImportError: cannot import name 'json' from 'itsdangerous' (/usr/local/lib/python3.9/site-packages/itsdangerous/__init__.py)
veganetworkscript-backend-1 exited with code 1
```

Here are my requirements.txt, vegamain.py, Dockerfile, and docker-compose.yml files:

requirements.txt:

```
Flask==2.0.0
Flask-SQLAlchemy==2.4.4
SQLAlchemy==1.3.20
Flask-Migrate==2.5.3
Flask-Script==2.0.6
Flask-Cors==3.0.9
requests==2.25.0
mysqlclient==2.0.1
pika==1.1.0
wolframalpha==4.3.0
```

vegamain.py:

```
# Veganetwork (C) TetraSystemSolutions 2022
# all rights are reserved.
#
# Author: Trevor R. Blanchard Feb-19-2022-Jul-30-2022
#
# get our imports in order first
from flask import Flask  # <-- error occurs here!!!

# start the application through flask.
app = Flask(__name__)

# if set to true will return only a "Hello World" string.
Debug = True

# start a route to the index part of the app in flask.
@app.route('/')
def index():
    if Debug:
        return 'Hello World!'
    else:
        pass

# start the flask app here --->
if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0')
```

Dockerfile:

```
FROM python:3.9
ENV PYTHONUNBUFFERED 1
WORKDIR /app
COPY requirements.txt /app/requirements.txt
RUN pip install -r requirements.txt
COPY . /app
```

docker-compose.yml:

```
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    command: 'python vegamain.py'
    ports:
      - 8004:5000
    volumes:
      - .:/app
    depends_on:
      - db
#  queue:
#    build:
#      context: .
#      dockerfile: Dockerfile
#    command: 'python -u consumer.py'
#    depends_on:
#      - db
  db:
    image: mysql:5.7.22
    restart: always
    environment:
      MYSQL_DATABASE: admin
      MYSQL_USER: root
      MYSQL_PASSWORD: root
      MYSQL_ROOT_PASSWORD: root
    volumes:
      - .dbdata:/var/lib/mysql
    ports:
      - 33069:3306
```

How exactly can I fix this? Thank you!

ANSWER

Answered 2022-Feb-20 at 12:31

I was facing the same issue while running Docker containers with Flask. I downgraded `Flask` to `1.1.4` and `markupsafe` to `2.0.1`, which solved my issue.

Check this for reference.
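If you prefer to stay on Flask 2.x instead of downgrading, another workaround reported in the links above is to pin the two transitive dependencies explicitly in `requirements.txt`; the exact versions below are an assumption based on those reports, not something tested against this project:

```
Flask==2.0.0
itsdangerous==2.0.1
MarkupSafe==2.0.1
```

After changing the pins, rebuild the image with `docker-compose build --no-cache` so the old packages are not reused from a cached layer.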

QUESTION

Here are two measurements:

```
timeit.timeit('"toto"=="1234"', number=100000000)
1.8320042459999968
timeit.timeit('"toto"=="toto"', number=100000000)
1.4517491540000265
```

As you can see, comparing two strings that match is faster than comparing two strings of the same length that do not match. This is quite disturbing: during a string comparison, I believed that Python tested strings character by character, so `"toto"=="toto"` should take longer to test than `"toto"=="1234"`, since it requires four character tests against one for the non-matching comparison. Maybe the comparison is hash-based, but in that case the timings should be the same for both comparisons.

Why?

ANSWER

Answered 2022-Mar-30 at 11:57

Combining my comment and the comment by @khelwood:

**TL;DR:** Analysing the bytecode for the two comparisons reveals that the two `'time'` string literals are bound to the same object. Therefore, an up-front *identity check* (at C level) is the reason for the increased comparison speed.

The reason for the same-object assignment is that, as an *implementation detail*, CPython interns strings which contain only 'name characters' (i.e. alpha and underscore characters). This is what enables the identity check.
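Interning is easy to observe from Python itself. This is a sketch of CPython-specific behavior: literal interning is an implementation detail, while `sys.intern` forces a shared object even for strings that would not be interned automatically:

```python
import sys

a = "time"
b = "time"
print(a is b)   # same object: CPython interns name-like literals

c = sys.intern("not a name!")
d = sys.intern("not a name!")
print(c is d)   # explicit interning guarantees a shared object
```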

**Bytecode:**

```
import dis
In [24]: dis.dis("'time'=='time'")
1 0 LOAD_CONST 0 ('time') # <-- same object (0)
2 LOAD_CONST 0 ('time') # <-- same object (0)
4 COMPARE_OP 2 (==)
6 RETURN_VALUE
In [25]: dis.dis("'time'=='1234'")
1 0 LOAD_CONST 0 ('time') # <-- different object (0)
2 LOAD_CONST 1 ('1234') # <-- different object (1)
4 COMPARE_OP 2 (==)
6 RETURN_VALUE
```

**Assignment Timing:**

The 'speed-up' can also be seen when using assignment for the timing tests. Assigning (and comparing) two variables bound to the same string is faster than assigning (and comparing) two variables bound to different strings, further supporting the hypothesis that the underlying logic performs an object comparison. This is confirmed in the next section.

```
In [26]: timeit.timeit("x='time'; y='time'; x==y", number=1000000)
Out[26]: 0.0745926329982467
In [27]: timeit.timeit("x='time'; y='1234'; x==y", number=1000000)
Out[27]: 0.10328884399496019
```

**Python source code:**

As helpfully provided by @mkrieger1 and @Masklinn in their comments, the source code in `unicodeobject.c` performs a pointer comparison first and, if the pointers are equal, returns immediately.

```
int
_PyUnicode_Equal(PyObject *str1, PyObject *str2)
{
    assert(PyUnicode_CheckExact(str1));
    assert(PyUnicode_CheckExact(str2));
    if (str1 == str2) {  // <-- Here
        return 1;
    }
    if (PyUnicode_READY(str1) || PyUnicode_READY(str2)) {
        return -1;
    }
    return unicode_compare_eq(str1, str2);
}
```

**Appendix:**

- Reference answer nicely illustrating how to read the disassembled bytecode output. Courtesy of @Delgan
- Reference answer which nicely describes CPython's string interning. Courtesy of @ShadowRanger

QUESTION

I saw a video about the speed of loops in Python, which explained that `sum(range(N))` is much faster than manually looping through `range` and adding the variables together, since the former runs in C thanks to built-in functions, while in the latter the summation is done in (slow) Python. I was curious what happens when `numpy` is added to the mix. As I expected, `np.sum(np.arange(N))` is the fastest, but `sum(np.arange(N))` and `np.sum(range(N))` are even slower than the naive for loop.

Why is this?

Here's the script I used to test, with some comments about the supposed causes of the slowdowns where I know them (taken mostly from the video), and the results I got on my machine (Python 3.10.0, NumPy 1.21.2):

**updated script:**

```
import numpy as np
from timeit import timeit

N = 10_000_000
repetition = 10

def sum0(N=N):
    s = 0
    i = 0
    while i < N:  # condition is checked in python
        s += i
        i += 1    # both additions are done in python
    return s

def sum1(N=N):
    s = 0
    for i in range(N):  # increment in C
        s += i          # addition in python
    return s

def sum2(N=N):
    return sum(range(N))  # everything in C

def sum3(N=N):
    return sum(list(range(N)))

def sum4(N=N):
    return np.sum(range(N))  # very slow np.array conversion

def sum5(N=N):
    # much faster np.array conversion
    return np.sum(np.fromiter(range(N), dtype=int))

def sum5v2_(N=N):
    # much faster np.array conversion
    return np.sum(np.fromiter(range(N), dtype=np.int_))

def sum6(N=N):
    # possibly slow conversion to Py_long from np.int
    return sum(np.arange(N))

def sum7(N=N):
    # list returns a list of np.int-s
    return sum(list(np.arange(N)))

def sum7v2(N=N):
    # tolist conversion to python int seems faster than the implicit conversion
    # in sum(list()) (tolist returns a list of python int-s)
    return sum(np.arange(N).tolist())

def sum8(N=N):
    return np.sum(np.arange(N))  # everything in numpy (fortran libblas?)

def sum9(N=N):
    return np.arange(N).sum()  # remove dispatch overhead

def array_basic(N=N):
    return np.array(range(N))

def array_dtype(N=N):
    return np.array(range(N), dtype=np.int_)

def array_iter(N=N):
    # np.sum's source code mentions to use fromiter to convert from generators
    return np.fromiter(range(N), dtype=np.int_)

print(f"while loop: {timeit(sum0, number=repetition)}")
print(f"for loop: {timeit(sum1, number=repetition)}")
print(f"sum_range: {timeit(sum2, number=repetition)}")
print(f"sum_rangelist: {timeit(sum3, number=repetition)}")
print(f"npsum_range: {timeit(sum4, number=repetition)}")
print(f"npsum_iterrange: {timeit(sum5, number=repetition)}")
print(f"npsum_iterrangev2: {timeit(sum5v2_, number=repetition)}")  # was timing sum5 twice
print(f"sum_arange: {timeit(sum6, number=repetition)}")
print(f"sum_list_arange: {timeit(sum7, number=repetition)}")
print(f"sum_arange_tolist: {timeit(sum7v2, number=repetition)}")
print(f"npsum_arange: {timeit(sum8, number=repetition)}")
print(f"nparangenpsum: {timeit(sum9, number=repetition)}")
print(f"array_basic: {timeit(array_basic, number=repetition)}")
print(f"array_dtype: {timeit(array_dtype, number=repetition)}")
print(f"array_iter: {timeit(array_iter, number=repetition)}")
print(f"npsumarangeREP: {timeit(lambda: sum8(N/1000), number=100000*repetition)}")
print(f"npsumarangeREP: {timeit(lambda: sum9(N/1000), number=100000*repetition)}")
# Example output:
#
# while loop: 11.493371912998555
# for loop: 7.385945574002108
# sum_range: 2.4605720699983067
# sum_rangelist: 4.509678105998319
# npsum_range: 11.85120212900074
# npsum_iterrange: 4.464334709002287
# npsum_iterrangev2: 4.498494338993623
# sum_arange: 9.537815956995473
# sum_list_arange: 13.290120724996086
# sum_arange_tolist: 5.231948580003518
# npsum_arange: 0.241889145996538
# nparangenpsum: 0.21876695199898677
# array_basic: 11.736577274998126
# array_dtype: 8.71628468400013
# array_iter: 4.303306431000237
# npsumarangeREP: 21.240833958996518
# npsumarangeREP: 16.690092379001726
```

ANSWER

Answered 2021-Oct-16 at 17:42

From the CPython source code for `sum`: it initially attempts a fast path that assumes all inputs are the same type, and if that fails it simply iterates:

```
/* Fast addition by keeping temporary sums in C instead of new Python objects.
Assumes all inputs are the same type. If the assumption fails, default
to the more general routine.
*/
```

I'm not entirely certain what is happening under the hood, but it is likely the repeated creation/conversion of C types to Python objects that causes these slowdowns. It's worth noting that both `sum` and `range` are implemented in C.

This next bit is not really an answer to the question, but I wondered if we could speed up `sum` for Python `range`s, as `range` is quite a smart object.

To do this I've used `functools.singledispatch` to override the built-in `sum` function specifically for the `range` type, then implemented a small function to calculate the sum of an arithmetic progression.

```
from functools import singledispatch

def sum_range(range_, /, start=0):
    """Overloaded `sum` for range, compute arithmetic sum"""
    n = len(range_)
    if not n:
        return start
    return int(start + (n * (range_[0] + range_[-1]) / 2))

sum = singledispatch(sum)
sum.register(range, sum_range)

def test():
    """
    >>> sum(range(0, 100))
    4950
    >>> sum(range(0, 10, 2))
    20
    >>> sum(range(0, 9, 2))
    20
    >>> sum(range(0, -10, -1))
    -45
    >>> sum(range(-10, 10))
    -10
    >>> sum(range(-1, -100, -2))
    -2500
    >>> sum(range(0, 10, 100))
    0
    >>> sum(range(0, 0))
    0
    >>> sum(range(0, 100), 50)
    5000
    >>> sum(range(0, 0), 10)
    10
    """

if __name__ == "__main__":
    import doctest
    doctest.testmod()
```

I'm not sure if this is complete, but it's definitely faster than looping.

QUESTION

Versions: pip 21.2.4, Python 3.6.

The command:

```
pip install -r requirements.txt
```

The content of my `requirements.txt`:

```
mongoengine==0.19.1
numpy==1.16.2
pylint
pandas==1.1.5
fawkes
```

The command is failing with this error

```
ERROR: Command errored out with exit status 1:
command: /Users/*/Desktop/ml/*/venv/bin/python -c 'import io, os, sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/setup.py'"'"'; __file__='"'"'/private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/setup.py'"'"';f = getattr(tokenize, '"'"'open'"'"', open)(__file__) if os.path.exists(__file__) else io.StringIO('"'"'from setuptools import setup; setup()'"'"');code = f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-pip-egg-info-97994d6e
cwd: /private/var/folders/kn/0y92g7x55qs7c42tln4gwhtm0000gp/T/pip-install-soh30mel/mongoengine_89e68f8427244f1bb3215b22f77a619c/
Complete output (1 lines):
error in mongoengine setup command: use_2to3 is invalid.
----------------------------------------
WARNING: Discarding https://*/pypi/packages/mongoengine-0.19.1.tar.gz#md5=68e613009f6466239158821a102ac084 (from https://*/pypi/simple/mongoengine/). Command errored out with exit status 1: python setup.py egg_info Check the logs for full command output.
ERROR: Could not find a version that satisfies the requirement mongoengine==0.19.1 (from versions: 0.15.0, 0.19.1)
ERROR: No matching distribution found for mongoengine==0.19.1
```

ANSWER

Answered 2021-Nov-19 at 13:30

It looks like `setuptools>=58` breaks support for `use_2to3`, so you should pin `setuptools` to `setuptools<58` or avoid packages that use `use_2to3` in their setup parameters.

(I was having the same problem with `pip==19.3.1`.)
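A minimal way to apply that workaround inside the virtualenv (version pin as suggested above) is:

```
pip install "setuptools<58"
pip install -r requirements.txt
```

Alternatively, a newer `mongoengine` release that no longer declares `use_2to3` in its setup may also avoid the error.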

QUESTION

I have an array of positive integers. For example:

```
[1, 7, 8, 4, 2, 1, 4]
```

A "reduction operation" finds the array prefix with the highest average and deletes it. Here, an array prefix means a contiguous subarray whose left end is the start of the array, such as `[1]` or `[1, 7]` or `[1, 7, 8]` above. Ties are broken by taking the longer prefix.

```
Original array: [ 1, 7, 8, 4, 2, 1, 4]
Prefix averages: [1.0, 4.0, 5.3, 5.0, 4.4, 3.8, 3.9]
-> Delete [1, 7, 8], with maximum average 5.3
-> New array -> [4, 2, 1, 4]
```

I will repeat the reduction operation until the array is empty:

```
[1, 7, 8, 4, 2, 1, 4]
^ ^
[4, 2, 1, 4]
^ ^
[2, 1, 4]
^ ^
[]
```

Now, actually performing these array modifications isn't necessary; I'm only looking for the list of lengths of prefixes that *would be deleted* by this process, for example `[3, 1, 3]` above.

What is an efficient algorithm for computing these prefix lengths?

The naive approach is to recompute all sums and averages from scratch in every iteration, for an `O(n^2)` algorithm (I've attached Python code for this below). I'm looking for any improvement on this approach, most preferably a solution below `O(n^2)`, but an algorithm with the same complexity and better constant factors would also be helpful.

Here are a few of the things I've tried (without success):

- Dynamically maintaining prefix sums, for example with a Binary Indexed Tree. While I can easily update prefix *sums* or find a maximum prefix *sum* in `O(log n)` time, I haven't found any data structure which can update the *average*, as the denominator in the average is changing.
- Reusing the previous 'rankings' of prefix averages. These rankings can change: e.g. in some array, the prefix ending at index `5` may have a larger average than the prefix ending at index `6`, but after removing the first 3 elements, the prefix ending at index `2` may have a *smaller* average than the one ending at `3`.
- Looking for patterns in where prefixes end; for example, the rightmost element of any max-average prefix is always a local maximum in the array, but it's not clear how much this helps.

This is a working Python implementation of the naive, quadratic method:

```
import math
from fractions import Fraction
from typing import List, Tuple

def find_array_reductions(nums: List[int]) -> List[int]:
    """Return list of lengths of max average prefix reductions."""

    def max_prefix_avg(arr: List[int]) -> Tuple[float, int]:
        """Return value and length of max average prefix in arr."""
        if len(arr) == 0:
            return (-math.inf, 0)
        best_length = 1
        best_average = Fraction(0, 1)
        running_sum = 0
        for i, x in enumerate(arr, 1):
            running_sum += x
            new_average = Fraction(running_sum, i)
            if new_average >= best_average:
                best_average = new_average
                best_length = i
        return (float(best_average), best_length)

    removed_lengths = []
    total_removed = 0
    while total_removed < len(nums):
        _, new_removal = max_prefix_avg(nums[total_removed:])
        removed_lengths.append(new_removal)
        total_removed += new_removal
    return removed_lengths
```

Edit: The originally published code had a rare error with large inputs, caused by using Python's `math.isclose()` with default parameters for floating-point comparison rather than exact fraction comparison. This has been fixed in the current code. An example of the error can be found at this Try it online link, along with a foreword explaining exactly what causes the bug, if you're curious.
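The difference is easy to demonstrate: `Fraction` arithmetic is exact, while the equivalent `float` arithmetic is not:

```python
from fractions import Fraction

# Exact rational arithmetic: no rounding error
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True

# The same comparison with floats fails due to binary rounding
print(0.1 + 0.2 == 0.3)  # False
```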

ANSWER

Answered 2022-Feb-27 at 22:44

This problem has a fun O(n) solution.

If you draw a graph of cumulative sum vs index, then:

The average value in the subarray between any two indexes is the slope of the line between those points on the graph.

The first highest-average-prefix will end at the point that makes the highest angle from 0. The next highest-average-prefix must then have a *smaller* average, and it will end at the point that makes the highest angle from the first ending. Continuing to the end of the array, we find that...

These segments of highest average are exactly the segments in the **upper convex hull of the cumulative sum graph**.

Find these segments using the monotone chain algorithm. Since the points are already sorted, it takes O(n) time.

```
# Lengths of the segments in the upper convex hull
# of the cumulative sum graph
def upperSumHullLengths(arr):
    if len(arr) < 2:
        if len(arr) < 1:
            return []
        else:
            return [1]

    hull = [(0, 0), (1, arr[0])]
    for x in range(2, len(arr) + 1):
        # this has x coordinate x-1
        prevPoint = hull[len(hull) - 1]
        # next point in cumulative sum
        point = (x, prevPoint[1] + arr[x - 1])
        # remove points not on the convex hull
        while len(hull) >= 2:
            p0 = hull[len(hull) - 2]
            dx0 = prevPoint[0] - p0[0]
            dy0 = prevPoint[1] - p0[1]
            dx1 = x - prevPoint[0]
            dy1 = point[1] - prevPoint[1]
            if dy1 * dx0 < dy0 * dx1:
                break
            hull.pop()
            prevPoint = p0
        hull.append(point)

    return [hull[i + 1][0] - hull[i][0] for i in range(0, len(hull) - 1)]

print(upperSumHullLengths([1, 7, 8, 4, 2, 1, 4]))
```

prints:

```
[3, 1, 3]
```

QUESTION

I am making a simple Docker image of my Python Django app, but at the end of building the container it throws the following warning (I am building on Ubuntu 20.04):

```
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead
```

Why does it throw this warning if I am installing Python requirements inside my image? I am building my image using:

```
sudo docker build -t my_app:1 .
```

Should I be worried about warning that pip throws, because I know it can break my system?

Here is my Dockerfile:

```
FROM python:3.8-slim-buster
WORKDIR /app
COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

ANSWER

Answered 2021-Aug-29 at 08:12

The way your container is built doesn't add a user, so everything is done as root.

You could create a user and install to that user's home directory by doing something like this:

```
FROM python:3.8.3-alpine
RUN pip install --upgrade pip
RUN adduser -D myuser
USER myuser
WORKDIR /home/myuser
COPY --chown=myuser:myuser requirements.txt requirements.txt
RUN pip install --user -r requirements.txt
ENV PATH="/home/myuser/.local/bin:${PATH}"
COPY --chown=myuser:myuser . .
CMD ["python", "manage.py", "runserver", "0.0.0.0:8000"]
```

QUESTION

I need to calculate the square root of some numbers, for example `√9 = 3` and `√2 = 1.4142`. How can I do it in Python?

The inputs will probably all be positive integers, and relatively small (say less than a billion), but just in case they're not, is there anything that might break?

**Related**

- Integer square root in python
- Is there a short-hand for nth root of x in Python?
- Difference between **(1/2), math.sqrt and cmath.sqrt?
- Why is math.sqrt() incorrect for large numbers?
- Python sqrt limit for very large numbers?
- Which is faster in Python: x**.5 or math.sqrt(x)?
- Why does Python give the "wrong" answer for square root? (specific to Python 2)
- calculating n-th roots using Python 3's decimal module
- How can I take the square root of -1 using python? (focused on NumPy)
- Arbitrary precision of square roots

*Note: This is an attempt at a canonical question after a discussion on Meta about an existing question with the same title.*

ANSWER

Answered 2022-Feb-04 at 19:44

**`math.sqrt()`**

The `math` module from the standard library has a `sqrt` function to calculate the square root of a number. It takes any type that can be converted to `float` (which includes `int`) as an argument and returns a `float`.

```
>>> import math
>>> math.sqrt(9)
3.0
```

The power operator (`**`) or the built-in `pow()` function can also be used to calculate a square root. Mathematically speaking, the square root of `a` equals `a` to the power of `1/2`.

The power operator requires numeric types and matches the conversion rules for binary arithmetic operators, so in this case it will return either a `float` or a `complex` number.

```
>>> 9 ** (1/2)
3.0
>>> 9 ** .5 # Same thing
3.0
>>> 2 ** .5
1.4142135623730951
```

(Note: in Python 2, `1/2` is truncated to `0`, so you have to force floating-point arithmetic with `1.0/2` or similar. See Why does Python give the "wrong" answer for square root?)

This method can be generalized to nth roots, though fractions that can't be exactly represented as a `float` (like 1/3 or any fraction whose denominator is not a power of 2) may cause some inaccuracy:

```
>>> 8 ** (1/3)
2.0
>>> 125 ** (1/3)
4.999999999999999
```

Exponentiation works with negative numbers and complex numbers, though the results have some slight inaccuracy:

```
>>> (-25) ** .5 # Should be 5j
(3.061616997868383e-16+5j)
>>> 8j ** .5 # Should be 2+2j
(2.0000000000000004+2j)
```

Note the parentheses around `-25`! Otherwise it's parsed as `-(25**.5)`, because exponentiation binds more tightly than unary negation.

Meanwhile, `math` is only built for floats, so for `x<0`, `math.sqrt(x)` will raise `ValueError: math domain error`, and for complex `x` it'll raise `TypeError: can't convert complex to float`. Instead, you can use `cmath.sqrt(x)`, which is more accurate than exponentiation (and will likely be faster too):

```
>>> import cmath
>>> cmath.sqrt(-25)
5j
>>> cmath.sqrt(8j)
(2+2j)
```

Both options involve an implicit conversion to `float`, so floating-point precision is a factor. For example:

```
>>> n = 10**30
>>> square = n**2
>>> x = square**.5
>>> x == n
False
>>> x - n # how far off are they?
0.0
>>> int(x) - n # how far off is the float from the int?
19884624838656
```

Very large numbers might not even fit in a float, and you'll get `OverflowError: int too large to convert to float`. See Python sqrt limit for very large numbers?

Let's look at `Decimal` for example. Exponentiation fails unless the exponent is also a `Decimal`:

```
>>> decimal.Decimal('9') ** .5
Traceback (most recent call last):
File "", line 1, in
TypeError: unsupported operand type(s) for ** or pow(): 'decimal.Decimal' and 'float'
>>> decimal.Decimal('9') ** decimal.Decimal('.5')
Decimal('3.000000000000000000000000000')
```

Meanwhile, `math` and `cmath` will silently convert their arguments to `float` and `complex` respectively, which could mean loss of precision.

`decimal` also has its own `.sqrt()`. See also calculating n-th roots using Python 3's decimal module.
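For completeness, a short sketch of `decimal`'s own square root with a raised precision (the precision value here is arbitrary):

```python
from decimal import Decimal, getcontext

getcontext().prec = 30   # 30 significant digits
root = Decimal(2).sqrt()
print(root)              # 1.41421356237309504880168872421
```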

QUESTION

I have a Dockerfile that currently only installs pip-tools:

```
FROM python:3.9
RUN pip install --upgrade pip && \
pip install pip-tools
COPY ./ /root/project
WORKDIR /root/project
ENTRYPOINT ["tail", "-f", "/dev/null"]
```

I build and open a shell in the container using the following commands:

```
docker build -t brunoapi_image .
docker run --rm -ti --name brunoapi_container --entrypoint bash brunoapi_image
```

Then, when I try to run `pip-compile` inside the container, I get this very weird error (full traceback):

```
root@727f1f38f095:~/project# pip-compile
Traceback (most recent call last):
  File "/usr/local/bin/pip-compile", line 8, in <module>
    sys.exit(cli())
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1128, in __call__
    return self.main(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1053, in main
    rv = self.invoke(ctx)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 1395, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/usr/local/lib/python3.9/site-packages/click/core.py", line 754, in invoke
    return __callback(*args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/click/decorators.py", line 26, in new_func
    return f(get_current_context(), *args, **kwargs)
  File "/usr/local/lib/python3.9/site-packages/piptools/scripts/compile.py", line 342, in cli
    repository = PyPIRepository(pip_args, cache_dir=cache_dir)
  File "/usr/local/lib/python3.9/site-packages/piptools/repositories/pypi.py", line 106, in __init__
    self._setup_logging()
  File "/usr/local/lib/python3.9/site-packages/piptools/repositories/pypi.py", line 455, in _setup_logging
    assert isinstance(handler, logging.StreamHandler)
AssertionError
```

I have no clue what's going on and I've never seen this error before. Can anyone shed some light into this?

Running on macOS Monterey

ANSWER

Answered 2022-Feb-05 at 16:30

It is a bug; you can downgrade using:

`pip install "pip<22"`

QUESTION

After upgrading to Django 4.0, I get the following error when running `python manage.py runserver`:

```
...
  File "/path/to/myproject/myproject/urls.py", line 16, in <module>
    from django.conf.urls import url
ImportError: cannot import name 'url' from 'django.conf.urls' (/path/to/my/venv/lib/python3.9/site-packages/django/conf/urls/__init__.py)
```

My urls.py is as follows:

```
from django.conf.urls import url
from myapp.views import home

urlpatterns = [
    url(r'^$', home, name="home"),
    url(r'^myapp/', include('myapp.urls')),
]
```

ANSWER

Answered 2022-Jan-10 at 21:38

`django.conf.urls.url()` was deprecated in Django 3.0 and removed in Django 4.0+.

The easiest fix is to replace `url()` with `re_path()`. `re_path` uses regexes like `url`, so you only have to update the import and replace `url` with `re_path`.

```
from django.urls import include, re_path
from myapp.views import home

urlpatterns = [
    re_path(r'^$', home, name='home'),
    re_path(r'^myapp/', include('myapp.urls')),
]
```

Alternatively, you could switch to using `path()`. `path()` does not use regexes, so you'll have to update your URL patterns if you switch to it.

```
from django.urls import include, path
from myapp.views import home

urlpatterns = [
    path('', home, name='home'),
    path('myapp/', include('myapp.urls')),
]
```

If you have a large project with many URL patterns to update, you may find the django-upgrade library useful for updating your `urls.py` files.

QUESTION

This code:

```
a = [1, 2, 3]
print(*a, a.pop(0))
```

Python 3.8 prints `2 3 1` (does the `pop` before unpacking).

Python 3.9 prints `1 2 3 1` (does the `pop` after unpacking).

What caused the change? I didn't find it in the changelog.

Edit: Not just in function calls but also for example in a list display:

```
a = [1, 2, 3]
b = [*a, a.pop(0)]
print(b)
```

This prints `[2, 3, 1]` vs `[1, 2, 3, 1]`. And Expression lists says *"The expressions are evaluated from left to right"* (that's the link to the Python 3.8 documentation), so I'd expect the unpacking expression to happen first.

ANSWER

Answered 2022-Feb-04 at 21:21

I suspect this may have been an accident, though I prefer the new behavior.

The new behavior is a consequence of a change to how the bytecode for `*` arguments works. The change is in the changelog under Python 3.9.0 alpha 3:

bpo-39320: Replace four complex bytecodes for building sequences with three simpler ones.

The following four bytecodes have been removed:

- BUILD_LIST_UNPACK
- BUILD_TUPLE_UNPACK
- BUILD_SET_UNPACK
- BUILD_TUPLE_UNPACK_WITH_CALL
The following three bytecodes have been added:

- LIST_TO_TUPLE
- LIST_EXTEND
- SET_UPDATE

On Python 3.8, the bytecode for `f(*a, a.pop())` looks like this:

```
1 0 LOAD_NAME 0 (f)
2 LOAD_NAME 1 (a)
4 LOAD_NAME 1 (a)
6 LOAD_METHOD 2 (pop)
8 CALL_METHOD 0
10 BUILD_TUPLE 1
12 BUILD_TUPLE_UNPACK_WITH_CALL 2
14 CALL_FUNCTION_EX 0
16 RETURN_VALUE
```

while on 3.9, it looks like this:

```
1 0 LOAD_NAME 0 (f)
2 BUILD_LIST 0
4 LOAD_NAME 1 (a)
6 LIST_EXTEND 1
8 LOAD_NAME 1 (a)
10 LOAD_METHOD 2 (pop)
12 CALL_METHOD 0
14 LIST_APPEND 1
16 LIST_TO_TUPLE
18 CALL_FUNCTION_EX 0
20 RETURN_VALUE
```

In the old bytecode, the code pushes `a` and `(a.pop(),)` onto the stack, then unpacks those two iterables into a tuple. In the new bytecode, the code pushes a list onto the stack, then does `l.extend(a)` and `l.append(a.pop())`, then calls `tuple(l)`.

This change has the effect of shifting the unpacking of `a` to before the `pop` call, but this doesn't seem to have been deliberate. Looking at bpo-39320, the intent was to simplify the bytecode instructions, not to change the behavior, and the bpo thread has no discussion of behavior changes.
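The behavior difference is easy to check directly; on Python 3.9 or later this prints the new, left-to-right result:

```python
a = [1, 2, 3]
b = [*a, a.pop(0)]
print(b)  # [1, 2, 3, 1] on Python 3.9+, [2, 3, 1] on 3.8
```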

Community Discussions, Code Snippets contain sources that include Stack Exchange Network
