arbitrary-precision: Arbitrary precision
Community Discussions
Trending Discussions on arbitrary-precision
QUESTION
I was facing an issue installing Laravel on my Ubuntu machine. Please help me.
...ANSWER
Answered 2020-Sep-15 at 16:55

I used this and it works for me.
QUESTION
For example in C#, C++, Java or JavaScript effective int size is 32 bits. If we want to calculate some large number, for example, 70 bits, we should use some software features (Arbitrary-precision arithmetic).
Python has a very tricky boundless internal integer representation, and I cannot figure out what the most efficient int size for integer arithmetic is.
In other words, do we have some int size, say 64 bits, for effective int usage?
Or does it not matter whether it is 16, 32, 64 or some arbitrary bit count, and Python will handle all these ints with the same efficiency?
In short, does Python always use arbitrary-precision arithmetic, or does it use hardware arithmetic for 32/64-bit values?
...ANSWER
Answered 2020-Aug-16 at 17:14

CPython's int, in Python 3, is represented as a sign-magnitude array of values, where each element in the array represents 15 or 30 bits of the magnitude, for 32-bit and 64-bit Python builds respectively. This is an implementation detail, but a longstanding one (it was originally 15 all the time, but it was found to be an easy win to double the size and number of used bits per "digit" in the array when working on 64-bit systems).

It has optimizations for ints that fit into a single (or sometimes two) such array values (it extracts the raw value from the array and performs a single CPU operation, skipping loops and algorithms that apply to the arbitrary-length case), and on 64-bit builds of CPython, that currently means values with a magnitude of 30 bits or less are generally optimized specially (with 60-bit magnitudes occasionally having fast paths).

That said, there's rarely a reason to take this into account; CPython interpreter overhead is pretty high, and it's pretty hard to imagine a scenario where manually breaking down a larger operation into smaller ones (incurring more interpreter overhead to do many small operations at the Python layer) would outweigh the much smaller cost of Python doing the array-based operations (even without special fast paths) at the C layer. The exceptions to this rule would all rely on non-Python-int solutions, such as using fixed-size numpy arrays to vectorize the work, and largely following C rules at that point (since numpy arrays are wrappers around raw C arrays most of the time).
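The digit size the answer describes can be inspected at runtime. A small sketch using only the standard library (sys.int_info is documented CPython behavior; the specific values shown are for a 64-bit build):

```python
import sys

# CPython exposes its internal "digit" size: 30 bits on 64-bit builds,
# 15 bits on 32-bit builds.
print(sys.int_info.bits_per_digit)

# Arithmetic is seamless across the single-digit fast path and the
# general multi-digit path: the same operators handle both.
small = (1 << 29) - 1      # fits in one 30-bit digit on 64-bit builds
large = (1 << 70) + 1      # needs several digits
print(small + 1 == 1 << 29)                              # True
print(large * large == (1 << 140) + (1 << 71) + 1)       # True
```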
QUESTION
If I understand correctly, the return value of floor division is always a whole number, even if the dividend and/or divisor are not whole numbers, so why does it not always return an integer?
It's detrimental in my case because converting from a large float to an int instead of having the return value as an arbitrary-precision integer obviously loses precision.
I can't see any function that does float floor division to return an integer. Obviously I could make a function to do so, e.g. by multiplying both values by the same amount so that they're both integers, but it would be a lot slower than a C implementation.
Here's an example: 5.2 // 2 is 2.0, not 2.
ANSWER
Answered 2020-Aug-02 at 19:04

In answer to your question why?, it is by design, and the rationale for this is in PEP 238:
Floor division will be implemented in all the Python numeric types, and will have the semantics of:
a // b == floor(a/b)
except that the result type will be the common type into which a and b are coerced before the operation.
Specifically, if a and b are of the same type, a//b will be of that type too. If the inputs are of different types, they are first coerced to a common type using the same rules used for all other arithmetic operators....
For floating point inputs, the result is a float. For example:
3.5//2.0 == 1.0
For complex numbers, // raises an exception, since floor() of a complex number is not allowed.
This PEP dates back to Python 2.2. I've suppressed a paragraph that discusses the now obsolete distinction between int and long.
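The behavior described above, and the exact integer floor the asker wants, can be sketched with the standard library; Fraction("5.2") parses the decimal string exactly, which is one way to avoid float rounding entirely:

```python
import math
from fractions import Fraction

print(5.2 // 2)             # 2.0: float inputs give a float result, per PEP 238
print(math.floor(5.2 / 2))  # 2: math.floor returns an int in Python 3

# For large values where float division would lose precision, exact
# rationals give an arbitrary-precision integer floor:
q = Fraction("5.2") // 2    # Fraction.__floordiv__ returns a plain int
print(q, type(q).__name__)  # 2 int
```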
QUESTION
What is the easiest way to do arbitrary-precision (integral) arithmetic in APL?
Any known libraries? Or are you supposed to “inline” the operations (and how)?
...ANSWER
Answered 2019-Nov-11 at 21:44

Dyalog APL includes the operator big in the dfns workspace, which allows arbitrary-precision arithmetic:
QUESTION
This has kinda been asked, but not in this way. I have a little Python program which finds continued fractions for square roots of n (1 <= n <= 10000).
I have been trying to do this in Julia and I can't see how to. Mainly because it deals with irrational numbers (sqrt(x) is irrational if x is not a perfect square, e.g. sqrt(2) = 1.414213...). So I don't think I can use the rational class.
It says here https://docs.julialang.org/en/latest/manual/integers-and-floating-point-numbers/#Arbitrary-Precision-Arithmetic-1 that Julia can do arbitrary precision arithmetic with BigFloats. But they don't seem to be accurate enough.
I have also tried to use PyCall and the Decimals package in Python (from Julia), but get weird errors (I can post them if they would be helpful).
Here is my Python program which works. And my question is how to do this in Julia please?
...ANSWER
Answered 2017-Aug-20 at 19:32

Just like with Python's decimal.Decimal, you can configure the precision of Julia's BigFloat:
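The answer's Julia snippet is not preserved on this page, but the decimal.Decimal side of the comparison can be sketched in Python. Here the precision is raised so sqrt stays accurate through a continued-fraction computation; the 50-digit setting and the 5-term loop are arbitrary choices for illustration:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50           # 50 significant digits, chosen for illustration
root = Decimal(2).sqrt()
print(root)                      # 1.4142135623730950488...

# First few continued-fraction terms of sqrt(2): [1; 2, 2, 2, ...]
x, terms = root, []
for _ in range(5):
    a = int(x)                   # integer part is the next term
    terms.append(a)
    x = 1 / (x - a)              # recurse on the fractional part
print(terms)                     # [1, 2, 2, 2, 2]
```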
QUESTION
I need to divide a number N by another number D, both bigger than my word length, which is 32 bits.
Currently I'm using the algorithm found here: http://justinparrtech.com/JustinParr-Tech/an-algorithm-for-arbitrary-precision-integer-division/
I'm implementing my solution for the RISC-V ISA.
But in the third step, when Q = N / A, I don't know what to do in the case where the remainder is also a 32-bit number, because usually I would use this remainder for the division of the next word, but if it is the size of the register it's impossible to take it into account.
I've been thinking about how to solve this, but every solution I come up with feels like it's not the best way to do it.
...ANSWER
Answered 2019-May-20 at 04:07

That algorithm is horrible.
The first step should be to determine the sign of the result (from the sign of the numerator and divisor); then find the magnitude of the numerator and divisor (and have short-cuts for the "numerator is 0" and "abs(divisor) is 0 or 1" cases where actually doing a division is avoidable or impossible) so that the code that does the actual division only ever deals with positive numbers.
The second step should be to determine if the divisor is small enough to fit in a single digit (with digits that are in whatever base is the largest that your language/environment supports - e.g. for C with 32-bit integers it might be "base 65536", and for 64-bit 80x86 assembly language (where you can use 128-bit numerators) it might be "base 18446744073709551616"). At this point you branch to one of 2 completely different algorithms.
Small Divisor
This is a relatively trivial "for each digit in numerator { divide digit by divisor to find the digit in the result}" loop (followed by fixing up the sign of the result that you determined at the start).
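That single-digit loop can be sketched as follows. The helper and its names are mine, not from the answer; it works on magnitude-only little-endian digit arrays in base 2**32, mirroring the question's 32-bit word size:

```python
BASE = 1 << 32   # one digit per machine word, matching the question's 32-bit setup

def divmod_small(digits, d):
    """Divide a little-endian digit array by a single-digit divisor d.

    Walks from the most significant digit down, feeding the remainder
    into the next lower digit, like pencil-and-paper long division.
    """
    assert 0 < d < BASE
    quotient = [0] * len(digits)
    rem = 0
    for i in reversed(range(len(digits))):
        cur = rem * BASE + digits[i]    # cur < d * BASE, so cur // d < BASE
        quotient[i], rem = divmod(cur, d)
    return quotient, rem

# Usage: divide 2**80 + 12345 (three 32-bit digits) by 1000
n = 2**80 + 12345
digits = [(n >> (32 * i)) & (BASE - 1) for i in range(3)]
q, r = divmod_small(digits, 1000)
print(r == n % 1000)   # True
```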
Large Divisor
For this I'd use binary division. The idea is to shift the numerator and divisor left/right so that the divisor becomes as large as possible without being larger than the numerator. Then you subtract the divisor from the numerator and set the bit in the result that corresponds to how much you shifted; and repeat, so that (after the initial shifting) it ends up being "shift whatever remains of the numerator left; then compare to the divisor, and if the numerator is larger than the divisor, subtract the divisor from the numerator and set the bit in the result" in a loop that terminates when there's nothing remaining of the numerator.
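A sketch of that shift-and-subtract loop on plain Python ints (the per-limb bookkeeping a real RISC-V implementation needs is omitted; this shows only the algorithm itself):

```python
def binary_divmod(n, d):
    """Restoring shift-and-subtract division of non-negative n by positive d.

    Align d with n's most significant bit, then repeatedly compare,
    subtract and shift, producing one quotient bit per step.
    """
    assert n >= 0 and d > 0
    shift = max(n.bit_length() - d.bit_length(), 0)
    q = 0
    for s in range(shift, -1, -1):
        q <<= 1
        if n >= (d << s):
            n -= d << s
            q |= 1
    return q, n   # n is now the remainder

print(binary_divmod(1000, 7))   # (142, 6)
```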
Off-Topic Alternative
For most cases where you need arbitrary-precision division, it's better to use rational numbers, where each number is stored as three (arbitrarily sized) integers: numerator, divisor and exponent (like number = numerator/divisor * (1 << exponent)). In this case you never need to divide; you only multiply by the reciprocal. This makes it better for performance, but also significantly better for precision (e.g. you can calculate (1/3) * 6 and guarantee that there will be no precision loss).
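Python's fractions module demonstrates the precision argument (Fraction stores just numerator/denominator, without the separate power-of-two exponent the answer adds for performance):

```python
from fractions import Fraction

third = Fraction(1, 3)
print(third * 6)                # 2, exactly: no precision loss with rationals

# Compare with binary floating point, which cannot represent 0.1 exactly:
print(0.1 + 0.2 == 0.3)                                      # False
print(Fraction(1, 10) + Fraction(2, 10) == Fraction(3, 10))  # True
```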
QUESTION
I have a very large Decimal number as a string, it cannot be accurately represented by a float or double, so I cannot use any function that works with numbers, I have to keep my number as string, and do all functions with strings.
For example: PI:
...ANSWER
Answered 2019-Apr-04 at 19:27

Given you need precise decimal places, all of the libraries you mentioned are either out of bounds or will require setting absurd levels of precision to guarantee your results are rounded accurately.

The problem is they're all based on binary floating point, so when you specify a precision, e.g. with mpf_set_default_prec in GMP/MPIR (or the equivalent call in MPFR, a library built on GMP that provides more guarantees about exact, portable precision), you're saying how many bits will be used for the significand (roughly, how many bits of integer precision exist before it is scaled by the exponent). While there is a rough correspondence to decimal precision (using four bits of significand for every decimal digit of precision needed is usually good enough for most purposes), the lack of decimal precision means explicit output rounding is always needed to avoid false precision, and there is always the risk of inconsistent rounding: a number that logically ends in 5 and should be subjected to rounding rules like round-half-even, round-half-up or round-half-down might end up with false precision that makes it appear not to end exactly with 5, so the rounding rules aren't applied.
What you want is a decimal math library, like Java's BigDecimal, Python's decimal, etc. The standard C/C++ library for this is libmpdec, the core of the mpdecimal package (on Python 3.3+, this is the backend for the decimal module, so you can experiment with its behavior in Python before committing to using it in your C/C++ project). Since it's decimal based, you have guaranteed precision levels; if you set the precision to 4 and the mode to ROUND_HALF_EVEN, then parsing "XX.X45" will consistently produce "XX.X4", while "XX.X75" will consistently produce "XX.X8" (the final 5 rounds the next digit to even numbers). You can also "quantize" to round to a specific precision after the decimal point (so your precision might be 100, to allow for 100 digits combined on the left and right hand sides of the decimal point, but you can quantize relative to "0.0000" to force rounding of the right hand side of the decimal to exactly four digits).

Point is, if you want to perform exact decimal rounding of strings representing numbers in the reals in C/C++, take a look at libmpdec.
For your particular use case, a Python version of the code (using the moneyfmt recipe, because formatting numbers like "1.1E-26" to avoid scientific notation is a pain) would look roughly like:
QUESTION
We know CPython promotes integers to long integers (which allow arbitrary-precision arithmetic) silently when the number gets bigger.
How can we detect overflow of int and long long in pure C?
ANSWER
Answered 2019-Apr-02 at 07:10

You cannot detect signed int overflow. You have to write your code to avoid it.
Signed int overflow is Undefined Behaviour and if it is present in your program, the program is invalid and the compiler is not required to generate any specific behaviour.
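The standard way to "write your code to avoid it" is to test the operands before the operation. A sketch of that pre-check idiom, written here in Python against simulated 32-bit limits (in C you would use INT_MAX/INT_MIN from <limits.h>; Python's own ints never overflow, so this only models the check):

```python
INT_MAX = 2**31 - 1
INT_MIN = -2**31

def add_would_overflow(a, b):
    """True if a + b would fall outside [INT_MIN, INT_MAX].

    The comparisons themselves can never overflow, because each one
    only moves away from the limit being tested.
    """
    if b > 0:
        return a > INT_MAX - b
    return a < INT_MIN - b

print(add_would_overflow(INT_MAX, 1))    # True
print(add_would_overflow(INT_MAX, 0))    # False
print(add_would_overflow(INT_MIN, -1))   # True
```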
QUESTION
I'm working on a native C++/CLI class which performs integer arithmetic with multiple-precision values. Individual integers are represented by arrays of 64-bit unsigned integers. The sign is represented by a boolean value, negative values are stored with their absolute values, not as two's complements. This makes dealing with sign issues much easier. Currently I'm optimizing the multiplication operation. I've already done several optimization rounds, but still my function requires twice the time of the * operator of two .NET BigInteger values, which shows that there's still considerable potential for further optimization.
Before asking for help, let me show you what I've already tried. My first attempt was a naive approach: multiply pairs of all 64-bit items using an elementary 64-to-128-bit multiplication, and shift/add the results. I don't show the code here, because it was terribly slow. The next attempt was a recursive divide-and-conquer algorithm, which turned out to be much better. In my implementation, both operands are split recursively in the middle, until two 64-bit values remain. These are multiplied, yielding a 128-bit result. The collected elementary results are shift/added all the way up the recursion layers to yield the final result. This algorithm probably benefits from the fact that far fewer 64-to-128-bit elementary products need to be computed, which seems to be the main bottleneck.
So here's my code. The first snippet shows the top-level entry point:
...ANSWER
Answered 2019-Mar-25 at 13:24

As we can see in the reference source, BigInteger in .NET uses a fairly slow multiplication algorithm: the usual quadratic-time algorithm using 32x32->64 multiplies. But it is written with low overhead: iterative, few allocations, and no calls to non-inlinable ASM procedures. Partial products are added into the result immediately rather than materialized separately.
The non-inlinable ASM procedure can be replaced with the _umul128 intrinsic. The manual carry calculations (both the conditional +1 and determining the output carry) can be replaced by the _addcarry_u64 intrinsic.
Fancier algorithms such as Karatsuba multiplication and Toom-Cook multiplication can be effective, but not when the recursion is done all the way down to the single limb level - that is far past the point where the overhead outweighs the saved elementary multiplications. As a concrete example, this implementation of Java's BigInteger switches to Karatsuba for 80 limbs (2560 bits because they use 32 bit limbs), and to 3-way Toom-Cook for 240 limbs. Given that threshold of 80, with only 64 limbs I would not expect too much gain there anyway, if any.
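The quadratic "accumulate partial products into the result immediately" scheme the answer describes can be sketched on 64-bit limbs; Python's big ints stand in for the _umul128/_addcarry_u64 intrinsics, and the helpers are mine, for illustration:

```python
BASE = 1 << 64   # one limb per 64-bit word, little-endian arrays

def to_limbs(n, count):
    """Split a non-negative int into `count` little-endian 64-bit limbs."""
    return [(n >> (64 * i)) & (BASE - 1) for i in range(count)]

def from_limbs(limbs):
    return sum(v << (64 * i) for i, v in enumerate(limbs))

def mul_limbs(a, b):
    """Quadratic schoolbook multiply, accumulating into the result directly.

    The double-width product ai*bj models _umul128; the running carry
    models _addcarry_u64. t always fits in two limbs, so carry < BASE.
    """
    res = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = res[i + j] + ai * bj + carry
            res[i + j] = t & (BASE - 1)
            carry = t >> 64
        res[i + len(b)] = carry   # this slot is still zero for row i
    return res

x, y = 2**100 + 12345, 2**90 + 678
print(from_limbs(mul_limbs(to_limbs(x, 2), to_limbs(y, 2))) == x * y)   # True
```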
QUESTION
Let's suppose I have a string containing a floating-point number (e.g. "3.14159265358979") and I want to convert it into a floating-point value. How would I go about calculating the exponent and mantissa to get the corresponding floating-point representation?
How would I find a fitting exponent for the number I'm trying to convert? How would I calculate the corresponding mantissa value to represent the float?
PS: I want to write some code for arbitrary-precision float calculations.
...ANSWER
Answered 2019-Mar-14 at 17:13

First, remember the sign (+ or −) and change the number to its absolute value.
Normalize the significand to be in [1, 2) and calculate the exponent:
If the number is greater than two, divide it by two until it is in [1, 2). (That is the interval from 1 to 2 that includes 1 but not 2.) The number of times you divided will be the exponent.
If the number is less than one, multiply it by two until it is in [1, 2). The exponent will be the negative of the number of times you multiplied.
If the number is already in [1, 2), the exponent is zero. Convert the number in [1, 2) that you ended up with to binary.
Then round the number to fit in the significand of the floating-point format: round it to as many bits as the significand allows. If this pushes the number up to two, divide it by two and add one to the exponent.
Now you have the sign, the exponent, and the significand.
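The steps above can be sketched exactly using rationals, so no precision is lost before the final rounding. A 53-bit double significand is assumed, and round() on a Fraction conveniently rounds half to even, matching the IEEE 754 default:

```python
from fractions import Fraction

def decompose(s, sig_bits=53):
    """Return (sign, exponent, significand) for a nonzero decimal string s.

    significand is an integer in [2**(sig_bits-1), 2**sig_bits), so the
    value is sign * significand / 2**(sig_bits-1) * 2**exponent.
    Zero, subnormals and exponent overflow are ignored in this sketch.
    """
    x = Fraction(s)            # exact: no precision lost while parsing
    sign = -1 if x < 0 else 1
    x = abs(x)
    assert x != 0
    e = 0
    while x >= 2:              # normalize into [1, 2), counting the exponent
        x /= 2
        e += 1
    while x < 1:
        x *= 2
        e -= 1
    m = round(x * 2**(sig_bits - 1))   # round() on a Fraction is half-even
    if m == 2**sig_bits:               # rounding pushed the value up to 2.0
        m //= 2
        e += 1
    return sign, e, m

sign, e, m = decompose("3.14159265358979")
rebuilt = Fraction(sign * m, 2**52) * Fraction(2)**e
print(rebuilt == Fraction(float("3.14159265358979")))   # True: matches float()
```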
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.