multiple-precision | A multiple-precision arithmetic program for the second-semester "Algorithms and Data Structures" course (class 3J) at Nagano National College of Technology
kandi X-RAY | multiple-precision Summary
A multiple-precision arithmetic program for the second-semester "Algorithms and Data Structures" course (class 3J) at Nagano National College of Technology
Community Discussions
QUESTION
I am following a tutorial on multiple-precision arithmetic in Python.
Ultimately I would like a numpy array holding floats of arbitrarily
high precision, and I need to invert that matrix.
Therefore we have:
...ANSWER
Answered 2020-Oct-24 at 11:17
I managed to invert a matrix of very high-precision numbers with mpmath, which provides many built-in math functions as well as a matrix class. Thanks for the comments!
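The approach the answer describes can be sketched as follows. This is a minimal illustration, not the answerer's actual code; the 2x2 matrix and the 50-digit precision setting are assumptions chosen for the example:

```python
from mpmath import mp, matrix, eye, mnorm

mp.dps = 50  # work with 50 significant decimal digits

A = matrix([[1, 2], [3, 4]])
A_inv = A**-1  # mpmath inverts the matrix via LU decomposition

# The product should equal the identity to roughly 50 digits.
residual = mnorm(A * A_inv - eye(2))
print(residual)
```

Unlike numpy, whose float64 entries are fixed at 53 bits of precision, every entry of an mpmath matrix is an arbitrary-precision mpf, so the inverse is accurate to the working precision set via mp.dps.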
QUESTION
I'm working on a native C++/CLI class that performs integer arithmetic on multiple-precision values. Individual integers are represented as arrays of 64-bit unsigned integers. The sign is represented by a boolean; negative values are stored by their absolute values rather than as two's complement, which makes sign handling much easier. Currently I'm optimizing the multiplication operation. I've already done several optimization rounds, but my function still takes about twice as long as the * operator on two .NET BigInteger values, which shows there is still considerable potential for further optimization.
Before asking for help, let me show you what I've already tried. My first attempt was a naive approach: multiply pairs of all 64-bit items using an elementary 64-to-128-bit multiplication, and shift/add the results. I don't show the code here, because it was terribly slow. The next attempt was a recursive divide-and-conquer algorithm, which turned out to be much better. In my implementation, both operands are split recursively in the middle until two 64-bit values remain; these are multiplied, yielding a 128-bit result. The collected elementary results are shift/added all the way up the recursion layers to yield the final result. This algorithm probably benefits from the fact that far fewer 64-to-128-bit elementary products need to be computed, which seems to be the main bottleneck.
So here's my code. The first snippet shows the top-level entry point:
...ANSWER
Answered 2019-Mar-25 at 13:24
As we can see in the reference source, BigInteger in .NET uses a fairly slow multiplication algorithm: the usual quadratic-time algorithm built on 32x32->64 multiplies. But it is written with low overhead: iterative, few allocations, and no calls to non-inlinable ASM procedures. Partial products are added into the result immediately rather than materialized separately.
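That pattern - quadratic schoolbook multiplication where each 32x32->64 partial product is folded into the result as soon as it is computed - can be sketched in Python. This is a hypothetical illustration of the technique, not the actual reference-source code:

```python
def schoolbook_mul(a, b):
    """Multiply two little-endian lists of 32-bit limbs.

    Each 32x32->64 partial product is added into the result
    immediately, so no partial products are materialized. The
    intermediate sum r[i+j] + a[i]*b[j] + carry is at most
    (2**32-1) + (2**32-1)**2 + (2**32-1) = 2**64 - 1,
    so it always fits in 64 bits.
    """
    MASK32 = (1 << 32) - 1
    r = [0] * (len(a) + len(b))
    for i, ai in enumerate(a):
        carry = 0
        for j, bj in enumerate(b):
            t = r[i + j] + ai * bj + carry  # fits in 64 bits
            r[i + j] = t & MASK32
            carry = t >> 32
        r[i + len(b)] = carry
    return r
```

The same carry structure carries over to a 64-bit-limb C++ version, where the 64x64->128 multiply and the carry propagation become the intrinsics discussed next.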
The non-inlinable ASM procedure can be replaced with the _umul128 intrinsic, and the manual carry calculations (both the conditional +1 and determining the output carry) can be replaced by the _addcarry_u64 intrinsic.
Fancier algorithms such as Karatsuba multiplication and Toom-Cook multiplication can be effective, but not when the recursion is carried all the way down to the single-limb level - that is far past the point where the overhead outweighs the saved elementary multiplications. As a concrete example, this implementation of Java's BigInteger switches to Karatsuba at 80 limbs (2560 bits, since it uses 32-bit limbs), and to 3-way Toom-Cook at 240 limbs. Given that threshold of 80, with only 64 limbs I would not expect too much gain there anyway, if any.
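The threshold idea can be sketched in Python for non-negative operands (sign handling would live elsewhere, as in the asker's sign-magnitude design). The 80-limb cutoff mirrors the Java BigInteger figure quoted above; the function name and the use of Python's builtin * as a stand-in for schoolbook multiplication below the cutoff are illustrative assumptions:

```python
THRESHOLD = 80  # limbs; below this, schoolbook multiplication wins

def karatsuba(x: int, y: int, limb_bits: int = 64) -> int:
    """Karatsuba multiply for non-negative ints, recursing only
    while the operands exceed THRESHOLD limbs."""
    n = max(x.bit_length(), y.bit_length())
    if n <= THRESHOLD * limb_bits:
        return x * y  # base case: stand-in for schoolbook multiply

    # Split both operands at the same limb-aligned position.
    half = (n // (2 * limb_bits)) * limb_bits
    mask = (1 << half) - 1
    x1, x0 = x >> half, x & mask
    y1, y0 = y >> half, y & mask

    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    # Middle term from three multiplies instead of four:
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0
    return (z2 << (2 * half)) + (z1 << half) + z0
```

Each level trades one elementary multiplication for a handful of additions and shifts, which is why the trade only pays off above a sizeable limb count; recursing below the threshold just adds overhead.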
Community Discussions, Code Snippets contain sources that include Stack Exchange Network