float | GAP package for floating-point support in GAP | Math library
kandi X-RAY | float Summary
This is the README file for the GAP package "Float". This package implements arbitrary-precision floating-point numbers, based on the C libraries MPFR, MPFI, MPC, FPLLL and CXSC. The package is distributed in source form and requires nothing other than a running GAP 4.7 or later. For updates, check the package website at
float Key Features
float Examples and Code Snippets
apt-get install libmpfr-dev libmpfi-dev libmpc-dev libfplll-dev
brew install mpfr mpfi libmpc fplll
def celsius_to_fahrenheit(celsius: float, ndigits: int = 2) -> float:
    """
    Convert a given value from Celsius to Fahrenheit and round it
    to ndigits decimal places.
    Wikipedia reference: https://en.wikipedia.org/wiki/Celsius
    """
    # Body reconstructed from the truncated snippet using the standard formula.
    return round((float(celsius) * 9 / 5) + 32, ndigits)
function floatAsBinaryString(floatNumber, byteLength) {
  let numberAsBinaryString = '';
  const arrayBuffer = new ArrayBuffer(byteLength);
  const dataView = new DataView(arrayBuffer);
  const byteOffset = 0;
  const littleEndian = false;
  // Remainder reconstructed from the truncated snippet: write the float
  // into the buffer, then concatenate each byte as 8 binary digits.
  if (byteLength === 4) dataView.setFloat32(byteOffset, floatNumber, littleEndian);
  else dataView.setFloat64(byteOffset, floatNumber, littleEndian);
  for (let i = 0; i < byteLength; i += 1) {
    numberAsBinaryString += dataView.getUint8(i).toString(2).padStart(8, '0');
  }
  return numberAsBinaryString;
}
def _scalar(tf_fn, x, promote_to_float=False):
  """Computes tf_fn(x) for each element in `x`.

  Args:
    tf_fn: function that takes a single Tensor argument.
    x: array_like. Could be an ndarray, a Tensor or any object that can be
      converted to a Tensor (docstring completed from the truncated snippet).
    promote_to_float: whether to cast integer inputs to a floating-point
      type before applying `tf_fn`.
  """
Community Discussions
Trending Discussions on float
QUESTION
It seems that JDK 8 and JDK 13 produce different floating-point results.

On JDK 8, using Math, I get:
ANSWER
Answered 2022-Mar-20 at 18:16

This seems to be caused by a JVM intrinsic function for Math.cos, which is described in the related issue JDK-8242461. The behavior experienced there is not considered an issue:

The returned results reported in this bug are indeed adjacent floating-point values [this is the case here as well]
[...]
Therefore, while it is possible one or the other of the returned values is outside of the accuracy bounds, just having different return values for Math.cos is not in and of itself evidence of a problem. For reproducible results, use StrictMath.cos instead.

And indeed, disabling the intrinsic using -XX:+UnlockDiagnosticVMOptions -XX:DisableIntrinsic=_dcos (as proposed in the linked issue) causes Math.cos to return the same (expected) result as StrictMath.cos. So the behavior you are seeing here is most likely compliant with the Math documentation as well.
QUESTION
I've created a new Java project in IntelliJ with Gradle that uses Java 17. When running my app it fails with the error Cause: error: invalid source release: 17.

My Settings

I've installed openjdk-17 through IntelliJ and set it as my Project SDK. The Project language level has been set to 17 - Sealed types, always-strict floating-point semantics. In Modules -> Sources I've set the Language level to Project default (17 - Sealed types, always-strict floating-point semantics). In Modules -> Dependencies I've set the Module SDK to Project SDK openjdk-17. In Settings -> Build, Execution, Deployment -> Compiler -> Java Compiler I've set the Project bytecode version to 17.

Gradle

...

ANSWER
Answered 2021-Oct-24 at 14:23

The message typically means that your JAVA_HOME environment variable points to a different Java version.

Here are the steps to follow:
- Close IntelliJ IDEA
- Open a terminal window and check your JAVA_HOME variable value:
  - *nix systems: echo $JAVA_HOME
  - Windows systems: echo %JAVA_HOME%
- If JAVA_HOME points to a different path, set it to the openjdk-17 path:
  - *nix systems: export JAVA_HOME=/path/to/openjdk-17
  - Windows systems: set JAVA_HOME=path\to\openjdk-17
- Open your project again in IntelliJ IDEA
- Make sure to set both source and target compatibility versions (not only sourceCompatibility)

You should be able to build your project.
EDIT: Gradle Toolchain

You may also need to instruct Gradle to use a JVM different from the one it runs on itself, by setting the Java plugin toolchain to your target version:
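The original snippet is elided; a minimal sketch of such a toolchain block in a Groovy-DSL build.gradle (assuming the standard java plugin is applied) would be:

```groovy
java {
    toolchain {
        // Compile and run with a JDK 17 toolchain, independent of the
        // JVM that Gradle itself runs on.
        languageVersion = JavaLanguageVersion.of(17)
    }
}
```

With this in place, Gradle locates (or, with toolchain auto-provisioning, downloads) a matching JDK for compilation.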
QUESTION
Is there a way to obtain the greatest value representable by the floating-point type float which is smaller than 1?
ANSWER
Answered 2022-Mar-08 at 23:51

You can use the std::nextafter function, which, despite its name, can retrieve the next representable value that is arithmetically before a given starting point, by using an appropriate to argument (often -Infinity, 0, or +Infinity).

This works portably by definition of nextafter, regardless of what floating-point format your C++ implementation uses (binary vs. decimal, width of the mantissa aka significand, or anything else).

Example: retrieving the closest value less than 1 for the double type (on Windows, using the clang-cl compiler in Visual Studio 2019), the answer differs from the result of the 1 - ε calculation (which, as discussed in the comments, is incorrect for IEEE 754 numbers; below any power of 2, representable numbers are twice as close together as above it):
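The answer's C++ example is elided above; the same IEEE 754 behaviour can be sketched with Python's math.nextafter (3.9+), which exposes the same operation (a sketch, not the answer's original code):

```python
import math
import sys

# Largest double strictly below 1.0: step from 1.0 toward 0.0.
below_one = math.nextafter(1.0, 0.0)

# The naive 1 - epsilon overshoots: below a power of two, the spacing of
# representable doubles halves, so 1.0's lower neighbour is only
# epsilon/2 away, not a full epsilon.
naive = 1.0 - sys.float_info.epsilon

print(below_one == 1.0 - sys.float_info.epsilon / 2)  # True
print(below_one > naive)                              # True
```

The same two-argument pattern works toward any direction, e.g. math.nextafter(0.0, 1.0) gives the smallest positive subnormal.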
QUESTION
I have an array of positive integers. For example:
...

ANSWER
Answered 2022-Feb-27 at 22:44

This problem has a fun O(n) solution.

If you draw a graph of cumulative sum vs. index, then:
- The average value in the subarray between any two indexes is the slope of the line between those points on the graph.
- The first highest-average prefix will end at the point that makes the highest angle from 0. The next highest-average prefix must then have a smaller average, and it will end at the point that makes the highest angle from the first ending point. Continuing to the end of the array, we find that...
- These segments of highest average are exactly the segments in the upper convex hull of the cumulative-sum graph.

Find these segments using the monotone chain algorithm. Since the points are already sorted by x, it takes O(n) time.
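A sketch of that approach in Python (the function name and details are mine, not from the answer): the points are (index, cumulative sum), and the standard monotone-chain pop condition keeps only the upper hull.

```python
from itertools import accumulate

def best_average_splits(a):
    """Split `a` into consecutive segments of non-increasing maximal average
    by taking the upper convex hull of the cumulative-sum graph.
    O(n), since the points are already sorted by x."""
    # Points (index, cumulative sum), with the origin (0, 0) as a sentinel.
    pts = [(0, 0)] + list(enumerate(accumulate(a), start=1))
    hull = []  # indices of pts that form the upper hull
    for i, (x, y) in enumerate(pts):
        # Pop the last hull point while it lies on or below the line from
        # its predecessor to the new point (non-right turn).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = pts[hull[-2]], pts[hull[-1]]
            if (x2 - x1) * (y - y1) >= (y2 - y1) * (x - x1):
                hull.pop()
            else:
                break
        hull.append(i)
    # Consecutive hull vertices delimit the highest-average segments.
    return [a[lo:hi] for lo, hi in zip(hull, hull[1:])]
```

For example, best_average_splits([1, 7, 3, 1, 2]) yields [[1, 7], [3], [1, 2]], whose averages 4, 3, 1.5 are non-increasing.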
QUESTION
Let's say we have these 3 classes:
...

ANSWER
Answered 2022-Jan-13 at 21:04

It is ambiguous for two reasons:
- both overloads are applicable, and;
- neither overload is more specific than the other.

Notice that both the f(int, A) overload and the f(float, B) overload can be called with the arguments (i, b), since there is an implicit conversion from int to float, and an implicit conversion from B to A.

What happens when there is more than one applicable method? Java is supposed to choose the most specific method. This is described in §15.12.2.5 of the language spec. It turns out that neither of these overloads is more specific than the other.

One applicable method m1 is more specific than another applicable method m2, for an invocation with argument expressions e1, ..., ek, if any of the following are true:
- m2 is generic [...]
- m2 is not generic, and m1 and m2 are applicable by strict or loose invocation, and where m1 has formal parameter types S1, ..., Sn and m2 has formal parameter types T1, ..., Tn, the type Si is more specific than Ti for argument ei for all i (1 ≤ i ≤ n, n = k).
- m2 is not generic, and m1 and m2 are applicable by variable arity invocation [...]

Only the second point applies to the two overloads of f. For one of the overloads to be more specific than the other, every parameter type of one overload has to be more specific than the corresponding parameter type in the other overload.

A type S is more specific than a type T for any expression if S <: T (§4.10).

Note that "<:" is the subtyping relationship. B is clearly a subtype of A. float, however, is a supertype (not subtype!) of int. This can be derived from the direct subtyping relations listed in §4.10.1. Therefore, neither of the overloads is more specific than the other.

The language spec goes on to talk about maximally specific methods, which doesn't really apply to f here. Finally, it says:

Otherwise, the method invocation is ambiguous, and a compile-time error occurs.
QUESTION
I am trying to understand overload resolution in C++ through the books listed here. One such example that I wrote to test my understanding, whose output I am unable to explain, is given below.

...

ANSWER
Answered 2022-Jan-25 at 17:19

Essentially (skipping over some stuff not relevant in this case), overload resolution is done to choose the user-defined conversion function used to initialize the variable, and (because there are no other differences between the conversion operators) the best viable one is chosen based on the rank of the standard conversion sequence required to convert the conversion operator's return value to the variable's type.

The conversion int -> double is a floating-integral conversion, which has rank Conversion. The conversion float -> double is a floating-point promotion, which has rank Promotion. The rank Promotion is better than the rank Conversion, and so overload resolution will choose operator float as the best viable overload.

The conversion int -> long double is also a floating-integral conversion. The conversion float -> long double is not a floating-point promotion (which only applies to the conversion float -> double). It is instead a floating-point conversion, which has rank Conversion. Both sequences now have the same standard-conversion-sequence rank, and none of the tie-breakers (which I won't go through) applies, so overload resolution is ambiguous.

The conversion int -> bool is a boolean conversion, which has rank Conversion. The conversion float -> bool is also a boolean conversion. Therefore the same situation as above arises.

See https://en.cppreference.com/w/cpp/language/overload_resolution#Ranking_of_implicit_conversion_sequences and https://en.cppreference.com/w/cpp/language/implicit_conversion for a full list of the conversion categories and ranks.

Although it might seem that a conversion between floating-point types should be considered "better" than a conversion from an integral to a floating-point type, this is generally not the case.
QUESTION
In the following code, struct A has two implicit conversion operators, to char and to int, and an instance of the struct is compared for equality against the integer constant 2:
ANSWER
Answered 2022-Jan-21 at 19:00

This is CWG 507. An example similar to yours was given there, and the submitter explained that according to the standard the overload resolution is ambiguous, even though this result is very counter-intuitive.

Translating to your particular example: when comparing operator==(int, int) and operator==(float, int) to determine which is the better candidate, we have to determine which one has the better implicit conversion sequence for the first argument (obviously, in the second argument, no conversion is required). For the first argument of operator==(int, int), we just use A::operator int. For the first argument of operator==(float, int), there is no way to decide whether to use A::operator int or A::operator char, so we get the "ambiguous conversion sequence". The overload resolution rules say that an ambiguous conversion sequence is no better and no worse than any other user-defined conversion sequence. Therefore, the straightforward conversion from A{} to int (via A::operator int) is not considered better than the ambiguous conversion from A{} to float. This means neither operator== candidate is better than the other.

Clang is apparently following the letter of the standard, whereas GCC and MSVC are probably doing something else because the standard seems to be broken here. "Which compiler is right" depends on your opinion about what the standard should say. There is no proposed resolution on the issues page.

I would suggest removing operator char unless you really, really need it, in which case you will have to think about what else you're willing to give up.
QUESTION
Assuming I want to write a function that accepts any type of number in Python, I can annotate it as follows:
...

ANSWER
Answered 2021-Sep-29 at 20:20

There is no general way to do this. Numbers are not strictly related to begin with, and their types even less so.

While numbers.Number might seem like "the type of numbers", it is not universal. For example, decimal.Decimal is explicitly not a numbers.Number, whether as subclass, subtype or virtual subclass. Specifically for typing, numbers.Number is not endorsed by PEP 484 -- Type Hints.

In order to meaningfully type hint "numbers", one has to explicitly define what numbers are in that context. This might be a pre-existing numeric type set such as int <: float <: complex, a typing.Union/TypeVar of numeric types, a typing.Protocol defining the required operations and algebraic structure, or similar.
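As an illustration of the last two options (the names Numeric and SupportsAdd are mine, not from the answer):

```python
from decimal import Decimal
from typing import Protocol, Union, runtime_checkable

# Option 1: an explicit union of the numeric types you actually support.
Numeric = Union[int, float, complex]

def triple(x: Numeric) -> Numeric:
    return x * 3

# Option 2: a structural protocol -- anything supporting "+" qualifies,
# which admits Decimal and Fraction without naming them.
@runtime_checkable
class SupportsAdd(Protocol):
    def __add__(self, other): ...

def double(x: SupportsAdd):
    return x + x

print(triple(2))               # 6
print(double(Decimal("1.5")))  # 3.0
```

The union is simple but closed; the protocol is open-ended but says nothing about which operations beyond + are available, so in practice you tailor the protocol's methods to what the function actually uses.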
QUESTION
In Haskell (at least with GHC v8.8.4), being in the Num class does NOT imply being in the Eq class:
ANSWER
Answered 2021-Sep-01 at 00:20

Integer literals can't be compared without using Eq. But that's not what is happening, either.

In GHCi, under NoMonomorphismRestriction (which is the default in GHCi nowadays; not sure about GHC 8.8.4), x = 42 results in a variable x of type forall p. Num p => p.1

Then you do y = 43, which similarly results in the variable y having type forall q. Num q => q.2

Then you enter x == y, and GHCi has to evaluate it in order to print True or False. That evaluation cannot be done without picking a concrete type for both p and q (which has to be the same). Each type has its own code for the definition of ==, so there's no way to run the code for == without deciding which type's code to use.3

However, each of x and y can be used as any type in Num (because they have a definition that works for all of them)4. So we can just use (x :: Int) == y and the compiler will determine that it should use the Int definition for ==, or x == (y :: Double) to use the Double definition. We can even do this repeatedly with different types! None of these uses changes the type of x or y; we're just using them each time at one of the (many) types they support.

Without the concept of defaulting, a bare x == y would just produce an "Ambiguous type variable" error from the compiler. The language designers thought that would be extremely common and extremely annoying with numeric literals in particular (because the literals are polymorphic, but as soon as you do any operation on them you need a concrete type). So they introduced rules that some ambiguous type variables should be defaulted to a concrete type if that allows compilation to continue.5
So what is actually happening when you do x == y is that the compiler is just picking Integer to use for x and y in that particular expression, because you haven't given it enough information to pin down any particular type (and because the defaulting rules apply in this situation). Integer has an Eq instance, so it can use that, even though the most general types of x and y don't include the Eq constraint. Without picking something, it couldn't possibly even attempt to call == (and of course the "something" it picks has to be in Eq or it still won't work).

If you turn on -Wtype-defaults (which is included in -Wall), the compiler will print a warning whenever it applies defaulting6, which makes the process more visible.
1 The forall p part is implicit in standard Haskell, because all type variables are automatically introduced with forall at the beginning of the type expression in which they appear. You have to turn on extensions to even write the forall manually; either ExplicitForAll just for the ability to write forall, or any one of the many extensions that actually add functionality that makes forall useful to write explicitly.

2 GHCi will probably pick p again for the type variable, rather than q. I'm just using a different one to emphasise that they're different variables.

3 Technically it's not each type that necessarily has a different ==, but each Eq instance. Some of those instances are polymorphic, so they apply to multiple types, but that only really comes up with types that have some structure (like Maybe a, etc.). Basic types like Int, Integer, Double, Char, and Bool each have their own instance, and each of those instances has its own code for ==.

4 In the underlying system, a type like forall p. Num p => p is in fact much like a function: one that takes a Num instance for a concrete type as a parameter. To get a concrete value you have to first "apply the function" to a type's Num instance, and only then do you get an actual value that can be printed, compared with other things, etc. In standard Haskell these instance parameters are always invisibly passed around by the compiler; some extensions allow you to manipulate this process a little more directly.
This is the root of what's confusing about why x == y works when x and y are polymorphic variables. If you had to explicitly pass around the type/instance arguments, it would be obvious what's going on here, because you would have to manually apply both x and y to something and compare the results.

5 The gist of the defaulting rules is that if the constraints on an ambiguous type variable are:
- all built-in classes, and
- at least one of them is a numeric class (Num, Floating, etc.)

then GHC will try Integer to see if that type checks and allows all other constraints to be resolved. If that doesn't work it will try Double, and if that doesn't work either, it reports an error.

You can set the types it will try with a default declaration (the "default default" being default (Integer, Double)), but you can't customise the conditions under which it will try to default things, so changing the default types is of limited use in my experience.

GHCi, however, comes with extended defaulting rules that are a bit more useful in an interpreter (because it has to do type inference line-by-line instead of on the whole module at once). You can turn those on in compiled code with the ExtendedDefaultRules extension (or turn them off in GHCi with NoExtendedDefaultRules), but again, neither of those options is particularly useful in my experience. It's annoying that the interpreter and the compiler behave differently, but the fundamental difference between module-at-a-time compilation and line-at-a-time interpretation means that switching either's defaulting rules to work consistently with the other is even more annoying. (This is also why NoMonomorphismRestriction is in effect by default in the interpreter now; the monomorphism restriction does a decent job of achieving its goals in compiled code but is almost always wrong in interpreter sessions.)

6 You can also use a typed hole in combination with the asTypeOf helper to get GHC to tell you what type it's inferring for a sub-expression like this:
QUESTION
I am confused about these two structures from different tutorials:
...

ANSWER
Answered 2021-Sep-13 at 09:07

With the first you can use either the type alias COMPLEX or struct complex. With the second you have an anonymous structure, which can only be used through the type alias COMPLEX.

That said, in C++ any structure name is also a type name and can be used as a type directly:
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install float
--with-mpfr=<location>
--with-mpfi=<location>
--with-mpc=<location>
--with-fplll=<location>
--with-cxsc=<location>
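Given those options, a typical build sequence might look like the following (a hedged sketch; the prefix paths are illustrative only, substitute the locations of your own library installations):

```shell
# Inside the unpacked float package directory (paths are examples only):
./configure --with-mpfr=/usr --with-mpfi=/usr --with-mpc=/usr
make
```

Options for libraries you did not install can simply be omitted; the corresponding functionality is then unavailable.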