expr | Fast and lightweight math expression evaluator in C99 | Math library
kandi X-RAY | expr Summary
Expr is a mathematical expression evaluator written in C. It takes a string as input and returns a floating-point number as the result.
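A minimal usage sketch (assuming the expr_create / expr_eval / expr_destroy API shown in the project's README; treat the exact signatures as an assumption, they may differ between versions):

#include <stdio.h>
#include <string.h>
#include "expr.h"   /* assumption: the single-header library from this repository */

int main(void) {
    struct expr_var_list vars = {0};   /* storage for expression variables */
    const char *s = "x = 40, x + 2";   /* the expression string to evaluate */

    /* last argument NULL: no user-defined functions registered */
    struct expr *e = expr_create(s, strlen(s), &vars, NULL);
    if (e == NULL) {
        printf("syntax error\n");
        return 1;
    }

    float result = expr_eval(e);       /* evaluates to 42.0 */
    printf("result: %f\n", result);

    expr_destroy(e, &vars);            /* free the AST and the variable list */
    return 0;
}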
Community Discussions
Trending Discussions on expr
QUESTION
TL;DR: I am looking for a C++14 equivalent of the following C++20 MWE:
...ANSWER
Answered 2022-Mar-04 at 07:43: Yes. You can SFINAE the conversion operator:
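The original MWE is not shown above; as a generic illustration, here is a hypothetical Wrapper type whose conversion operator is constrained with std::enable_if_t (available since C++14) instead of a C++20 requires-clause:

#include <type_traits>

struct Wrapper {
    int value = 0;

    // C++20 would constrain this with a requires-clause; in C++14 we
    // SFINAE via a defaulted non-type template parameter instead.
    template <class T,
              std::enable_if_t<std::is_arithmetic<T>::value, int> = 0>
    operator T() const {
        return static_cast<T>(value);
    }
};

int main() {
    Wrapper w{42};
    double d = w;          // OK: T = double is arithmetic
    // std::string s = w;  // ill-formed: the operator is SFINAEd away
    return d > 0 ? 0 : 1;
}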
QUESTION
This question is the data.table equivalent of Pass a data.frame column name to a function.
Suppose I have a very simple data.table:
...ANSWER
Answered 2022-Feb-26 at 00:32:

new_column_byref <- function(df, col_name, expr) {
  # capture the unquoted column name as a string
  col_name <- deparse(substitute(col_name))
  # evaluate the captured expression within df, then add the column by reference
  set(df, j = col_name, value = eval(substitute(expr), df, parent.frame()))
}
dat <- data.table(x = 1:4, y = 5:8)
new_column_byref(dat, z, x + y)[]
x y z
1: 1 5 6
2: 2 6 8
3: 3 7 10
4: 4 8 12
QUESTION
With the parent-child relationships data frame as below:
ANSWER
Answered 2022-Feb-25 at 08:17: We can use ego like below
QUESTION
(Note! This question particularly covers the state of C++14, before the introduction of inline variables in C++17)
TLDR; Question - What constitutes odr-use of a constexpr variable used in the definition of an inline function, such that multiple definitions of the function violate [basic.def.odr]/6?
(... likely [basic.def.odr]/3; but could this silently introduce UB in a program as soon as, say, the address of such a constexpr variable is taken in the context of the inline function's definition?)
TLDR example: does a program where doMath() is defined as follows:
ANSWER
Answered 2021-Sep-08 at 16:34: In the OP's example with std::max, an ODR violation does indeed occur, and the program is ill-formed NDR. To avoid this issue, you might consider one of the following fixes:

- give the doMath function internal linkage, or
- move the declaration of kTwo inside doMath

A variable that is used by an expression is considered to be odr-used unless there is a certain kind of simple proof that the reference to the variable can be replaced by the compile-time constant value of the variable without changing the result of the expression. If such a simple proof exists, then the standard requires the compiler to perform such a replacement; consequently the variable is not odr-used (in particular, it does not require a definition, and the issue described by the OP would be avoided because none of the translation units in which doMath is defined would actually reference a definition of kTwo). If the expression is too complicated, however, then all bets are off. The compiler might still replace the variable with its value, in which case the program may work as you expect; or the program may exhibit bugs or crash. That's the reality with IFNDR programs.
The case where the variable is immediately passed by reference to a function, with the reference binding directly, is one common case where the variable is used in a way that is too complicated and the compiler is not required to determine whether or not it may be replaced by its compile-time constant value. This is because doing so would necessarily require inspecting the definition of the function (such as std::max in this example).
You can "help" the compiler by writing int(kTwo)
and using that as the argument to std::max
as opposed to kTwo
itself; this prevents an odr-use since the lvalue-to-rvalue conversion is now immediately applied prior to calling the function. I don't think this is a great solution (I recommend one of the two solutions that I previously mentioned) but it has its uses (GoogleTest uses this in order to avoid introducing odr-uses in statements like EXPECT_EQ(2, kTwo)
).
If you want to know more about how to understand the precise definition of odr-use, involving "potential results of an expression e...", that would be best addressed with a separate question.
QUESTION
I'm parsing a language that doesn't have statement terminators like ;. Expressions are defined as the longest sequence of tokens, so 5-5 has to be parsed as a subtraction, not as two statements (a literal 5 followed by a unary negation -5).
I'm using LALRPOP as the parser generator (despite the name, it is LR(1) instead of LALR, afaik). LALRPOP doesn't have precedence attributes and doesn't prefer shift over reduce by default like yacc would do. I think I understand how regular operator precedence is encoded in an LR grammar by building a "chain" of rules, but I don't know how to apply that to this issue.
The expected parses would be (individual statements in brackets):
...ANSWER
Answered 2022-Jan-04 at 06:17: The issue you're going to have to confront is how to deal with function calls. I can't really give you any concrete advice based on your question, because the grammar you provide lacks any indication of the intended syntax of function calls, but the hint that print(5) is a valid statement makes it clear that there are two distinct situations, which need to be handled separately.
Consider:
QUESTION
In the vec! macro implementation there is this rule:
ANSWER
Answered 2021-Dec-18 at 21:03: Let's go step by step to see how <[_]>::into_vec(box [$($x),+]) produces a Vec:

- [$($x),+] expands to an array of the input elements: [1, 2, 3]
- box ... puts that into a Box. box expressions are nightly-only syntax sugar for Box::new: box 5 is syntax sugar for Box::new(5) (actually it's the other way around: internally Box::new uses box, which is implemented in the compiler)
- <[_]>::into_vec(...) calls the into_vec method on a slice containing elements that have an inferred type ([_]). Wrapping the [_] in angle brackets is needed for syntactic reasons to call a method on a slice type. And into_vec is a function that takes a boxed slice and produces a Vec:
QUESTION
I was looking at the vec![] macro implementation in Rust and noticed it uses the __rust_force_expr! macro. This is the implementation of the latter:
ANSWER
Answered 2021-Dec-18 at 13:05: It doesn't have any effect on how the macro is used; it only serves to improve the quality of error messages when the macro is used incorrectly, by telling the compiler that the output of the macro is always a single expression, not an item or multiple expressions.
The specific error that this was added to improve was for using vec![] in a pattern match, which is invalid (you can't structurally match on a Vec):
QUESTION
I have downloaded the street abbreviations from USPS. Here is the data:
...ANSWER
Answered 2021-Nov-03 at 10:26: Here is the benchmarking for the existing answers to the OP's question (borrowing test data from @Marek Fiołka but with n <- 10000).
QUESTION
Note 2 to [expr.const]/2 implies that if we have a variable o such that:

the full-expression of its initialization is a constant expression when interpreted as a constant-expression, except that if o is an object, that full-expression may also invoke constexpr constructors for o and its subobjects even if those objects are of non-literal class types

then:

Within this evaluation, std::is_constant_evaluated() [...] returns true.
Consider:
...ANSWER
Answered 2021-Sep-17 at 23:18: The full quote here is

A variable or temporary object o is constant-initialized if

- (2.1) either it has an initializer or its default-initialization results in some initialization being performed, and
- (2.2) the full-expression of its initialization is a constant expression when interpreted as a constant-expression, except that if o is an object, that full-expression may also invoke constexpr constructors for o and its subobjects even if those objects are of non-literal class types. [Note 2: Such a class can have a non-trivial destructor. Within this evaluation, std::is_constant_evaluated() ([meta.const.eval]) returns true. — end note]
The tricky bit here is that the term "is constant-initialized" (note: not "has constant initialization") doesn't mean anything by itself (it probably should be renamed to something else). It's used in exactly three other places, two of which I'll quote below, and the last one ([dcl.constexpr]/6) isn't really relevant.
[expr.const]/4:

A constant-initialized potentially-constant variable V is usable in constant expressions at a point P if V's initializing declaration D is reachable from P and [...].
[basic.start.static]/2:
Constant initialization is performed if a variable or temporary object with static or thread storage duration is constant-initialized ([expr.const]).
Let's replace "constant-initialized" with something less confusing, like "green".
So
- A green potentially-constant variable is usable in constant expressions if [some conditions are met]
- Constant initialization is performed if a variable or temporary object with static or thread storage duration is green.
Outside of these two cases, the greenness of a variable doesn't matter. You can still compute whether it is green, but that property has no effect. It's an academic exercise.
Now go back to the definition of greenness, which says that a variable or temporary object is green if (among other things) "the full-expression of its initialization is a constant expression when interpreted as a constant-expression", with some exceptions. And the note says that during this hypothetical evaluation to determine the green-ness of the variable, is_constant_evaluated() returns true, which is entirely correct.
So going back to your example:
QUESTION
I want to convert RAW image data (RGGB) to an sRGB image. There are many specialized ways to do this, but to first understand the basics, I've implemented some simple algorithms like debayering by resolution reduction. My current pipeline is:
- Rescale the u16 input data by blacklevel and whitelevel
- Apply white balance coefficients
- Debayer with size reduction, average for G: g=((g0+g1)/2)
- Calculate pseudo-inverse for D65 illuminant XYZ_TO_CAM (from Adobe DNG)
- Convert debayered RGB data to XYZ by CAM_TO_XYZ
- Convert XYZ to D65 sRGB (matrix taken from Bruce Lindbloom)
- Apply gamma correction (simple routine for now, should be replaced by sRGB gamma)
- Rescale from [minval..maxval] to [0..1] and convert f32 to u16
- Save as tiff
The problem is that if I skip the white balance coefficient multiplication (or just replace the coefficients with 1.0), the output image already looks acceptable. If I apply the coefficients (taken from AsShot in the DNG), the output has a huge color cast. And I'm not sure if I have to multiply by coef or 1/coef.
The first image is the result of the pipeline with wb_coefs set to 1.0.
The second image is the result with the "correct" wb_coefs.
What is wrong in my pipeline?
Additional question:
- I'm not sure about the rescaling process. Do I have to rescale into [0..1] after every step, or is it enough to rescale during the u16 conversion as the final stage?
Full code:
...ANSWER
Answered 2021-Aug-17 at 10:40: The main reason for getting wrong colors is that we have to normalize the rows of the rgb2cam matrix to 1, as described in the following guide.
According to DNG spec:
ColorMatrix1 defines a transformation matrix that converts XYZ values to reference camera native color space values, under the first calibration illuminant.
It means that if the calibration illuminant is D65, the ColorMatrix converts XYZ to "camera RGB".
(Convert it as is, without using any white balance scaling coefficients).
- The inverse ColorMatrix converts from "camera RGB" to XYZ. After converting XYZ to sRGB, the result is color-balanced sRGB.
The conclusion is that ColorMatrix includes the white balance coefficients in it (the white balancing coefficients apply the D65 illuminant).

- Normalizing the rows of rgb2cam to 1 neutralizes the white balance coefficients, and keeps only the "Color Correction Matrix" (the math is a bit complicated).
- Without normalizing the rows, we are scaling by white balance multipliers two times:
- Scale coefficients from ColorMatrix that balance the input to D65.
- Scale coefficients taken from AsShotNatural that balance the input to the illuminant of the scene (the illuminant of the scene is close to D65).

The result of scaling twice is an extreme color cast.
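As a concrete illustration of the normalization step, a short sketch (hypothetical 3x3 matrix type; the idea itself is language-agnostic):

#include <array>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Divide each row of rgb2cam by its row sum so that every row sums to 1.
// This cancels the white-balance scaling baked into the ColorMatrix and
// keeps only the color-correction part.
Mat3 normalize_rows(Mat3 m) {
    for (auto& row : m) {
        double sum = row[0] + row[1] + row[2];
        if (sum != 0.0) {
            for (auto& v : row) {
                v /= sum;
            }
        }
    }
    return m;
}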
Tracking the maximum in order to avoid "magenta cast in the highlights":
Instead of tracking the actual maximum color values in the input image, we are supposed to track the "theoretical maximum color value":

- Take whitelevel - blacklevel and scale it by the white balance multipliers. Track the result...
The guiding rule is that the colors are supposed to be the same in both cases:

- Applying the processing to small patches of the image, and placing the patches together (where we can't track the global minimum and maximum).
- Applying the processing to the entire image.
I suppose you have to track the maximum of the scaled whitelevel - blacklevel, only when the white balance multipliers are less than 1. When all the multipliers are 1 or above, we can clip the result to 1.0 without tracking the maximum.
Note: there is probably an advantage to scaling down and tracking the maximum, but I don't know this subject well. In my solution we just multiply up (above 1.0) and clip the result.
The solution is based on the Processing RAW Images in MATLAB guide. I am posting both a MATLAB implementation and a Python implementation (but no Rust implementation).
The first step is extracting the raw Bayer image from sample.dng using the dcraw command line:
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities: No vulnerabilities reported.