# Eigen | linear algebra: matrices, vectors, numerical solvers | Math library

## kandi X-RAY | eigen Summary


## Community Discussions

Trending Discussions on eigen

QUESTION

Say I have an MxN matrix (SIG) and a list of Nx1 fractional indices (idxt). Each fractional index in idxt corresponds to the column at the same position in SIG. I would like to index the appropriate value in SIG using the indices stored in idxt, take that value, and save it in another Nx1 vector. Since the indices in idxt are fractional, I need to interpolate in SIG. Here is an implementation that uses linear interpolation:

ANSWER

Answered 2022-Jan-19 at 18:55

Theoretically, it *should be possible*, assuming the processor supports this operation. However, in practice, this is not the case, for many reasons.

First of all, mainstream x86-64 processors supporting the AVX-2 (or AVX-512) instruction set do have instructions for that: gather SIMD instructions. Unfortunately, *the instruction set is quite limited*: you can only fetch 32-bit/64-bit values from memory based on 32-bit/64-bit indices. Moreover, this instruction is not very efficiently implemented on mainstream processors yet. Indeed, it fetches every item separately, which is not faster than scalar code, but this can still be useful if the rest of the code is vectorized, since reading many scalar values to fill a SIMD register manually tends to be a bit less efficient (although it was surprisingly faster on old processors due to a quite inefficient early implementation of gather instructions). Note that if the `SIG` matrix is big, then *cache misses* will significantly slow down the code.

Additionally, AVX-2 is not enabled by default on mainstream processors because not all x86-64 processors support it. Thus, you need to *enable AVX-2* (e.g. using `-mavx2`) so compilers can vectorize the loop efficiently. Unfortunately, this is not enough. Indeed, most compilers *currently fail to automatically detect* when this instruction can/should be used. Even if they could, the fact that IEEE-754 floating-point operations are not associative and that values can be infinity or NaN generally does not help them to generate efficient code (although it should be fine here). Note that you can tell your compiler to assume that operations are associative and that you use only finite/basic real numbers (e.g. using `-ffast-math`, which can be unsafe). The same thing applies to Eigen types/operators if compilers fail to completely inline all the functions (which is the case for ICC).

To speed up the code, you can try to change the type of the `SIG` variable to a matrix reference containing `int32_t` items. Another possible optimization is to split the loop into small fixed-size chunks (e.g. 32 items) and compute the indirections in a separate loop, so compilers can vectorize at least some of the loops. Some compilers like Clang are able to do that automatically for you: they generate a fast SIMD implementation for part of the loop and perform the indirections using scalar instructions. If this is not enough (which appears to be the case so far), then you certainly need to vectorize the loop yourself using SIMD intrinsics (or possibly use SIMD libraries that do that for you).

QUESTION

**This is what I have for the plot:**

ANSWER

Answered 2022-Feb-06 at 11:27

Finally, I put this to the side for some time until I got more R savvy. Instead of trying to overcomplicate things, I decided to make a really simple SEM path plot, then apply what was said in the comments here earlier to solve the issue.

Solution

So the major issue I kept having was getting the title to map on. For some reason I couldn't understand what was causing the issue... until I figured out the order of operations for printing out the plot. So here is basically what I did. First I used a well-oiled data frame and wrote a model based off the old lavaan manual:

QUESTION

I have a vector `x` of type `Eigen::VectorXi` with more than 2^31-1 entries, which I would like to return to R. I can do that by copying `x` entry-wise to a new vector of type `Rcpp::IntegerVector`, but that seems to be quite slow.

I am wondering:

- whether there is a more efficient workaround;
- why in the following reproducible example `Rcpp::wrap(x)` doesn't work.

test.cpp

ANSWER

Answered 2022-Jan-16 at 16:36

`Rcpp::wrap` is dispatching to a method for Eigen matrices and vectors implemented in `RcppEigen`. That method doesn't appear to support long vectors, currently. (**Edit:** It now *does*; see below.)

The error about negative length is thrown by `allocVector3` here. It arises when `allocVector3` is called with a negative value for its argument `length`. My *guess* is that `Rcpp::wrap` tries to represent `2^31` as an `int`, resulting in integer overflow. Maybe this happens here?

In any case, you seem to have stumbled on a bug, so you might consider sharing your example with the `RcppEigen` maintainers on GitHub. (**Edit:** Never mind - I've just submitted a patch.) (**Edit:** Patched now, if you'd like to build `RcppEigen` from sources [commit 5fd125e or later] in order to update your `Rcpp::wrap`.)

Attempting to answer your first question, I compared your two approaches with my own based on `std::memcpy`. The `std::memcpy` approach supports long vectors and is only slightly slower than `Rcpp::wrap`.

`std::memcpy` approach
The C arrays beneath `Eigen::VectorXi x` and `Rcpp::IntegerVector y` have the same type (`int`) and length (`n`), so they contain the same number of bytes. You can use `std::memcpy` to copy that number of bytes from one's memory address to the other's without a `for` loop. The hard part is knowing how to obtain the addresses. `Eigen::VectorXi` has a member function `data` that returns the address of the underlying `int` array. R objects of integer type use `INTEGER` from the R API, which does the same thing.

QUESTION

Here it is written that `std::ranges::size` should return an unsigned integer. However, when I use it on an *Eigen* vector (with Eigen 3.4) the following compiles:

ANSWER

Answered 2021-Dec-22 at 19:15

Is this a bug of std::ranges::size?

No. The cppreference documentation is misleading. There is no requirement for `std::ranges::size` to return an unsigned integer. In this case, it returns exactly what `Eigen::VectorXd::size` returns.

For ranges that model `ranges::sized_range`, that would be an unsigned integer, but `Eigen::VectorXd` evidently does not model such a range.

But then what is the purpose of std::ranges::ssize compared to std::ranges::size?

The purpose of `std::ranges::ssize` is to be a generic way to get a signed value regardless of whether `std::ranges::size` returns signed or unsigned. There is no difference between them in cases where `std::ranges::size` returns a signed type.

Is there a reference to back up what you state?

Yes. See the C++ standard:

[range.prim.size]

Otherwise, if `disable_sized_range<remove_cv_t<T>>` ([range.sized]) is `false` and `auto(t.size())` is a valid expression of integer-like type ([iterator.concept.winc]), `ranges::size(E)` is expression-equivalent to `auto(t.size())`.

QUESTION

Why does `Eigen::VectorXd` not satisfy the concept `std::ranges::contiguous_range`? That is, `static_assert(std::ranges::contiguous_range<Eigen::VectorXd>);` does not compile.

Also, is there the possibility to specialize a template to make *Eigen* vectors satisfy the contiguous range concept? For instance, we can specialize `std::ranges::enable_borrowed_range` to make any range satisfy the `std::ranges::borrowed_range` concept. In other words, is there a way to make the above static assertion compile?

ANSWER

Answered 2021-Dec-21 at 22:48

Contiguous ranges have to be opted into. There is no way to determine just by looking at an iterator whether it is contiguous or *just* random access. Consider the difference between `deque::iterator` and `vector::iterator`: they provide all the same operations that return all the same things, so how would you know unless `vector::iterator` explicitly told you?

Eigen's iterators do not do this yet. Indeed, before C++20 there was no notion of a contiguous iterator to begin with. That's new with C++20 Ranges.

You can see this if you try to just verify that it is contiguous:

QUESTION

I came across some weird behavior concerning the Eigen library and templated functions.

Maybe someone can explain to me why the first version is not working, while the other 3 do. My guess would be that the first case frees up some local variable, but hopefully someone can enlighten me. Thanks in advance.

Here is the code:

Compiler-Explorer: https://compiler-explorer.com/z/r45xzE417

ANSWER

Answered 2021-Dec-14 at 15:27

In the first version,

QUESTION

I have a C++ function which I want to run from Python. For this I use Cython. My C++ function relies heavily on Eigen matrices, which I map to Python's Numpy matrices using Eigency.

I cannot get this to work for the case where I have a list of Numpy matrices.

**What does work (mapping a plain Numpy matrix to an Eigen matrix):**

I have a C++ function which in the header (Header.h) looks like:

ANSWER

Answered 2021-Dec-01 at 18:34

Thanks to @ead I found a solution.

`FlattenedMapWithOrder` has an implementation so it can be assigned to an `Eigen::Matrix`. However, `std::vector` does not have such functionality, and since `std::vector` and `std::vector` are of different types, they cannot be assigned to one another. More about this here. The implementation in `FlattenedMapWithOrder` mentioned above is here.

To solve this, the function in the C++ code called from Cython simply needs to have the matching type as its input argument: `std::vector`. For this, the C++ code needs to know the definition of the type `FlattenedMapWithOrder`.

That means you need to `#include "eigency_cpp.h"`. Unfortunately, this header is not self-contained. Therefore (credits to @ead), I added these lines:

QUESTION

I want to test and compare Numpy matrix multiplication and eigendecomposition performance with Intel MKL and without Intel MKL.

I have installed MKL using `pip install mkl` (Windows 10 (64-bit), Python 3.8).

I then used examples from here for matmul and eigen decompositions.

How do I now enable and disable MKL in order to check numpy performance with MKL and without it?

Reference code:

ANSWER

Answered 2021-Nov-25 at 12:30

You can use different environments to compare Numpy with and without MKL. In each environment you can install the needed packages (Numpy with MKL or without) using the package installer. Then in those environments you can run your program to compare the performance of Numpy with and without MKL.

NumPy doesn’t depend on any other Python packages, however, it does depend on an accelerated linear algebra library - typically Intel MKL or OpenBLAS.

The NumPy wheels on PyPI, which is what pip installs, are built with OpenBLAS.

In the conda defaults channel, NumPy is built against Intel MKL. MKL is a separate package that will be installed in the users' environment when they install NumPy.

When a user installs NumPy from conda-forge, that BLAS package then gets installed together with the actual library. But it can also be MKL (from the defaults channel), or even BLIS or reference BLAS.

Please refer to this link to know about installing Numpy in detail.

You can create two different environments to compare the NumPy performance with MKL and without it. In the first environment install the stand-alone NumPy (that is, the NumPy without MKL) and in the second environment install the one with MKL.

**To create an environment using NumPy without MKL:**

QUESTION

I am not good with CMake, and I cannot find good explanations about how to use its `FetchContent` functionality. Indeed, most repositories seem to require different treatment, and the rules of such treatment defy my comprehension.

That said, here is my problem. I would like to use *CppADCodeGen* in my project using CMake `FetchContent`. Here is my code:

ANSWER

Answered 2021-Oct-26 at 20:48

As seen in the output you've provided, there are 2 problems:

- There is a target name conflict, probably between `CppAD` and `eigen`. They both have the `uninstall` target. It can be seen here:

QUESTION

I know that these 'eigen speed-up' questions arise regularly, but after reading many of them and trying several flags I cannot get a better time with C++ Eigen compared with the traditional way of performing a transpose. Actually, using blocking is much more efficient. The following is the code:

ANSWER

Answered 2021-Nov-21 at 11:50

As suggested by INS in the comment, the actual copying of the matrix is causing the performance drop. I slightly modified your example to use some numbers instead of all zeros (to avoid any type of optimisation):

Community Discussions, Code Snippets contain sources that include Stack Exchange Network

## Vulnerabilities

No vulnerabilities reported
