microbenchmark | accurately measure and compare the execution time of R expressions
kandi X-RAY | microbenchmark Summary
Infrastructure to accurately measure and compare the execution time of R expressions.
microbenchmark Examples and Code Snippets
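A minimal usage sketch (the data and the two expressions are illustrative):

library(microbenchmark)

x <- runif(100)

# Compare two ways of computing the same quantity; each expression
# is evaluated many times (100 by default) and timed with
# high-precision timers.
microbenchmark(
  sqrt(x),
  x ^ 0.5
)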
Community Discussions
Trending Discussions on microbenchmark
QUESTION
So I was really ripping my hair out trying to work out why two different R sessions with the same data were producing wildly different times to complete the same task. After a lot of restarting R, clearing out all my variables, and running a genuinely clean session, I found the issue: the data structure returned by vroom and readr is, for some reason, super sluggish in my script. Of course the easiest fix is to convert the data into a tibble as soon as you load it in. But is there some other explanation, such as poor coding practice in my functions, that can explain the sluggish behaviour? Or is this a bug in recent updates of these packages? If so, and if someone more experienced with reporting bugs to the tidyverse can take it up, here is a reprex showing the behaviour, because I feel this is outside my wheelhouse.
ANSWER
Answered 2021-Jun-15 at 14:37
This is the issue I had in mind. These problems are known to happen with vroom, rather than with the spec_tbl_df class, which does not really do much. vroom does all sorts of things to try to speed reading up, AFAIK mostly by lazy reading. That's how you get all those different components when comparing the two datasets.
With vroom:
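The answer's original benchmark was not captured in this excerpt; a minimal sketch of the workaround it describes, assuming a largish CSV at the hypothetical path data.csv with a numeric first column:

library(vroom)
library(tibble)
library(microbenchmark)

# vroom reads lazily (ALTREP columns), so repeated access can be slow.
df_lazy <- vroom("data.csv")

# Materializing into an ordinary tibble forces the read once, up front.
df_eager <- as_tibble(as.data.frame(df_lazy))

microbenchmark(
  lazy  = sum(df_lazy[[1]]),
  eager = sum(df_eager[[1]]),
  times = 10
)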
QUESTION
Say you have a vector of letters and you want to return a numeric index of that vector, e.g. c("a","b","c","c","d","a","b") would return c(1,2,3,3,4,1,2). Is there a faster method than the below function 'index'?
ANSWER
Answered 2021-Jun-10 at 04:44
Marginally faster:
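The answer's snippet was not captured in this excerpt; two common vectorized approaches for such an index look like this (a sketch, not necessarily the original code):

library(microbenchmark)

x <- c("a", "b", "c", "c", "d", "a", "b")

match(x, unique(x))    # 1 2 3 3 4 1 2, indexed by order of first appearance
as.integer(factor(x))  # same result here, since the levels sort alphabetically

microbenchmark(
  match  = match(x, unique(x)),
  factor = as.integer(factor(x))
)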
QUESTION
How do I configure JMH in IntelliJ IDEA? The tutorial I followed: https://www.baeldung.com/java-microbenchmark-harness
I use JDK 15, and after importing the relevant dependencies I still get an error. But I had heard that JMH was integrated into the JDK after JDK 12. How do I fix this, or can you recommend another tutorial? Thanks.
ANSWER
Answered 2021-May-19 at 15:42
You pasted the dependencies into the wrong section. You need to create a dependencies tag and paste the dependencies there:
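The dependencies in question are JMH's core and annotation-processor artifacts; a minimal sketch of the relevant pom.xml section (the version number is illustrative; check for the current release):

<!-- Goes inside <project>, alongside <properties>, not inside <build> -->
<dependencies>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-core</artifactId>
        <version>1.36</version>
    </dependency>
    <dependency>
        <groupId>org.openjdk.jmh</groupId>
        <artifactId>jmh-generator-annprocess</artifactId>
        <version>1.36</version>
    </dependency>
</dependencies>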
QUESTION
The computation time for the following function is very high. Is there any room for improvement? Should I be accessing the elements of the matrix X differently? I appreciate any comments or suggestions.
ANSWER
Answered 2021-May-05 at 19:46
It might be helpful if you provide at least some explanation of what it is you are doing.
First, I recommend breaking up the code into very small chunks, microbenchmarking each chunk, and finding the bottleneck operations. This is better than throwing the whole function at the wall all at once.
Row vs. column access
join_rows(X, Xt)
This is a very slow operation, especially because you are adding a rowvec to a column-oriented mat. Armadillo matrices are stored as a vector in column-major order, so behind the scenes Armadillo is calling push_back in n non-contiguous locations, where n is the number of columns.
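R matrices are also stored in column-major order, so the cost difference is easy to demonstrate with microbenchmark itself (an R analogy, not the original Armadillo code):

library(microbenchmark)

m <- matrix(rnorm(1e6), nrow = 1000)

# Appending a row interleaves writes across all 1000 columns;
# appending a column writes one contiguous block at the end.
microbenchmark(
  add_row = rbind(m, rnorm(1000)),
  add_col = cbind(m, rnorm(1000)),
  times   = 50
)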
It seems you may be able to avoid this altogether, since finalX.row(i) is the only call that depends on finalX in this loop. Just figure out what .row(i) is.
You do a lot of transposing; maybe you should be working with a transposed matrix from the get-go? Xi.t() is called twice in the same line:
finalfirst -= (Xi.t() - rat.col(j)) * exp(arma::as_scalar(betas*Xi.t()));
Transposing a row vector is pretty pointless: just initialize it as a column vector and iterate through it with a good old-fashioned loop. A little more code isn't always a bad thing; it often makes your intent more explicit, too.
Copying vs. in-place operations
This is a copy:
final.col(i) = finalfirst;
Why not operate in memory and update final(_, i) in place, rather than using a temporary finalfirst followed by a deep copy into your target matrix? This is a column, so the memory is contiguous, and the compiler will be able to optimize the access pattern just as well as if you were working on any simple vector.
All said, I haven't fully wrapped my head around exactly what you are doing, but @largest_prime_is_463035818 may be right about swapping the for(i...) and for(j...) loops. It appears you will be able to pull these two rows out of the nested loop:
QUESTION
I am looking for an efficient way of doing this, given a vector x (you may assume the values are sorted):
ANSWER
Answered 2021-Apr-28 at 00:46
One way could be:
QUESTION
Here's what I want to do: I have two matrices A and B of dimensions N x k1 and N x k2. I now want to pointwise multiply each column of the matrix A with B.
Implementation one does this in a for loop.
For speed, I considered vectorizing the entire operation, but it turns out that vectorization (as I implemented it, via Kronecker products) did not improve my runtime for larger problems.
Does anyone have a suggestion how to differently implement this operation, having runtime in mind?
The code below starts with a small example, then implements a loop-based and vectorized solution, then benchmarks on a larger problem.
ANSWER
Answered 2021-Apr-26 at 13:30
You can try apply(A, 2, '*', B), and to get the same result as colmat_prod, use array(apply(A, 2, '*', B), c(dim(B), ncol(A))):
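A small worked sketch of that suggestion (the dimensions are illustrative; colmat_prod refers to the loop-based version from the question):

A <- matrix(1:6,  nrow = 3)   # N = 3, k1 = 2
B <- matrix(1:12, nrow = 3)   # N = 3, k2 = 4

# apply() multiplies each column of A elementwise into all of B,
# returning an (N*k2) x k1 matrix.
flat <- apply(A, 2, '*', B)

# Reshape to N x k2 x k1 to match colmat_prod's output shape.
res <- array(flat, c(dim(B), ncol(A)))
res[, , 1]   # equals A[, 1] * B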
QUESTION
On the same computer, using the Rakudo compiler "rakudo-moar-2021.03-01-macos-x86_64-clang.tar.gz" I get 40-100 times speed-ups compared to the timings of my calculations in the original post.
ANSWER
Answered 2021-Mar-06 at 17:54
[EDIT 2021-03-06: thanks to a series of commits over the past ~day (thanks, Liz!), this slowdown is now largely fixed on HEAD; these performance gains should show up in the next monthly release. I'm leaving the answer below as an example of how to dig into this sort of issue, but the specific problems it diagnosed have largely been resolved.]
Building on @Elizabeth Mattijsen's comment: The slow performance here is mostly due to the Rakudo compiler not properly optimizing the generated code (as of 2021-03-05). As the compiler continues to improve, the (idiomatic) code you wrote above should perform much better.
However, for today we can use a few workarounds to speed up this calculation. While it's still true that Raku's performance won't be particularly competitive with R here, some profiling-driven refactoring can make this code nearly an order of magnitude faster.
Here's how we do it:
First, we start by profiling the code. If you run your script with raku --profile=<filename>, then you'll get a profile written to that file. By default, this will be an HTML file that allows you to view the profile in your browser. My preference, however, is to specify an .sql extension, which generates an SQL profile. I then view this profile with MoarProf, the revised profiler that Timo Paulssen is building.
Looking at this profile shows exactly the issue that Liz mentioned: Calls that should be getting inlined are not. To fix this, let's create our own sorting function, which the JIT compiler will happily optimize:
QUESTION
I'm trying to calculate the rolling mean of the previous k non-NA values within the dplyr/tidyverse framework. I've written a function that seems to work but was wondering if there's already a function from some package (which will probably be much more efficient than my attempt) doing exactly this. An example dataset:
ANSWER
Answered 2021-Apr-07 at 22:39
Since I am not aware of a ready-made way of computing your output in any standard library, I came up with the implementation roll_mean_k_efficient below, which seems to speed up your computations considerably. Note that this implementation makes use of the rollapply and na.locf methods from the zoo package.
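roll_mean_k_efficient itself is not reproduced in this excerpt; a minimal sketch of the same idea using zoo's rollapply and na.locf (the function name and the exact window semantics here are assumptions):

library(zoo)

# Mean of the k most recent non-NA values at or before each position;
# NA until k non-NA values have been observed.
roll_mean_k <- function(x, k) {
  idx <- which(!is.na(x))
  out <- rep(NA_real_, length(x))
  out[idx] <- rollapplyr(x[idx], k, mean, fill = NA)
  na.locf(out, na.rm = FALSE)  # carry the last mean forward across NAs
}

x <- c(1, NA, 2, 3, NA, 4, 5)
roll_mean_k(x, k = 2)
#> NA NA 1.5 2.5 2.5 3.5 4.5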
QUESTION
I'm working on an Rcpp sparse matrix class that uses both Rcpp::IntegerVector (for the row/column pointers) and a templated std::vector. The rationale is that the overhead of deep-copying the integer pointer vectors (@i, @p) in extremely large sparse matrices can be avoided by simply leaving them as pointers to R objects, and consistently, microbenchmarks show that this approach takes almost exactly half the time of conversion to Eigen::SparseMatrix and arma::SpMat while using less memory.
Bare-bones Rcpp sparse matrix class
ANSWER
Answered 2021-Apr-14 at 14:38
It's actually quite simple to create an Rcpp SparseMatrix class! I was overthinking it.
QUESTION
Let's say I define an S4 class 'foo' with two slots 'a' and 'b', and define an object x of class 'foo'.
ANSWER
Answered 2021-Apr-12 at 03:00
Slots are stored as attributes. We have a couple of options for converting a slot to NULL.
Option 1: You can use the check=FALSE argument of slot<- to assign NULL to a slot without triggering an error.
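A minimal sketch of Option 1 (class and slot names taken from the question):

# Define the class and an instance, as in the question.
setClass("foo", slots = c(a = "numeric", b = "numeric"))
x <- new("foo", a = 1, b = 2)

# check = FALSE skips validity checking, so NULL is accepted.
slot(x, "b", check = FALSE) <- NULL
x@b
#> NULL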
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install microbenchmark
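microbenchmark is on CRAN, so the usual installation applies:

# Released version from CRAN
install.packages("microbenchmark")

# Then load it
library(microbenchmark)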