jsperf | Run jsperf.com tests locally with your NodeJS version | Runtime Environment library

by OrKoN | HTML | Version: Current | License: MIT

kandi X-RAY | jsperf Summary

jsperf is an HTML library typically used in Server, Runtime Environment, Nodejs, Docker applications. jsperf has no bugs and no vulnerabilities, it has a Permissive License, and it has low support. You can download it from GitHub.

This command line utility helps you run jsperf.com performance tests locally with NodeJS.

            kandi-support Support

              jsperf has a low active ecosystem.
It has 35 stars and 0 forks. There is 1 watcher for this library.
              It had no major release in the last 6 months.
              There are 0 open issues and 2 have been closed. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of jsperf is current.

            kandi-Quality Quality

              jsperf has no bugs reported.

            kandi-Security Security

              jsperf has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            kandi-License License

              jsperf is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            kandi-Reuse Reuse

              jsperf releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.


            jsperf Key Features

            No Key Features are available at this moment for jsperf.

            jsperf Examples and Code Snippets

            No Code Snippets are available at this moment for jsperf.

            Community Discussions

            QUESTION

            Why is array.prototype.slice() so slow on sub-classed arrays?
            Asked 2020-Jun-04 at 11:14

In node v14.3.0, I discovered (while doing some coding work with very large arrays) that sub-classing an array can cause .slice() to slow down by a factor of 20x. While I could imagine that there might be some compiler optimizations around a non-subclassed array, what I do not understand at all is how .slice() can be more than 2x slower than just manually copying elements from one array to another. That does not make sense to me at all. Anyone have any ideas? Is this a bug, or is there some aspect to this that would/could explain it?

For the test, I created a 100,000,000 element array filled with increasing numbers. I made one copy of the array with .slice(), and another copy manually by iterating over the array and assigning values to a new array. I then ran those two tests for both an Array and my own empty subclass ArraySub. Here are the numbers:

            ...
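A minimal sketch of that setup (an assumed shape — the question's actual benchmark code is elided here, and the array size is scaled down to keep memory use modest):

class ArraySub extends Array {}

const N = 10_000_000; // scaled down from the question's 100,000,000
function fill(arr) {
  for (let i = 0; i < N; i++) arr[i] = i;
  return arr;
}

function copyManual(src) {
  const out = new Array(src.length);
  for (let i = 0; i < src.length; i++) out[i] = src[i];
  return out;
}

for (const src of [fill(new Array(N)), fill(new ArraySub(N))]) {
  const label = src.constructor.name;
  console.time(label + ' .slice()');
  src.slice();
  console.timeEnd(label + ' .slice()');
  console.time(label + ' manual copy');
  copyManual(src);
  console.timeEnd(label + ' manual copy');
}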

            ANSWER

            Answered 2020-Jun-04 at 11:14

            V8 developer here. What you're seeing is fairly typical:

            The built-in .slice() function for regular arrays is heavily optimized, taking all sorts of shortcuts and specializations (it even goes as far as using memcpy for arrays containing only numbers, hence copying more than one element at a time using your CPU's vector registers!). That makes it the fastest option.

            Calling Array.prototype.slice on a custom object (like a subclassed array, or just let obj = {length: 100_000_000, foo: "bar", ...}) doesn't fit the restrictions of the fast path, so it's handled by a generic implementation of the .slice builtin, which is much slower, but can handle anything you throw at it. This is not JavaScript code, so it doesn't collect type feedback and can't get optimized dynamically. The upside is that it gives you the same performance every time, no matter what. This performance is not actually bad, it just pales in comparison to the optimizations you get with the alternatives.

            Your own implementation, like all JavaScript functions, gets the benefit of dynamic optimization, so while it naturally can't have any fancy shortcuts built into it right away, it can adapt to the situation at hand (like the type of object it's operating on). That explains why it's faster than the generic builtin, and also why it provides consistent performance in both of your test cases. That said, if your scenario were more complicated, you could probably pollute this function's type feedback to the point where it becomes slower than the generic builtin.

            With the [i, item] of source.entries() approach you're coming close to the spec behavior of .slice() very concisely at the cost of some overhead; a plain old for (let i = 0; i < source.length; i++) {...} loop would be about twice as fast, even if you add an if (i in source) check to reflect .slice()'s "HasElement" check on every iteration.
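For illustration, hedged sketches of the two manual approaches described above (the function names are mine, not from the original post):

// Concise entries()-based copy: close to .slice()'s spec behavior,
// but pays iterator overhead on every element.
function sliceViaEntries(source) {
  const out = new Array(source.length);
  for (const [i, item] of source.entries()) out[i] = item;
  return out;
}

// Plain indexed loop; the `i in source` test mirrors the spec's
// HasElement check for holey arrays.
function sliceViaLoop(source) {
  const out = new Array(source.length);
  for (let i = 0; i < source.length; i++) {
    if (i in source) out[i] = source[i];
  }
  return out;
}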

            More generally: you'll probably see the same general pattern for many other JS builtins -- it's a natural consequence of running on an optimizing engine for a dynamic language. As much as we'd love to just make everything fast, there are two reasons why that won't happen:

            (1) Implementing fast paths comes at a cost: it takes more engineering time to develop (and debug) them; it takes more time to update them when the JS spec changes; it creates an amount of code complexity that quickly becomes unmanageable leading to further development slowdown and/or functionality bugs and/or security bugs; it takes more binary size to ship them to our users and more memory to load such binaries; it takes more CPU time to decide which path to take before any of the actual work can start; etc. Since none of those resources are infinite, we'll always have to choose where to spend them, and where not.

            (2) Speed is fundamentally at odds with flexibility. Fast paths are fast because they get to make restrictive assumptions. Extending fast paths as much as possible so that they apply to as many cases as possible is part of what we do, but it'll always be easy for user code to construct a situation that makes it impossible to take the shortcuts that make a fast path fast.

            Source https://stackoverflow.com/questions/62184501

            QUESTION

            Grouping object entries into object arrays
            Asked 2020-May-29 at 03:47

            How can I create arrays to group members depending on their tag?

            Tags can be anything, these are just examples.

            Example Input

            ...

            ANSWER

            Answered 2020-May-28 at 23:31

            You can use a simple set to group them by tag as follows:
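The answer's snippet is only available at the source link; a minimal sketch of tag-based grouping (with the input shape assumed) might look like:

const members = [
  { name: 'a', tag: 'red' },
  { name: 'b', tag: 'blue' },
  { name: 'c', tag: 'red' },
];

const groups = {};
for (const m of members) {
  // create the array for this tag on first use, then append
  (groups[m.tag] = groups[m.tag] || []).push(m);
}
// groups: { red: [a, c], blue: [b] }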

            Source https://stackoverflow.com/questions/62076185

            QUESTION

            Is js native array.flat slow for depth=1?
            Asked 2020-May-22 at 06:29

            This gist is a small benchmark I wrote comparing the performance for 4 alternatives for flattening arrays of depth=1 in JS (the code can be copied as-is into the google console). If I'm not missing anything, the native Array.prototype.flat has the worst performance by far - on the order of 30-50 times slower than any of the alternatives.

            Update: I've created a benchmark on jsperf.

            It should be noted that the 4th implementation in this benchmark is consistently the most performant - often achieving a performance that is 70 times better. The code was tested several times in node v12 and the Chrome console.

This result is most accentuated with large subarrays - see the last 2 arrays tested below. This result is very surprising given the spec and the V8 implementation, which seems to follow the spec to the letter. My C++ knowledge is non-existent, as is my familiarity with the V8 rabbit hole, but it seems to me that, given the recursive definition, once we reach a final-depth subarray no further recursive calls are made for it (the flag shouldFlatten is false when the decremented depth reaches 0, i.e., the final sub-level), and adding to the flattened result involves iterative looping over each sub-element and a simple call to this method. Therefore, I cannot see a good reason why a.flat should suffer so much on performance.

            I thought perhaps the fact that in the native flat the result's size isn't pre-allocated might explain the difference. The second implementation in this benchmark, which isn't pre-allocated, shows that this alone cannot explain the difference - it is still 5-10 times more performant than the native flat. What could be the reason for this?

Implementations tested (order is the same in code, stored in the implementations array - the two I wrote are at the end of the code snippet):

1. My own flattening implementation that includes pre-allocating the final flattened length (thus avoiding all size re-allocations). Excuse the imperative-style code, I was going for max performance.
2. The simplest naive implementation, looping over each sub-array and pushing into the final array (thus risking many size re-allocations).
3. Array.prototype.flat (native flat)
4. [ ].concat(...arr) (spreading the array, then concatenating the results together; this is a popular way of accomplishing a depth=1 flattening).
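Hedged sketches of implementations 1 and 2 (the actual benchmark code lives in the linked gist, so details here are assumptions; 3 and 4 are the one-liners arr.flat() and [].concat(...arr)):

// 1. Pre-allocated flatten: compute the final length first, so the
//    result array never has to be resized while copying.
function flatPrealloc(arr) {
  let total = 0;
  for (const sub of arr) total += sub.length;
  const out = new Array(total);
  let k = 0;
  for (const sub of arr) {
    for (let i = 0; i < sub.length; i++) out[k++] = sub[i];
  }
  return out;
}

// 2. Naive flatten: push element by element, accepting re-allocations.
function flatNaive(arr) {
  const out = [];
  for (const sub of arr) {
    for (let i = 0; i < sub.length; i++) out.push(sub[i]);
  }
  return out;
}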

            Arrays tested (order is the same in code, stored in the benchmarks object):

            1. 1000 subarrays with 10 elements each. (10 thou total)
            2. 10 subarrays with 1000 elements each. (10 thou total)
            3. 10000 subarrays with 1000 elements each. (10 mil total)
            4. 100 subarrays with 100000 elements each. (10 mil total)

            ...

            ANSWER

            Answered 2020-Apr-24 at 20:16

            (V8 developer here.)

            The key point is that the implementation of Array.prototype.flat that you found is not at all optimized. As you observe, it follows the spec almost to the letter -- that's how you get an implementation that's correct but slow. (Actually the verdict on performance is not that simple: there are advantages to this implementation technique, like reliable performance from the first invocation, regardless of type feedback.)

            Optimizing means adding additional fast paths that take various shortcuts when possible. That work hasn't been done yet for .flat(). It has been done for .concat(), for which V8 has a highly complex, super optimized implementation, which is why that approach is so stunningly fast.

            The two handwritten methods you provided get to make assumptions that the generic .flat() has to check for (they know that they're iterating over arrays, they know that all elements are present, they know that the depth is 1), so they need to perform significantly fewer checks. Being JavaScript, they also (eventually) benefit from V8's optimizing compiler. (The fact that they get optimized after some time is part of the explanation why their performance appears somewhat variable at first; in a more robust test you could actually observe that effect quite clearly.)

            All that said, in most real applications you probably won't notice a difference in practice: most applications don't spend their time flattening arrays with millions of elements, and for small arrays (tens, hundreds, or thousands of elements) the differences are below the noise level.

            Source https://stackoverflow.com/questions/61411776

            QUESTION

            Is typeof faster than literal comparison?
            Asked 2020-May-14 at 10:54

            When checking if a value x is a boolean, is

            typeof x === 'boolean' faster than x === true || x === false or vice-versa?

            I expected that literal comparison would be faster, but it seems like the typeof comparison is almost twice as fast.

            Sidenote: I know this doesn't matter for almost any practical purpose.

            Here is the benchmark code (disclaimer: I don't know how to benchmark): https://jsperf.com/check-if-boolean-123

            ...

            ANSWER

            Answered 2020-May-14 at 10:54

            It depends.

To give an absolute answer, we would have to compile every piece of code / watch the interpreter on every possible browser/architecture combination. Then we could say definitively which operation takes fewer processor cycles; everything else is pure speculation. And speculation is what I'm doing now:

            The naive approach

            Let's assume that engines do not perform any optimizations. That they just perform every step as defined in the specification. Then for every testcase the following happens:

            typeof x === 'boolean'

            (1) The type of x gets looked up. As the engine probably represents a generic "JavaScript Value" as a structure with a pointer to the actual data and an enum for the type of the value, getting a string describing the type is probably a lookup in a type -> type string map.

(2) Now we have two string values that we have to compare ('boolean' === 'boolean'). First of all, it has to be checked that the types are equal. That is probably done by comparing the type field of both values via bitwise equality (meaning: one processor op).

            (3) Finally the value has to be compared for equality. For strings this means iterating over both strings and comparing the characters to each other.

            x === true || x === false

(1) First of all, the types of x and true, and of x and false, have to be compared as described above.

(2) Secondly, the values have to be compared; for booleans that's bitwise equality (meaning: one processor op).

            (3) The last step is the or expression. Given two values, they first have to be checked for truthiness (which is quite easy for booleans, but still we have to check again that these are really booleans), then the or operation can be done (bitwise or, meaning: one processor op).

            So which one is faster? If I had to guess, the second approach, as the first one has to do string equality, which probably takes a few iterations more.

            The optimal approach

A very clever compiler might realize that typeof x === 'boolean' is only true if the type of x is boolean. So it could be optimized to (C++ pseudocode):
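The answer's pseudocode is only available at the source link; the gist, as a hedged sketch (typeTag and TYPE_BOOLEAN are hypothetical stand-ins for the engine's internal type enum mentioned above):

// typeof x === 'boolean'
// could compile down to a single enum comparison on the value's
// internal type field, skipping the string comparison entirely:
//   x.typeTag == TYPE_BOOLEAN   // one processor op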

            Source https://stackoverflow.com/questions/61786250

            QUESTION

            What is faster: try catch vs Promise
            Asked 2020-Apr-16 at 08:55

I heard an opinion that you should avoid usage of try/catch at all because it takes many resources. So could promise error handling be faster? Or does it not matter at all?

            ...

            ANSWER

            Answered 2017-Mar-07 at 13:04

The try/catch idiom works very well when you have fully synchronous code, but asynchronous operations render it useless: no errors will be caught. That is, the function will begin its course while the outer stack runs through and reaches the last line without any errors. If an error occurs at some point in the future inside the asynchronous function, nothing will be caught.

When we use Promises, "we've lost our error handling", you might say. In fact, we don't need to do anything special here to propagate errors, because we return a promise and there's built-in support for error flow.
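A minimal sketch (mine, not the original answer's) of both behaviors:

// Synchronous try/catch cannot catch an asynchronous error: the throw
// happens later, on a different stack, after the try block has exited.
try {
  setTimeout(() => {
    throw new Error('boom'); // ends up as an uncaught exception
  }, 0);
} catch (e) {
  console.log('never reached');
}

// With promises, the rejection propagates along the chain instead:
Promise.resolve()
  .then(() => {
    throw new Error('boom');
  })
  .catch((e) => console.log('caught:', e.message));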

            Source https://stackoverflow.com/questions/42648956

            QUESTION

Performance of declaring a function within the scope of a function vs outside of it
            Asked 2020-Feb-24 at 22:43

I was pondering the performance implications of declaring a function within a function's scope vs outside of it.

To do that, I created a test using jsperf, and the results were interesting to me; I'm hoping someone can explain what is going on here.

            Test: https://jsperf.com/saarman-fn-scope/1

            Google Chrome results:

            Microsoft Edge results:

            Firefox results:

            ...

            ANSWER

            Answered 2020-Feb-22 at 23:21

I believe what's happening in the Chrome and Firefox cases is that the engine is inlining the mathAdd function. Because it's a simple function with no side effects that is both created and called within the outer function, the compiler replaces the call site with the internal code of the function.

            The resulting code will look like this:
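The answer's snippet is only available at the source link; a hedged illustration of the transformation it describes (the function shapes are assumptions):

// Before: mathAdd is declared and called inside the function.
function compute() {
  function mathAdd(a, b) { return a + b; }
  let sum = 0;
  for (let i = 0; i < 1000; i++) sum = mathAdd(sum, i);
  return sum;
}

// After inlining, the call site is replaced by the function body:
function computeInlined() {
  let sum = 0;
  for (let i = 0; i < 1000; i++) sum = sum + i;
  return sum;
}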

            Source https://stackoverflow.com/questions/60357206

            QUESTION

            Unexpected slowdown in Chrome on high-iteration count
            Asked 2020-Feb-08 at 09:35

            I was comparing two functions' performance in Chrome's console, when I stumbled across something I can't explain.

            ...

            ANSWER

            Answered 2020-Feb-06 at 10:51

            V8 developer here. As wOxxOm points out, this is mostly an illustration of the pitfalls of microbenchmarking.

            First off:

            Unexpected de-optimisation...

            Nope, deoptimization is not the issue here (it's a very specific term with a very specific meaning). You mean "slowdown".

            ...on high-iteration count

            Nope, high iteration count is also not the issue here. While you could say that the lack of warmup in your test is contributing to the results you see, that contribution is the part you found less surprising.

            One mechanism to be aware of is "on-stack replacement": functions with (very) long-running loops will get optimized while the loop is executing. It doesn't make sense to do that on a background thread, so execution is interrupted while optimization happens on the main thread. If the code after the loop hasn't been executed yet and therefore doesn't have type feedback, then the optimized code will be thrown away ("deoptimized") once execution reaches the end of the loop, to collect type feedback while executing unoptimized bytecode. In case of another long-running loop like in the example here, the same OSR-then-deopt dance will be repeated. That means some nontrivial fraction of what you're measuring is optimization time. This explains most of the variance you see in runRawRandom before times stabilize.

            Another effect to be aware of is inlining. The smaller and faster the function in question, the bigger the call overhead, which is avoided when you write the benchmark such that the function can get inlined. Additionally, inlining often unlocks additional optimization opportunities: in this case, the compiler can see after inlining that the comparison result is never used, so it dead-code-eliminates all the comparisons. This explains why runRandomLooped is so much slower than runRawRandom: the latter benchmarks empty loops. The former's very first iteration is "fast" (=empty) because V8 at that point inlines mathCompare for the f(...) call in jsPerfRandom (because that's the only function it's ever seen there), but soon after it realizes "whoops, various different functions are getting called here", so it deopts and won't try to inline again on subsequent optimization attempts.
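To make the two shapes concrete, here is a hedged reconstruction (the question's actual code is not reproduced on this page, so the function bodies are assumptions):

function mathCompare(a, b) { return Math.min(a, b) === a; }
function logicCompare(a, b) { return a <= b; }

// Direct call in the loop: the result is unused, so once optimized the
// comparisons can be dead-code-eliminated -- an effectively empty loop.
function runRawRandom(n) {
  for (let i = 0; i < n; i++) mathCompare(Math.random(), Math.random());
}

// Indirect call: after several different functions have been passed as
// f, the call site becomes polymorphic and inlining is abandoned.
function jsPerfRandom(f, n) {
  for (let i = 0; i < n; i++) f(Math.random(), Math.random());
}
function runRandomLooped(n) {
  jsPerfRandom(mathCompare, n);
  jsPerfRandom(logicCompare, n);
}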

            If you care about the details, you can use some combination of the flags --trace-opt --trace-deopt --trace-osr --trace-turbo-inlining --print-opt-code --code-comments to investigate the behavior in depth. Be warned though that while this exercise is likely going to cost you significant time, what you can learn from the behavior of a microbenchmark is, in all likelihood, not going to be relevant for real-world use cases.

            To illustrate:

            • you have one microbenchmark here that proves beyond doubt that mathCompare is much slower than logicCompare
            • you have another microbenchmark that proves beyond doubt that both have the same performance
            • your aggregate observations prove beyond doubt that performance goes down when V8 decides to optimize things

            In practice though, all three observations are false (which is not overly surprising given that two of them are direct contradictions of each other):

            • the "fast results" are not reflective of real-world behavior because they could dead-code eliminate the workload you were trying to measure
            • the "slow results" are not reflective of real-world behavior because the specific way the benchmark was written prevented inlining of a tiny function (which would virtually always get inlined in real code)
            • the supposed "performance going down" was just a microbenchmarking artifact, and was not at all due to failed/useless optimizations, and would not occur in a real-world situation.

            Source https://stackoverflow.com/questions/60081883

            QUESTION

            Check if a number has repeated digits
            Asked 2020-Feb-05 at 07:51

            How to check if a certain number has any repeated digits (anywhere within it)?

            Given numbers like:

            ...

            ANSWER

            Answered 2017-Feb-28 at 09:30
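The answer's code is not preserved on this page; one common approach (illustrative, not necessarily the original) compares the count of digits with the count of distinct digits:

// assumes integer input
function hasRepeatedDigits(n) {
  const digits = String(Math.abs(n));
  return new Set(digits).size !== digits.length;
}

hasRepeatedDigits(1234); // false
hasRepeatedDigits(1224); // true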

            QUESTION

            Which JS benchmark site is correct?
            Asked 2020-Jan-08 at 10:20

            I created a benchmark on both jsperf.com and jsben.ch, however, they're giving substantially different results.

            JSPerf: https://jsperf.com/join-vs-template-venryx
            JSBench: http://jsben.ch/9DaxR

            Note that the code blocks are exactly the same.

            On jsperf, block 1 is "61% slower" than the fastest:

            On jsbench, block 1 is only 32% slower than the fastest: ((99 - 75) / 75)

            What gives? I would expect benchmark sites to give the same results, at least within a few percent.

            As it stands, I'm unable to make a conclusion on which option is fastest because of the inconsistency.

            EDIT

            Extended list of benchmarks:

            Not sure which is the best, but I'd skip jsben.ch (the last one) for the reasons Job mentions: it doesn't display the number of runs, the error margin, or the number of operations per second -- which is important for estimating absolute performance impact, and enabling stable comparison between benchmark sites and/or browsers and browser versions.

            (At the moment http://jsbench.me is my favorite.)

            ...

            ANSWER

            Answered 2019-Mar-25 at 14:54

March 2019 Update: results are inconsistent between Firefox and Chrome - perf.zone behaves anomalously on Chrome, jsben.ch behaves anomalously on Firefox. Until we know exactly why, the best you can do is benchmark on multiple websites (but I'd still skip jsben.ch; the others give you at least some error margin, stats on how many runs were taken, and so on).

            TL;DR: running your code on perf.zone and on jsbench.github.io (see here and here), the results closely match jsperf. Personally, and for other reasons than just these results, I trust these three websites more than jsben.ch.

Recently, I tried benchmarking the performance of string concatenation too, but in my case it's building one string out of 1,000,000+ single-character strings (join('') wins for numbers this large and up, btw). On my machine, jsben.ch timed out instead of giving a result at all. Perhaps it works better on yours, but for me that's a big warning sign:

            http://jsben.ch/mYaJk

            http://jsbench.github.io/#26d1f3705b3340ace36cbad7b24055fb

            https://run.perf.zone/view/join-vs-concat-when-dealing-with-very-long-lists-of-single-character-strings-1512490506658

(I can't be bothered to ever have to deal with jsperf's "not all tests inserted" error again, sorry.)

            At the moment I suspect but can't prove that perf.zone has slightly more reliable benchmark numbers:

            • when optimising lz-string I used jsbench.github.io for a very long time, but at some point I noticed there were impossibly large error margins for certain types of code, over 100%.

            • running benchmarks on mobile is fine with jsperf.com and perf.zone, but jsbench.github.io is kinda janky and the CSS breaks while running tests.

            Perhaps these two things are related: perhaps the method that jsbench.github.io uses to update the DOM introduces some kind of overhead that affects the benchmarks (they should meta-benchmark that...).

Note: perf.zone is not without its flaws. It sometimes times out when trying to save a benchmark (the worst time to do so...), and you can only fork your own code, not edit it. But its output still seems to be more in line with jsperf, and it has a really nice "quick" mode for throwaway benchmarking.

            Source https://stackoverflow.com/questions/44612083

            QUESTION

            Transpile forEach, map, filter and for of to length based for loop in JavaScript
            Asked 2020-Jan-01 at 15:35

I have a WebGL project where every micro-optimization matters. For readability I use .forEach, .map, .filter and for...of loops, but I would like to transpile them to simple (reversed) length-based for loops in my production code for performance reasons. Could you please advise how to achieve that using Webpack? I am curious about best practices in this topic.

            ...

            ANSWER

            Answered 2020-Jan-01 at 15:35

            As it turned out, there are two Babel plugins available that do the job quite well.

In my configuration, I use Webpack and ts-loader to transpile TypeScript. To use these Babel plugins while keeping ts-loader, I chain the two loaders, so the Babel plugins optimize the JS code that comes out of ts-loader.

            webpack.config.js:
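The actual config is only available at the source link; a minimal sketch of chaining babel-loader after ts-loader (the plugin list is left as a placeholder, since the answer doesn't name the two plugins here):

module.exports = {
  module: {
    rules: [
      {
        test: /\.ts$/,
        exclude: /node_modules/,
        use: [
          {
            loader: 'babel-loader',
            options: {
              plugins: [
                // the two loop-optimizing Babel plugins go here
              ],
            },
          },
          'ts-loader', // runs first; babel-loader then processes its JS output
        ],
      },
    ],
  },
};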

            Source https://stackoverflow.com/questions/59388617

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install jsperf

            You can download it from GitHub.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, ask on the Stack Overflow community page.
CLONE
• HTTPS: https://github.com/OrKoN/jsperf.git
• CLI: gh repo clone OrKoN/jsperf
• SSH: git@github.com:OrKoN/jsperf.git
