heap | 🔼 MinHeap/MaxHeap & Heap w/ custom comparator | Natural Language Processing library

by datastructures-js | JavaScript | Version: v4.2.2 | License: MIT

kandi X-RAY | heap Summary

heap is a JavaScript library typically used in Artificial Intelligence and Natural Language Processing applications. heap has no bugs, it has no vulnerabilities, it has a permissive license (MIT), and it has low support. You can install it with 'npm i @datastructures-js/heap' or download it from GitHub or npm.

A JavaScript implementation of the Heap data structure. The Heap base class allows creating heaps with a custom comparator, while the MinHeap/MaxHeap classes extend it for use cases that do not require complex comparison, such as primitive values and objects compared on a known prop.
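A minimal usage sketch, assuming the v4 API and the push/pop/top method names from the project README:

```javascript
const { Heap, MinHeap, MaxHeap } = require('@datastructures-js/heap');

// Heap with a custom comparator: order tasks by priority, then by cost.
const tasks = new Heap((a, b) => a.priority - b.priority || a.cost - b.cost);
tasks.push({ name: 'index', priority: 1, cost: 20 });
tasks.push({ name: 'crawl', priority: 1, cost: 5 });
console.log(tasks.pop().name); // 'crawl' (same priority, lower cost)

// MinHeap/MaxHeap handle primitive values without a comparator.
const min = new MinHeap();
[3, 1, 2].forEach((n) => min.push(n));
console.log(min.top()); // 1

const max = new MaxHeap();
[3, 1, 2].forEach((n) => max.push(n));
console.log(max.top()); // 3
```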

            Support

              heap has a low active ecosystem.
              It has 54 stars and 13 forks. There are 3 watchers for this library.
              It had no major release in the last 12 months.
              There is 1 open issue and 8 have been closed. On average, issues are closed in 9 days. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of heap is v4.2.2.

            Quality

              heap has 0 bugs and 0 code smells.

            Security

              heap has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              heap code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              heap is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              heap releases are available to install and integrate.
              A deployable package is available on npm.
              Installation instructions, examples, and code snippets are available.

            heap Key Features

            No Key Features are available at this moment for heap.

            heap Examples and Code Snippets

            No Code Snippets are available at this moment for heap.

            Community Discussions

            QUESTION

            Why does the implementation of std::any use a function pointer + function op codes, instead of a pointer to a virtual table + virtual calls?
            Asked 2022-Apr-08 at 15:31

            Both the GCC and LLVM implementations of std::any store a function pointer in the any object and call that function with an Op/Action argument to perform different operations. Here is an example of that function from LLVM:

            ...

            ANSWER

            Answered 2022-Apr-08 at 15:31

            Consider a typical use case of a std::any: You pass it around in your code, move it dozens of times, store it in a data structure and fetch it again later. In particular, you'll likely return it from functions a lot.

            As it is now, the pointer to the single "do everything" function is stored right next to the data in the any. Given that it's a fairly small type (16 bytes on GCC x86-64), any fits into a pair of registers. Now, if you return an any from a function, the pointer to the "do everything" function of the any is already in a register or on the stack! You can just jump directly to it without having to fetch anything from memory. Most likely, you didn't even have to touch memory at all: You know what type is in the any at the point you construct it, so the function pointer value is just a constant that's loaded into the appropriate register. Later, you use the value of that register as your jump target. This means there's no chance for misprediction of the jump because there is nothing to predict, the value is right there for the CPU to consume.

            In other words: The reason that you get the jump target for free with this implementation is that the CPU must have already touched the any in some way to obtain it in the first place, meaning that it already knows the jump target and can jump to it with no additional delay.

            That means there really is no indirection to speak of with the current implementation if the any is already "hot", which it will be most of the time, especially if it's used as a return value.

            On the other hand, if you use a table of function pointers somewhere in a read-only section (and let the any instance point to that instead), you'll have to go to memory (or cache) every single time you want to move or access it. The size of an any is still 16 bytes in this case, but fetching values from memory is much, much slower than accessing a value in a register, especially if it's not in a cache.

            In a lot of cases, moving an any is as simple as copying its 16 bytes from one location to another, followed by zeroing out the original instance. This is pretty much free on any modern CPU. However, if you go the pointer-table route, you'll have to fetch from memory every time, wait for the reads to complete, and then do the indirect call. Now consider that you'll often have to do a sequence of calls on the any (i.e. move, then destruct) and this will quickly add up.

            The problem is that you don't just get the address of the function you want to jump to for free every time you touch the any; the CPU has to fetch it explicitly. Indirect jumps to a value read from memory are quite expensive since the CPU can only retire the jump operation once the entire memory operation has finished. That doesn't just include fetching a value (which is potentially quite fast because of caches) but also address generation, store-forwarding buffer lookup, TLB lookup, access validation, and potentially even page table walks. So even if the jump address is computed quickly, the jump won't retire for quite a long while. In general, "indirect-jump-to-address-from-memory" operations are among the worst things that can happen to a CPU's pipeline.

            TL;DR: As it is now, returning an any doesn't stall the CPU's pipeline (the jump target is already available in a register so the jump can retire pretty much immediately). With a table-based solution, returning an any will stall the pipeline twice: Once to fetch the address of the move function, then another time to fetch the destructor. This delays retirement of the jump quite a bit since it'll have to wait not only for the memory value but also for the TLB and access permission checks.

            Code memory accesses, on the other hand, aren't affected by this since the code is kept in microcode form anyway (in the µOp cache). Fetching and executing a few conditional branches in that switch statement is therefore quite fast (and even more so when the branch predictor gets things right, which it almost always does).

            Source https://stackoverflow.com/questions/71797625

            QUESTION

            Java PriorityQueue: how to heapify a Collection with a custom Comparator?
            Asked 2022-Mar-10 at 03:24

            For example, given a list of integers, List<Integer> list = Arrays.asList(5, 4, 5, 2, 2), how can I get a max-heap from this List in O(n) time complexity?

            The naive method:

            ...

            ANSWER

            Answered 2021-Aug-28 at 15:09
            If you don't mind some hack:

            According to the Javadoc of PriorityQueue(PriorityQueue):

            Creates a PriorityQueue containing the elements in the specified priority queue. This priority queue will be ordered according to the same ordering as the given priority queue.

            So we can extend PriorityQueue as CustomComparatorPriorityQueue to hold the desired comparator and the Collection we need to heapify. Then call new PriorityQueue(PriorityQueue) with an instance of CustomComparatorPriorityQueue.

            Below is tested to work in Java 15.
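The Java snippet referenced above is not reproduced on this page. As a related aside in the language of the library this page documents, @datastructures-js/heap exposes a static heapify that builds a heap directly from an existing array with a custom comparator; a minimal sketch, assuming the v4 API (check the implementation if the strict O(n) bound matters to you):

```javascript
const { Heap, MaxHeap } = require('@datastructures-js/heap');

// Build a max-heap from an existing array using a custom comparator.
const maxHeap = Heap.heapify([5, 4, 5, 2, 2], (a, b) => b - a);
console.log(maxHeap.top()); // 5

// MaxHeap.heapify covers the primitive case without a comparator.
const same = MaxHeap.heapify([5, 4, 5, 2, 2]);
console.log(same.top()); // 5
```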

            Source https://stackoverflow.com/questions/68960310

            QUESTION

            Typing a closure that returns an anonymous type borrowing from one of its inputs without heap allocation or trait objects
            Asked 2022-Feb-08 at 17:56

            Let's say that I have the following working code:

            ...

            ANSWER

            Answered 2022-Feb-08 at 17:56
            TL;DR

            No, not until closure HRTB inference is fixed. Current workarounds include using function pointers instead or implementing a helper trait on custom structs -- the helper trait is needed regardless of approach until higher-kinded types are introduced in Rust.

            Playground

            Details

            To avoid returning a Box, you would need the type parameter I to be generic over the lifetime 'a, so that you can use it with any lifetime (in a for<'a> bound, for example). Unfortunately, as discussed in a similar question, Rust does not yet support higher-kinded types (type parameters that are themselves generic over other type parameters), so we must use a helper trait:

            Source https://stackoverflow.com/questions/71031932

            QUESTION

            What is 'serviceability memory category' of Native Memory Tracking?
            Asked 2022-Jan-17 at 13:38

            I have a Java app (JDK 13) running in a Docker container. Recently I moved the app to JDK 17 (OpenJDK 17) and found a gradual increase in memory usage by the Docker container.

            During the investigation I found that the 'serviceability' memory category of NMT grows constantly (15 MB per hour). I checked the page https://docs.oracle.com/en/java/javase/17/troubleshoot/diagnostic-tools.html#GUID-5EF7BB07-C903-4EBD-A9C2-EC0E44048D37 but this category is not mentioned there.

            Could anyone explain what this serviceability category means and what can cause such a gradual increase? Also, there are some additional new memory categories compared to JDK 13. Maybe someone knows where I can read details about them.

            Here is the result of command jcmd 1 VM.native_memory summary

            ...

            ANSWER

            Answered 2022-Jan-17 at 13:38

            Unfortunately (?), the easiest way to know for sure what those categories map to is to look at the OpenJDK source code. The NMT tag you are looking for is mtServiceability. This shows that "serviceability" basically covers the diagnostic interfaces in the JDK/JVM: JVMTI, heap dumps, etc.

            But the same kind of thing is clear from observing that the stack trace sample you are showing mentions ThreadStackTrace::dump_stack_at_safepoint -- that is something that dumps thread information, for example for jstack, heap dumps, etc. If you suspect a memory leak in that code, you might try to build an MCVE demonstrating it and submit a bug against OpenJDK, or show it to a fellow OpenJDK developer. You probably know better what your application is doing to cause thread dumps, so focus there.

            That being said, I don't see any obvious memory leaks in StackFrameInfo, nor can I reproduce any leak with stress tests, so maybe what you are seeing is "just" thread dumping over larger and larger thread stacks. Or you captured it while a thread dump was happening. Or... It is hard to say without the MCVE.

            Update: After playing with MCVE, I realized that it reproduces with 17.0.1, but not with either mainline development JDK, or JDK 18 EA, or JDK 17.0.2 EA. I tested with 17.0.2 EA before, so was not seeing it, dang. Bisection between 17.0.1 and 17.0.2 EA shows it was fixed with JDK-8273902 backport. 17.0.2 releases this week, so the bug should disappear after you upgrade.

            Source https://stackoverflow.com/questions/70709971

            QUESTION

            JavaScript: V8 question: are small integers pooled?
            Asked 2022-Jan-17 at 12:37

            I was looking at this V8 design doc, where there is a section on Constant Pool Entries.

            It says:

            Constant pools are used to store heap objects and small integers that are referenced as constants in generated bytecode.

            and:

            ... Small integers and the strong referenced oddball types have bytecodes to load them directly and do not go into the constant pool.

            So I am confused: are small integers pooled or not?

            My understanding is that it is not worth it pooling small integers if sizeof(int) < sizeof(int *) - because it is cheaper to just copy the actual integer instead of copying the pointer that points to the integer in the constant pool. Also variables that hold integers can be optimised to be stored directly in CPU registers and skip being allocated in memory first.

            Also, are they located on the V8 heap or the stack? My understanding had always been that Smis are just immediate values allocated on the stack, rather than a pointer plus an integer allocated on the heap. Also, if you take a heap snapshot using Chrome DevTools, you cannot find Smis in the snapshot; only heap numbers such as big integers, or doubles like 3.14, are on the heap. That was my understanding until I saw this article: https://v8.dev/blog/pointer-compression#value-tagging-in-v8

            JavaScript values in V8 are represented as objects and allocated on the V8 heap, no matter if they are objects, arrays, numbers or strings. This allows us to represent any value as a pointer to an object.

            Now I am just baffled - are smis also allocated on the heap?

            ...

            ANSWER

            Answered 2022-Jan-17 at 12:37

            V8 developer here.

            are small integers pooled or not?

            They are not (at least not right now). That said, this is a small implementation detail and could be done either way: it would totally be possible to use the constant pool for Smis. I suppose the decision to build special machinery for Smis (instead of reusing the general-purpose constant pool) was made because things turned out to be more efficient that way.

            it is not worth it pooling small integers if sizeof(int) < sizeof(int *)

            The details are different (a Smi is not an int, and constant pool slots are referenced by index rather than C++ pointer), but this reasoning does go in the right direction: avoiding indirections can save time and memory.

            are smis also allocated on the heap?

            Yes, everything is allocated on the heap. The stack is only useful for temporary (and sufficiently small) things; that's largely unrelated to the type of thing.

            The "trick" of Smis is that they're not stored as separate objects: when you have an object that refers to a Smi, such as let foo = {smi: 42}, then the value 42 can be smi-encoded and stored directly inside the "foo" object (whereas if the value was 42.5, then the object would store a pointer to a separate "HeapNumber"). But since the object is on the heap, so is the Smi.

            @DanielCruz

            What I understand [...] is that constant small integers are pooled. Variable small integers are not.

            Nope. Any literal that occurs in source code is "constant". Whether you use let or const for your variables has nothing to do with this.
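For readers who want to poke at this from Node, V8's natives syntax exposes the %DebugPrint intrinsic. A sketch (the flag and intrinsic are real, but full object-layout detail is only printed by debug builds of V8):

```javascript
// Run with: node --allow-natives-syntax smi-demo.js
const withSmi = { x: 42 };      // 42 can be Smi-encoded directly inside the object
const withDouble = { x: 42.5 }; // 42.5 is stored as a separate HeapNumber

%DebugPrint(withSmi);
%DebugPrint(withDouble);
```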

            Source https://stackoverflow.com/questions/70734678

            QUESTION

            Why is it allowed for the C++ compiler to optimize out memory allocations with side effects?
            Asked 2022-Jan-09 at 20:34

            Another question discusses the legitimacy of the optimizer removing calls to new: Is the compiler allowed to optimize out heap memory allocations?. I have read the question, the answers, and N3664.

            From my understanding, the compiler is allowed to remove or merge dynamic allocations under the "as-if" rule, i.e. if the resulting program behaves as if no change was made, with respect to the abstract machine defined in the standard.

            I tested compiling the following two-file program with both clang++ and g++ at -O1, and I don't understand how it is allowed to remove the allocations.

            ...

            ANSWER

            Answered 2022-Jan-09 at 20:34

            Allocation elision is an optimization that is outside of and in addition to the as-if rule. Another optimization with the same properties is copy elision (not to be confused with mandatory elision, since C++17): Is it legal to elide a non-trivial copy/move constructor in initialization?.

            Source https://stackoverflow.com/questions/70645211

            QUESTION

            Time complexity for Dijkstra's algorithm with min heap and optimizations
            Asked 2022-Jan-04 at 00:18

            What is the time complexity of this particular implementation of Dijkstra's algorithm?

            I know several answers to this question say O(E log V) when you use a min heap, and so do this article and this article. However, the article here says O(V + E log E), and it has similar (but not exactly the same) logic as the code below.

            Different implementations of the algorithm can change the time complexity. I'm trying to analyze the complexity of the implementation below, but optimizations like checking visitedSet and ignoring repeated vertices in minHeap are making me doubt myself.

            Here is the pseudo code:

            ...

            ANSWER

            Answered 2021-Dec-22 at 00:38
            1. Despite the test, this implementation of Dijkstra may put Ω(E) items in the priority queue. This will cost Ω(E log E) with every comparison-based priority queue.

            2. Why not E log V? Well, assuming a connected, simple, nontrivial graph, we have Θ(E log V) = Θ(E log E) since log (V−1) ≤ log E < log V² = 2 log V.

            3. The O(E + V log V)-time implementations of Dijkstra's algorithm depend on a(n amortized) constant-time DecreaseKey operation, avoiding multiple entries for an individual vertex. The implementation in this question will likely be faster in practice on sparse graphs, however.
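To make the analysis concrete, here is a minimal sketch of the lazy-deletion variant analyzed above, written against the Heap API of the library this page documents (the adjacency-list shape of graph is an assumption):

```javascript
const { Heap } = require('@datastructures-js/heap');

// graph: Map from node to an array of [neighbor, weight] pairs (assumed shape).
function dijkstra(graph, source) {
  const dist = new Map([[source, 0]]);
  const visited = new Set();
  const heap = new Heap((a, b) => a[1] - b[1]); // min-heap on tentative distance
  heap.push([source, 0]);

  while (!heap.isEmpty()) {
    const [node, d] = heap.pop();
    if (visited.has(node)) continue; // stale duplicate: skip it (lazy deletion)
    visited.add(node);
    for (const [next, w] of graph.get(node) ?? []) {
      const candidate = d + w;
      if (candidate < (dist.get(next) ?? Infinity)) {
        dist.set(next, candidate);
        heap.push([next, candidate]); // duplicates allowed, so up to O(E) entries
      }
    }
  }
  return dist;
}
```

Because every edge relaxation may push a fresh entry instead of decreasing a key, the heap can hold O(E) items, which is where the Ω(E log E) bound in point 1 comes from.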

            Source https://stackoverflow.com/questions/70431085

            QUESTION

            Negative productivity in Haskell's runtime statistics
            Asked 2021-Nov-24 at 16:03

            I'm using a program coded in Haskell to which I passed +RTS -N3 -M9G -s -RTS in order to obtain runtime statistics at the end of the execution. I've occasionally had a result where the productivity is negative. Also, the program ran its task successfully but MUT is zero.

            1. How come productivity is negative?
            2. How is it possible for MUT to be zero if the program is completed successfully?
            ...

            ANSWER

            Answered 2021-Nov-19 at 18:31

            There appears to be something very wrong with the calculated GC CPU time. It's 41010 seconds, compared to 2737 seconds elapsed, which doesn't make sense if you're only running on three capabilities.

            This miscalculation means that the calculated MUT CPU time, which is just total CPU time minus INIT, GC, and EXIT time, is actually a large negative number (5073 - 41010 - 2 = -35939). This gives a productivity of -35939/5073 = -708%. When the MUT seconds are displayed, negative numbers are truncated at zero, to avoid reporting small negative numbers when MUT is very low and there's a clock precision error, which is why the displayed MUT time is 0 instead of -35939.

            I don't know why the GC time is so badly miscalculated. My best guess is this: if you're running on Windows, there are known issues with CPU time clock precision, and it's possible that certain unusual patterns of garbage collection timing might result in precision errors occurring in only one direction, slightly overestimating the actual GC time more often than underestimating it. Over 2.4 million collections (see your GC stats), this difference could accumulate to a huge positive error.

            I looked through GitLab issues, and except for the report on general Windows CPU time imprecision and a couple of probably unrelated negative MUT reports here and here, I didn't see anything helpful.

            Source https://stackoverflow.com/questions/70021611

            QUESTION

            Chrome 94 DevTools crashes when doing memory profiling
            Asked 2021-Oct-25 at 01:45

            I was trying to fix a memory leak issue, but when I take a heap snapshot or check real-time allocations in Chrome DevTools, the page crashes with an "Aw, Snap!" message and the error code STATUS_ACCESS_VIOLATION.

            I am using Chrome Version 94.0.4606.61 (Official Build) (64-bit).

            ...

            ANSWER

            Answered 2021-Oct-25 at 01:45

            I found this bug too. A workaround is to use an incognito window for debugging.

            Update: Version 95.0.4638.54 has fixed this issue. But in case a similar issue comes up again, you can always try working around it in an incognito window.

            Source https://stackoverflow.com/questions/69387388

            QUESTION

            Java heap memory allocation limits
            Asked 2021-Oct-02 at 07:51

            I've written the following test to check maximum available heap memory:

            ...

            ANSWER

            Answered 2021-Oct-02 at 07:51

            I believe what happens is that the memory manager tries to align the chunks at the next available 1MB boundary. But as the 1MB arrays actually take up slightly more than 1MB (for storing length and something else), they get arranged with a gap of almost 1MB between them. When reducing the block size by 16 bytes, they suddenly use up the whole memory again.

            Source https://stackoverflow.com/questions/69413967

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install heap

            You can install it with 'npm i @datastructures-js/heap' or download it from GitHub or npm.
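A quick start after installing, as a sketch assuming the v4 API:

```javascript
const { MinHeap } = require('@datastructures-js/heap');

const heap = new MinHeap();
heap.push(10);
heap.push(3);
heap.push(7);
console.log(heap.top());  // 3 (peek at the root)
console.log(heap.pop());  // 3 (extract the root)
console.log(heap.size()); // 2
```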

            Support

            For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.

            Consider Popular Natural Language Processing Libraries

            transformers by huggingface
            funNLP by fighting41love
            bert by google-research
            jieba by fxsjy
            Python by geekcomputers

            Try Top Libraries by datastructures-js

            priority-queue by datastructures-js (JavaScript)
            queue by datastructures-js (JavaScript)
            binary-search-tree by datastructures-js (JavaScript)
            graph by datastructures-js (JavaScript)
            linked-list by datastructures-js (JavaScript)