max-heap | Max Heap is a data structure
kandi X-RAY | max-heap Summary
Max Heap
max-heap Examples and Code Snippets
public static void addNewNumber(int randomNumber) {
    /* Note: addNewNumber maintains a condition that maxHeap.size() >= minHeap.size() */
    if (maxHeap.size() == minHeap.size()) {
        if ((minHeap.peek() != null) && randomNumber > minHeap.peek()) {
            maxHeap.offer(minHeap.poll());
            minHeap.offer(randomNumber);
        } else maxHeap.offer(randomNumber);
    } else {
        if (randomNumber < maxHeap.peek()) minHeap.offer(maxHeap.poll());
        minHeap.offer(randomNumber);
    }
}
public static void printMinHeapAndMaxHeap() {
    Integer[] minHeapArray = minHeap.toArray(new Integer[minHeap.size()]);
    Integer[] maxHeapArray = maxHeap.toArray(new Integer[maxHeap.size()]);
    Arrays.sort(minHeapArray, maxHeapComparator);
    Arrays.sort(maxHeapArray, maxHeapComparator);
    System.out.println("MinHeap: " + Arrays.toString(minHeapArray));
    System.out.println("MaxHeap: " + Arrays.toString(maxHeapArray));
}
void add(int num) {
    if (!minHeap.isEmpty() && num < minHeap.peek()) {
        maxHeap.offer(num);
        if (maxHeap.size() > minHeap.size() + 1) {
            minHeap.offer(maxHeap.poll());
        }
    } else {
        minHeap.offer(num);
        if (minHeap.size() > maxHeap.size() + 1) {
            maxHeap.offer(minHeap.poll());
        }
    }
}
Community Discussions
Trending Discussions on max-heap
QUESTION
I am experiencing an issue on my local system which I cannot figure out how to solve. I am using Bazel, and it looks like the Bazel server process is being blocked by some macOS security setting from opening ports.
...ANSWER
Answered 2021-Nov-10 at 21:08
As embarrassing as it is, restarting the Mac, followed by a sudo rm -rf /private/var/tmp/_bazel_me (just to be sure), fixed it.
QUESTION
I've embedded V8 9.5 into my app (a C++ HTTP server). When I started to use optional chaining in my JS scripts, I noticed an abnormal rise in memory consumption under heavy (CPU) load, leading to OOM. While there is some free CPU, memory usage is normal. I've displayed the V8 HeapStats in Grafana (this is for only 1 isolate; I have 8 in my app).

Under heavy load there's a spike in peak_malloced_memory, while other stats are much less affected and seem normal.

I've passed the --expose-gc flag to V8 and called gc() at the end of my script. It completely solved the problem, and peak_malloced_memory doesn't rise like that. Also, by repeatedly calling gc() I could free all the extra memory consumed without it. --gc-global also works. But these approaches seem more like a workaround than a production-ready solution.

--max-heap-size=64 and --max-old-space-size=64 had no effect: memory consumption still greatly exceeded 8 (the number of isolates in my app) * 64 MB (> 2 GB of physical RAM).

I don't use any GC-related V8 API in my app. My app creates a v8::Isolate and a v8::Context once and uses them to process HTTP requests.

Same behavior on v9.7. Ubuntu Xenial.
Built V8 with these args.gn
...ANSWER
Answered 2021-Nov-29 at 17:46
(V8 developer here.)
- Is it normal that peak_malloced_memory can exceed total_heap_size?
Malloced memory is unrelated to the heap, so yes, when the heap is tiny then malloced memory (which typically also isn't a lot) may well exceed it, maybe only briefly. Note that peak malloced memory (53 MiB in your screenshot) is not current malloced memory (24 KiB in your screenshot); it's the largest amount that was used at any point in the past, but has since been freed (and is hence not a leak, and won't cause an OOM over time).
Not being part of the heap, malloced memory isn't affected by --max-heap-size or --max-old-space-size, nor by manual gc() calls.
- Why could this occur only when using JS's optional chaining?
That makes no sense, and I bet that something else is going on.
- Are there any other, more correct solutions to this problem other than forcing full GC all the time?
I'm not sure what "this problem" is. A brief peak of malloced memory (which is freed again soon) should be fine. Your question title mentions a "leak", but I don't see any evidence of a leak. Your question also mentions OOM, but the graph doesn't show anything related (less than 10 MiB current memory consumption at the end of the plotted time window, with 2GB physical memory), so I'm not sure what to make of that.
Manually forcing GC runs is certainly not a good idea. The fact that it even affects (non-GC'ed!) malloced memory at all is surprising, but may have a perfectly mundane explanation. For example (and I'm wildly speculating here, since you haven't provided a repro case or other more specific data), it could be that the short-term peak is caused by an optimized compilation, and with the forced GC runs you're destroying so much type feedback that the optimized compilation never happens.
Happy to take a closer look if you provide more data, such as a repro case. If the only "problem" you see is that peak_malloced_memory is larger than the heap size, then the solution is simply not to worry about it.
QUESTION
I wrote this insertion function for a max-heap:
...ANSWER
Answered 2021-Nov-26 at 13:25
There are two problems:

- After b has been set to False, the swap is still executed before exiting the loop. Execution should break out of the loop with break. This also makes the boolean b unnecessary.
- The condition int(l/2) < 2 should be int(l/2) < 1, as it is really the parent at index 1 that should be compared with.

Other remarks:

- Make use of the integer division operator (//) instead of flooring the floating-point division.
- When you slice until the end, using 1:len(arryhp), you can omit the part after the colon: 1:.
- It is not nice that the returned list is not the mutated list that was given as argument. Either the input list should not be mutated, or the list that is returned should be the mutated input list. The latter can be done by popping the first value out of the list with .pop(0).
Here is the code with those changes:
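A minimal 1-indexed sift-up along those lines (the function name, the arryhp layout with a dummy slot at index 0, and the exact structure are assumptions, not the answerer's verbatim code):

```python
def insert_maxheap(arryhp, value):
    """Insert value into a 1-indexed max-heap list (arryhp[0] is a dummy slot)."""
    arryhp.append(value)
    l = len(arryhp) - 1                  # position of the newly added value
    while l // 2 >= 1:                   # a parent exists; compare down to index 1
        if arryhp[l // 2] >= arryhp[l]:
            break                        # parent already larger: heap restored
        arryhp[l // 2], arryhp[l] = arryhp[l], arryhp[l // 2]
        l //= 2
    return arryhp                        # the mutated input list
```

The break replaces the boolean flag, and the loop compares against the parent down to index 1, matching both fixes described above.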
QUESTION
Consider the following operations
Build(A[1 . . . n]): Initialises the data structure with the elements of the (possibly unsorted) array A, which may contain duplicates. It runs in time O(n).
Insert(x): Inserts the element x into the data structure. It runs in time O(log n).
Median: Returns the median of the currently stored elements. It runs in time O(1).
How can I describe a data structure, i.e. provide an invariant that I will maintain, supporting these operations? And how can I write the pseudocode for Build(), Insert() and Median()?
UPDATE
Build max-heap/min-heap:
ANSWER
Answered 2021-Oct-13 at 23:24
Build: Use median-of-medians to find the initial median in O(n), and use it to partition the values in half. The half with the smaller values goes into a max-heap, the half with the larger values goes into a min-heap; build each in time O(n). We'll keep the two heaps equally large or differing by at most one element.
Median: The root of the bigger heap, or the mean of the two roots if the heaps have equal size.
Insert: Insert into the bigger heap, then pop its root and insert it into the smaller heap.
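The scheme above can be exercised with Python's heapq. This sketch is an illustration, not the answerer's code: it sorts to find the initial split, which is O(n log n), whereas the answer's O(n) bound requires the median-of-medians partition; the class and method names are assumptions.

```python
import heapq

class MedianStruct:
    def __init__(self, a):
        a = sorted(a)                     # stand-in for the O(n) median-of-medians partition
        mid = (len(a) + 1) // 2
        self.low = [-x for x in a[:mid]]  # max-heap of the smaller half (negated values)
        self.high = a[mid:]               # min-heap of the larger half
        heapq.heapify(self.low)
        heapq.heapify(self.high)

    def insert(self, x):
        # Insert into the bigger heap, then move its root to the smaller heap.
        if len(self.low) >= len(self.high):
            heapq.heappush(self.low, -x)
            heapq.heappush(self.high, -heapq.heappop(self.low))
        else:
            heapq.heappush(self.high, x)
            heapq.heappush(self.low, -heapq.heappop(self.high))

    def median(self):
        # Root of the bigger heap, or the mean of the two roots on a tie.
        if len(self.low) > len(self.high):
            return -self.low[0]
        if len(self.high) > len(self.low):
            return self.high[0]
        return (-self.low[0] + self.high[0]) / 2
```

Inserting into the bigger heap and then moving its root keeps the two halves balanced within one element, so the median is always available at a root in O(1).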
QUESTION
Question: Why do we want the loop index i in line 2 of BUILD-MAX-HEAP to decrease from ⌊length[A]/2⌋ to 1 rather than increase from 1 to ⌊length[A]/2⌋?
Algorithms (courtesy of the Introduction to Algorithms book):
I tried to show this using drawings as follows.
Approach 1: if we apply build-heap the opposite way, increasing from 1 to ⌊length[A]/2⌋, on the array A = <5,3,17,10,84>, we would have:

Approach 2: if we apply build-heap decreasing from ⌊length[A]/2⌋ to 1 on the array A = <5,3,17,10,84>, we would have:
Problem: I see that in both cases the heap property that a parent is larger than its children is maintained, so I don't see why a solution says that there would be a problem, namely that "we won't be allowed to call MAX-HEAPIFY, since it will fail the condition of having the subtrees be max-heaps. That is, if we start with 1, there is no guarantee that A[2] and A[3] are roots of max-heaps."
ANSWER
Answered 2021-Aug-25 at 14:17
The thing is that you can only rely on MAX_HEAPIFY to do its job right when the subtree rooted at i obeys the heap property everywhere except possibly for the root value (at i) itself, which may need to sift down. The job of MAX_HEAPIFY is only to move the value of the root to its right position. It cannot fix any other violations of the heap property. If, however, it is guaranteed that the rest of the tree below i obeys the heap property, then you can be sure that the subtree at i will be a heap after MAX_HEAPIFY has run.

So if you were to start with the top node, then who knows what you will get... the rest of the tree is not expected to obey the heap property, so MAX_HEAPIFY will not (necessarily) deliver a heap. And it doesn't help to continue the work in a top-down fashion.
If we take the example tree and perform the forward-loop alternative, then we start with a call of MAX_HEAPIFY(1) on this tree:
QUESTION
Question: Where in a max-heap might the smallest element reside, assuming that all elements are distinct?

I understand that in a max-heap the largest node is the root and the smallest is one of the leaves. I found an answer which says that it is in any of the leaves, that is, among the elements with index ⌊n/2⌋+k, where k >= 1, i.e. in the second half of the heap array.

Problem: can you please explain why the answer I found does not just say it is one of the leaves? Why does the answer bring up ⌊n/2⌋+k? Second, why the second half of the array, rather than the last level of the tree, given that all parents are greater than their children, so a child at height 1 is smaller than its parent but larger than its own child, and so on?

Edited: Can you please explain why the indices of the leaves are ⌊n/2⌋+1, ⌊n/2⌋+2, ..., n? Or why the index of the last non-leaf node is ⌊n/2⌋ in an array-based heap? We know that the number of nodes at height h is Ceil(n/2^(h+1)), and the number of leaves is Ceil(n/2), so I hope the extra details help in solving the question.
ANSWER
Answered 2021-Aug-22 at 08:06In a zero-based indexed array, the root is at index 0. The children of the root are at index 1 and 2. In general the children of index 𝑖 are at indices 2𝑖+1 and 2𝑖+2.
So for a node to have children, we must have that 2𝑖+1 (i.e. the left child) is still within the range of the array, i.e. 2𝑖+1 < 𝑛, where 𝑛 is the size of the array (i.e. the last index is at 𝑛-1).
Which is the index of the first leaf? That would be the least value of 𝑖 for which 2𝑖+1 < 𝑛 is not true, i.e. when 2𝑖+1 ≥ 𝑛. From that we can derive that:
- All indices that represent leaves are grouped together: they are all the 𝑖 for which 2𝑖+1 ≥ 𝑛.
- The least index of a leaf is 𝑖 = ⌊𝑛/2⌋.
If you are working with a one-based indexed array (as is common in pseudo code), then you can adapt the above reasoning to derive that the first leaf is at index 𝑖 = ⌊𝑛/2⌋+1.
So the answer you quote is assuming a 1-based indexed array, and then it is correct to say that the first leaf is positioned at ⌊𝑛/2⌋+1, and any leaf is at a position ⌊𝑛/2⌋+𝑘, where 𝑘≥1.
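The boundary is easy to check mechanically. A small Python sketch (0-indexed, so the first leaf sits at ⌊n/2⌋ rather than ⌊n/2⌋+1; simulating the max-heap with heapq and negated values is my choice, not something from the thread):

```python
import heapq

def leaf_slice(a):
    # 0-indexed array heap: the leaves occupy indices n//2 .. n-1
    return a[len(a) // 2:]

def smallest_lands_in_leaves(values):
    """Build a max-heap from distinct values and check that the
    minimum ends up in the leaf half of the array."""
    h = [-v for v in values]      # negate so heapq's min-heap acts as a max-heap
    heapq.heapify(h)
    a = [-x for x in h]
    return min(a) in leaf_slice(a)
```

With distinct elements the minimum can have no children (any child would be smaller still), so it must land in the leaf slice, whatever the heap's exact shape.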
QUESTION
I came upon the recursive relation for the max-heapify algorithm when going through CLRS. My teacher had justified, quite trivially in fact, that the time complexity of the max-heapify process was O(logn), simply because the worst case is when the root has to 'bubble/float down' from the top all the way to the last level. This means we travel layer by layer, and hence the number of steps equals the number of levels/height of the heap, which, as we know, is bounded by logn. Fair enough.
The same however was proven in CLRS in a more rigorous manner via a recurrence relation. The worst case was said to occur when the last level is half filled and this has already been explained here. So as far as I understand from that answer, they arrived at this conclusion mathematically: we want to maximise the size of the left subtree relative to the heap size n i.e to maximise the value of L/n. And to achieve this we have to have the last level half filled so that the number of nodes in L (left subtree) is maximized and L/n is maximized.
Adding any more nodes to the last level will increase the number of nodes but bring no change to the value of L. So L/n will decrease, as heap becomes more balanced. All is fine and good as long as it's mathematical.
Now this is where I get stuck: let's say I add one more node to this half-filled level. Practically, I fail to see how this somehow reduces the number of steps/comparisons that occur, so that it is no longer the worst case. Even though I have added one more node, all the comparisons occur only in the left subtree and have nothing to do with the right subtree. Can someone help me see why and how exactly it works out that L/n has to be maximized for the worst case? I would appreciate an example input showing how adding more nodes no longer yields the worst possible case.
...ANSWER
Answered 2021-May-22 at 14:50
Let's say I add one more node in this half-filled level. Practically, I fail to see how this somehow reduces the number of steps/comparisons that occur and is no longer the worst case. Even though I have added one more node, all the comparisons occur only on the left subtree and have nothing to do with the right subtree.
It is correct that this does not reduce the number of steps. However, when we speak of time complexity, we look for a relation between the number of steps and 𝑛. If we were to only look at the number of steps, we would only conclude that the worst case happens when the tree is infinitely large. Although that is a really bad case (the worst), that is not what the book means by "worst case" here. It is not just the number of steps that interests us, it is how that number relates to 𝑛.
We can argue about the terminology here, because usually "worst case" is not about something that depends on 𝑛, but about variations that can exist for a given 𝑛. For instance, when discussing worst case scenarios for a sorting algorithm, the worst and best cases are dependent on how the input data is organised (already sorted, reversed, ...etc). Here "worst case" is used for the shape of the (bottom layer of the) tree, which is directly inferred by the value of 𝑛. Once you have 𝑛, there is no variation possible there.
However, for the recurrence relation, we must find the formula -- in terms of 𝑛 -- that gives an upper limit for the number of nodes in the left subtree, with the constraint that we want this formula to use simple arithmetic (for example: no flooring).
Here is a graph where the blue bars represent the value of 𝑛, and the orange bars represent the number of nodes in the left subtree.
The recurrence relation is based on the idea that the greatest subtree of both subtrees is the left subtree, so it represents the worst case. That subtree has a number of nodes that is somewhere between (𝑛-1)/2 and 2𝑛/3. The ratio between the number of nodes in the left subtree and the total number of nodes is maximised when the left subtree is full, and the right subtree has a lesser height.
Here is the same data represented as a ratio:
You can see where these maxima occur: when 𝑛 is 2, 5, 11, 23, ... the ratio between the number of nodes in the left subtree and 𝑛 approaches 2/3 (about 67%). This 2/3 represents the upper limit for the ratio, and is a safe "catch-all" for all values of 𝑛.

We need this ratio in the recurrence relation: that 2/3 can be rephrased as: the number of nodes in the left subtree has an upper bound of 2𝑛/3. And so the recurrence relation is
𝑇(𝑛) = 𝑇(2𝑛/3) + O(1)
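Those maxima can be reproduced numerically. The counting formula below is my own derivation from the shape of a complete binary tree, not part of the answer:

```python
def left_subtree_size(n):
    """Nodes in the left subtree of the root of a complete binary tree with n nodes."""
    if n <= 1:
        return 0
    h = n.bit_length() - 1           # height of the whole tree
    bottom = n - (2 ** h - 1)        # nodes on the (possibly partial) bottom level
    full_upper = 2 ** (h - 1) - 1    # left subtree's nodes above the bottom level
    return full_upper + min(bottom, 2 ** (h - 1))
```

The ratio left_subtree_size(n) / n peaks at n = 2, 5, 11, 23, ... (bottom level exactly half full) and climbs toward, but never exceeds, the 2𝑛/3 bound used in the recurrence.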
QUESTION
I need to find the median for a given array, having a restriction to use only heaps.
I am aware of the linear selection algorithms for finding the median. Is the following approach (based on heaps only) correct?
- build a max-heap (h) from the given array
- build a max-heap (h1) from the leaves (ceil(n/2) of them) of heap h
- build a min-heap (h2) from the internal nodes (floor(n/2) of them) of heap h
- if n is odd, return max(h1[0], h2[0]); else return (h1[0] + h2[0]) / 2
ANSWER
Answered 2021-Apr-28 at 20:41
No, the algorithm you propose will not work in general. It wrongly assumes that the leaves of the max-heap cannot have a value that is greater than the median. This is not true. Here is a counter-example:
input: [7, 6, 3, 5, 4, 2, 1]
- build max-heap (h) from given array
The input happens to already be structured as a max-heap. It is:
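The counter-example can be walked through concretely. This snippet is mine; it just applies the proposed steps to the input above:

```python
a = [7, 6, 3, 5, 4, 2, 1]      # already a valid max-heap in 0-indexed array form
n = len(a)

internal = a[:n // 2]          # floor(n/2) internal nodes: [7, 6, 3]
leaves = a[n // 2:]            # ceil(n/2) leaves: [5, 4, 2, 1]

h1_root = max(leaves)          # root of the max-heap built from the leaves
h2_root = min(internal)        # root of the min-heap built from the internal nodes

proposed = max(h1_root, h2_root)   # n is odd, so the proposal returns this
true_median = sorted(a)[n // 2]
# proposed is 5 but the true median is 4: leaf 5 is greater than the
# median, which the algorithm wrongly assumed could not happen.
```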
QUESTION
I'm currently implementing a form of the A* algorithm. I decided to use Boost's Fibonacci heap as the underlying priority queue.
My graph is built while the algorithm runs. As the vertex object I'm using:
...ANSWER
Answered 2021-Apr-24 at 19:33
Okay, prepare for a ride.
- First I found a bug
- Next, I fully reviewed, refactored and simplified the code
- When the dust settled, I noticed a behaviour change that looked like a potential logic error in the code
Like I commented at the question, the code complexity is high due to over-reliance on raw pointers without clear semantics.
While I was reviewing and refactoring the code, I found that this had, indeed, led to a bug:
QUESTION
The code is meant to implement a max-heap as a binary heap, but the output is instead 1000 unwanted lines.
...ANSWER
Answered 2021-Mar-27 at 08:46
In response to the "they do the same thing" comment: not quite. If getline fails, the string is left untouched (in this case).
Community Discussions, Code Snippets contain sources that include Stack Exchange Network