min-heap | Min Heap is a data structure | Learning library
kandi X-RAY | min-heap Summary
min-heap
min-heap Key Features
min-heap Examples and Code Snippets
Community Discussions
Trending Discussions on min-heap
QUESTION
I got a sorted min-heap array: 2, 3, 4, 5, NULL. I'd like to insert the value 1 into it; how should the first step be handled?
ANSWER
Answered 2022-Mar-27 at 07:38
A heap of integers does not have NULL values. I suppose the NULL is an indication of the capacity of the heap, meaning it can only take one more value before being full.
Would this be the same if the node was empty instead?
As unusual as this representation is, the NULL cannot mean anything other than that the node is empty. If it were considered data, there would be an inconsistency: the data type used for the values in a heap should be consistent and comparable, so NULL is not an option as data.
When you insert the next value, the first empty slot should be used for it, and then the new value should be sifted up to its correct position.
So from:
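A minimal sketch of that first step in Python, assuming an array-backed binary min-heap where the children of index i live at 2*i + 1 and 2*i + 2; the function names are illustrative, not taken from the question:

def sift_up(heap, i):
    # Move heap[i] up until its parent is no larger than it.
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:
            break
        heap[parent], heap[i] = heap[i], heap[parent]
        i = parent

def insert(heap, value):
    # Put the new value into the first empty slot (the end), then sift it up.
    heap.append(value)
    sift_up(heap, len(heap) - 1)

heap = [2, 3, 4, 5]   # the NULL slot is just unused capacity
insert(heap, 1)
print(heap)           # [1, 2, 4, 5, 3]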
QUESTION
This is a simple program in C.
...ANSWER
Answered 2022-Feb-24 at 04:50
.text is a directive that tells the assembler to start a program code section (the “text” section of the program, a read-only executable section containing mostly instructions to be executed). It is here because GCC without optimization always puts a .text at the top of the file, even if it's about to switch to another section (like .bss in this case) and then back to .text when it's ready to emit some bytes into that section (in your case, a definition for main). GCC does still parse the whole compilation unit before emitting any asm, though; it's not just compiling one global variable / function at a time as it goes along.
.globl a is a directive that tells the assembler that a is a “global” symbol, so its definition should be listed as an external symbol for the linker to link with.
.bss is a directive that tells the assembler to start the “block starting symbol” section (which will contain data that is initialized to zero or, on some systems, mostly older ones, is not initialized).
.type a @object and .size a, 1 are directives that describe the type and size of an object named a. The assembler adds this information to the symbol table or other information in the object file it outputs. It is useful for debuggers to know about the types of objects.
a: is a label. It acts to define the symbol. As the assembler reads assembly, it counts bytes in the section it is currently generating. Each data declaration or instruction takes up some bytes, and the assembler counts those. When it sees a label, it associates the label with the current count. (This count is commonly called the program counter even when it is counting data bytes.) When the assembler writes information about a to the symbol table, it will include the number of bytes a is from the beginning of the section. When the program is loaded into memory, this offset is used to calculate the address where the object a will be in memory.
So the question is why a: is at the bottom
a: must be after .bss because a will be put into the section the assembler is currently working on, so that needs to be set to the desired section before declaring the label. The location of a relative to the other directives might be flexible, so that reordering them would have no consequence.
so I'd like to know, are code 1 and code 2 the same?
No, a: must appear after .bss so that it is put into the correct section.
.zero 1 says to emit 1 zero byte in the current section. Like (almost?) all directives GCC uses, it's well documented in the GNU assembler manual: https://sourceware.org/binutils/docs/as/Zero.html
so again, does my gcc place main in .zero?
No, .text starts (or switches back to) the code section, so main will be in the code section.
is .LFB0: some section of the program that my x86-64 processor can run?
Anything ending with a colon is a label. .LFB0 is a local label the compiler is using in case it needs it as a jump or branch target.
so I'd like to know, if I am coding in assembly, can I ignore the .cfi_startproc line?
When writing assembly for simple functions without exception handling and related features, you can ignore .cfi_startproc and the other call-frame information directives that generate metadata for the .eh_frame section. (That section is not executed; it's just there as data in the file for exception handlers and debuggers to read.)
… if not needed then I can assume my program will become…
If you are omitting some of the .cfi… directives, I would omit all of them, unless you look into what they do and determine which ones can be omitted selectively.
I believe I have to save it with a .s or .S extension; which one, small s or large S?
With GCC and Clang, assembly files ending in .S are processed by the preprocessor before assembly, and assembly files ending in .s are not. This is the preprocessor familiar from C, with #define, #if, and other directives. Other tools may not do this. If you are not using preprocessor features, it generally does not matter whether you use .s or .S.
QUESTION
I have read about memory allocation for applications and I understand that, more or less, the heap in memory for a given application is dynamically allocated at startup. However, there is also another concept called a min-heap, which is a data structure organized in the form of a tree where each parent node is smaller than or equal to its children.
So, my question is: what is the relationship between the heap that is allocated at startup for a given application and the min-heap data structure with operations commonly referred to as 'heapify' and so on? Is there any relationship, or is the min-heap data structure more of a higher-level programming concept? If there is no relationship, is there any reason they've been given the same name?
This may seem like a silly question for some, but it has actually stirred up a debate at work.
...ANSWER
Answered 2022-Feb-17 at 01:38
A heap is a data structure: a complete binary tree with some extra properties. There are 2 types of heaps:
- MIN Heap
- MAX Heap
In a min heap the root has the lowest value in the tree, and when you pop the root the next lowest element comes to the top. To convert a tree into a heap we use the heapify algorithm. In C++ it is also available as the priority queue, and as competitive programmers we usually use the STL so that we don't have to go through the hassle of creating a heap from scratch. A max heap is just the opposite, with the largest value at the root. Heaps are usually used because they have O(log N) time complexity for removing and inserting elements, and hence can work even with tight constraints like 10^6 operations.
Now I can understand your confusion between the heap in memory and the heap data structure, but they are completely different things. The heap in data structures is just a way to organize data.
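To make the data-structure sense of "heap" concrete, here is a small sketch using Python's standard heapq module, which keeps a binary min-heap inside a plain list; a max heap is commonly simulated by negating the keys:

import heapq

data = [5, 1, 9, 3, 7]
heapq.heapify(data)               # O(n): rearrange the list into a min-heap in place
heapq.heappush(data, 2)           # O(log n): insert a new element
print(heapq.heappop(data))        # O(log n): remove and return the root -> 1

# Max heap via negated keys: the smallest negated key is the largest original value.
neg = [-x for x in [5, 1, 9, 3, 7]]
heapq.heapify(neg)
print(-heapq.heappop(neg))        # 9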
QUESTION
I am trying to implement a custom min-heap in Python. I want to store tuples in the heap and order them with respect to the second element in the tuple. Here is my attempt at it:
...ANSWER
Answered 2022-Feb-03 at 17:02
There is nothing wrong with the complexity of your algorithm (note the approximate factor of 10 increase in time when operating on 1e6 instead of 1e5 values). The standard library functions are just faster by a constant factor. That is probably because they are optimized and may even be written in a compiled language, which can run much faster.
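For comparison, a minimal sketch of the same ordering on top of the standard library: heapq compares tuples element by element starting with the first, so pushing a (key, tuple) pair keyed on the second field gives the desired order. The variable names are illustrative, not taken from the question's code:

import heapq

items = [("a", 3), ("b", 1), ("c", 2)]

heap = []
for item in items:
    heapq.heappush(heap, (item[1], item))   # order by the second element of each tuple

while heap:
    _, item = heapq.heappop(heap)
    print(item)                             # ('b', 1) then ('c', 2) then ('a', 3)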
QUESTION
There are N cities connected by N-1 roads. Each adjacent pair of cities is connected by a bidirectional road, i.e. the i-th city is connected to the (i+1)-th city for all 1 <= i <= N-1, given as below:
1 --- 2 --- 3 --- 4...............(N-1) --- N
We got M queries of type (c1, c2) to disconnect the pair of cities c1 and c2. For that we decided to block some roads to meet all these M queries.
Now, we have to determine the minimum number of roads that need to be blocked such that all queries will be served.
Example:
...ANSWER
Answered 2021-Dec-10 at 14:48
For each query, there is an (inclusive) interval of acceptable cut points, so the task is to find the minimum number of cut points that intersect all intervals.
The usual algorithm for this problem, which you can see here, is an optimized implementation of this simple procedure:
- Select the smallest interval end as a cut point
- Remove all the intervals that it intersects
- Repeat until there are no more intervals.
It's easy to prove that it's always optimal to select the smallest interval end:
- The smallest cut point must be <= the smallest interval end, because otherwise that interval won't get cut.
- If an interval intersects any point <= the smallest interval end, then it must also intersect the smallest interval end.
- The smallest interval end is therefore an optimal choice for the smallest cut point.
It takes a little more work, but you can prove that your algorithm is also an implementation of this procedure.
First, we can show that the smallest interval end is always the first one popped off the heap, because nothing is popped until we find a starting point greater than a known endpoint.
Then we can show that the endpoints removed from the heap correspond to exactly the intervals that are cut by that first endpoint. All of their start points must be <= that first endpoint, because otherwise we would have removed them earlier. Note that you didn't adjust your queries into inclusive intervals, so your test says peek() <= start. If they were adjusted to be inclusive, it would say peek() < start.
Finally, we can trivially show that there are always unpopped intervals left on the heap, so you need that +1 at the end.
So your algorithm makes the same optimal selection of cut points. It's more complicated than the other one, though, and harder to verify, so I wouldn't use it.
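A short sketch of that simple procedure in Python, always cutting at the smallest remaining interval end; how the (c1, c2) queries map to inclusive road intervals is an assumption spelled out in the comments, not code from the original answer:

def min_cuts(intervals):
    # intervals: (lo, hi) inclusive ranges of acceptable cut points for each query.
    # Returns the minimum number of cut points that intersect every interval.
    cuts = 0
    last_cut = None
    for lo, hi in sorted(intervals, key=lambda iv: iv[1]):
        if last_cut is None or lo > last_cut:   # not already covered by an earlier cut
            last_cut = hi                       # cut at the smallest interval end
            cuts += 1
    return cuts

# Assuming roads are numbered by their left city, a query (c1, c2) with c1 < c2
# is satisfied by blocking any road i with c1 <= i <= c2 - 1.
queries = [(1, 3), (2, 5), (6, 7)]
intervals = [(c1, c2 - 1) for c1, c2 in queries]
print(min_cuts(intervals))   # 2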
QUESTION
In general, if I understand correctly, there is a runtime difference between "heapifying" a given list, O(n), and adding each individual element, O(log n) per add. Does Java follow this behavior? If not, the question below may not be valid.
The below example appears to create a "min-heap".
...ANSWER
Answered 2021-Nov-28 at 07:08
However, let's say I want to build a "max heap", but the constructor does not let me pass in a collection and a comparator together. In this case, is the only way to build a max heap to create a wrapper class that implements Comparable?
Yes. This class does not provide a constructor that can take a collection and a comparator at the same time. It will use the compareTo method of the collection's elements, so, as you did, you need a Wrapper (but this may seem a little unnecessary?).
repeatedly call add.
You can use PriorityQueue#addAll().
QUESTION
I know how to sort a linked list using merge sort. The question is, why don't we just use a heap to create a sorted LinkedList?
- Traverse the linked list and keep adding items to a min-heap.
- Keep taking the items out of the heap (re-heapifying after each removal) and add them to a new result LinkedList.
Step one will be O(n) for traversing the list and O(nlogn) for adding items to the heap, total O(nlogn) [correct me if I am wrong].
Getting an item out of the heap is O(1), and adding an item as the next node in a LinkedList is O(1). [Correct me if this is wrong.]
So the sort can be done in O(nlogn) if my understanding is correct. This is the same as merge sort. In terms of memory, we are using an extra heap, so total memory can be O(nlogn) I assume. Merge sort can also take O(nlogn) but can be improved to O(logn).
The heap logic is the same as "merging k sorted linked lists". I am assuming each linked list has 1 item.
I might be completely wrong about my complexities on the heap version. If someone knows the exact reason why a heap should not be used [why merge sort is better], please explain. This is not heap sort and this is not an in-place algorithm. If the time complexity is O(n²logn), I am not sure how.
ANSWER
Answered 2021-Nov-20 at 21:01
Pushing items onto the heap and popping them off are both logarithmic operations. Removing an element from the heap is logarithmic because the heap needs to adjust its elements to guarantee the heap invariant. So you would do 2 * n * log n work, and while this is still linearithmic in Big O terms like merge sort, it is probably slower. Or let's say at least that you can do better by using a linearithmic sorting algorithm instead of using a heap for sorting.
What you could do, and I didn't see you mention it, is add all the elements in the linked list to the heap's data store in O(n) time and then heapify it, which also runs in O(n) if implemented correctly. Then you would have 2n + n * log n, which reduces to O(n * log n) with better constants.
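A minimal sketch of that heapify-then-pop approach with Python's heapq, operating on a plain list of the values copied out of the linked list rather than on a real linked-list type:

import heapq

values = [7, 3, 9, 1, 5]     # values traversed out of the linked list, O(n)
heapq.heapify(values)        # O(n) bottom-up heap construction
result = []
while values:
    result.append(heapq.heappop(values))   # n pops at O(log n) each
print(result)                # [1, 3, 5, 7, 9]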
The additional space used by the heap isn't O(n log n). It is linear, and while this depends on the heap implementation, let's say that many standard heap implementations provide that.
When merging k sorted linked lists with a heap you are benefiting from the fact that k is much smaller than the length of the lists themselves. This leads to an O((n1 + n2 + ...) * log k) algorithm, where n1, n2, ... are the lengths of the involved lists. If you didn't use a heap, that algorithm would have run in O((n1 + n2 + ...) * k) time.
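And a small sketch of the k-way merge described here, using Python lists in place of linked lists and a heap that never holds more than k entries; the helper name is illustrative:

import heapq

def merge_k_sorted(lists):
    # Merge k sorted lists in O((n1 + n2 + ...) * log k) using a size-k min-heap.
    heap = []
    for i, lst in enumerate(lists):
        if lst:
            heapq.heappush(heap, (lst[0], i, 0))   # (value, source list, index in it)
    out = []
    while heap:
        value, i, j = heapq.heappop(heap)
        out.append(value)
        if j + 1 < len(lists[i]):
            heapq.heappush(heap, (lists[i][j + 1], i, j + 1))
    return out

print(merge_k_sorted([[1, 4, 7], [2, 5], [3, 6, 8]]))   # [1, 2, 3, 4, 5, 6, 7, 8]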
At the end of the day, if a heap is correctly implemented and used, you will have linearithmic time complexity and linear space complexity for sorting the linked list. But whatever you do, all other variables equal, a standard sorting algorithm has better constants than a heap when it comes to sorting, unless there are some special requirements and constraints. Let me reiterate, both methods are linearithmic in terms of Big O, but sorting algorithms can have better constants for this specific purpose which can mean a lot in real world applications. The reason is that a heap needs to guarantee access in O(1) time to the smallest/largest element at any time. This puts constraints on its behavior and you wouldn't be able to optimize it as much as a standard sorting algorithm when it comes to sorting.
QUESTION
Here is my implementation of Dijkstra, based on pseudocode provided in class. (This was a school assignment, but the project has already been turned in by a teammate. I'm just trying to figure out why my version does not work.)
When I create my own graph with the same graph info as the txt file, it gives me the correct output- the shortest path to each vertex from a given source. When I read in a text file, it does not. It reads in the file and prints the correct adjacency list, but does not give the shortest paths.
Here's where it goes wrong when it runs on the file: on the first iteration of relax, it updates the adjacent vertex distances and parent, but returns to the dijkstra method and the distance/parent are no longer updated. Why is that?
The provided txt file looks like this:
4
0 1,1 3,2
1 2,4
2 1,6 4,7
3 0,3 1,9 2,2
4 0,10 3,5
Sorry if this is a mess, I'm learning!
...ANSWER
Answered 2021-Nov-12 at 15:26
First, the structure of the file is as follows:
QUESTION
I'm having a problem with a C program I'm writing and I just can't find the root cause. What I'm trying to do is read a text file from any directory and put all its bytes on the heap by allocating memory with calloc (so it is zero-initialized).
The problem is that when the file is over a certain size (>= 25 KB, possibly less), I'm not able to write anything to the allocated memory, but I can still read the file!
...ANSWER
Answered 2021-Nov-06 at 12:45
Any ideas or what am I doing wrong? :(
getc() returns 257 different values: the 256 values in the unsigned char range, plus EOF. Saving the result in a char loses information and can cause a valid input byte to be misinterpreted as an EOF signal (when char is signed) or can cause byteFromFile == EOF never to be true (when char is unsigned).
QUESTION
Consider the following operations:
Build(A[1 . . . n]): Initialises the data structure with the elements of the (possibly unsorted) array A, which may contain duplicates. It runs in time O(n).
Insert(x): Inserts the element x into the data structure. It runs in time O(log n).
Median: Returns the median of the currently stored elements. It runs in time O(1).
How can I describe a data structure, i.e. provide an invariant that I will maintain, that supports these operations? How can I write the pseudocode for Build(), Insert() and Median()?
UPDATE
Build max-heap/min-heap:
ANSWER
Answered 2021-Oct-13 at 23:24
Build: Use median-of-medians to find the initial median in O(n), use it to partition the values in half. The half with the smaller values goes into a max-heap, the half with the larger values goes into a min-heap; build each in time O(n). We'll keep the two heaps equally large or differing by at most one element.
Median: The root of the bigger heap, or the mean of the two roots if the heaps have equal size.
Insert: Insert into the bigger heap, then pop its root and insert it into the smaller heap.
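A compact sketch of this two-heap scheme using Python's heapq, which only provides a min-heap, so the lower half is stored with negated keys to act as a max-heap. Build here simply inserts element by element, which is O(n log n), a simplification of the O(n) median-of-medians construction described above:

import heapq

class MedianStructure:
    def __init__(self, values=()):
        self.low = []    # max-heap of the smaller half (values stored negated)
        self.high = []   # min-heap of the larger half
        for x in values:              # simplified Build: n inserts, O(n log n)
            self.insert(x)

    def insert(self, x):
        # Push onto the lower half, move its largest element to the upper half,
        # then rebalance so the halves differ in size by at most one.
        heapq.heappush(self.low, -x)
        heapq.heappush(self.high, -heapq.heappop(self.low))
        if len(self.high) > len(self.low) + 1:
            heapq.heappush(self.low, -heapq.heappop(self.high))

    def median(self):
        if len(self.high) > len(self.low):
            return self.high[0]
        if len(self.low) > len(self.high):
            return -self.low[0]
        return (-self.low[0] + self.high[0]) / 2

m = MedianStructure([5, 2, 8, 1])
print(m.median())   # 3.5
m.insert(7)
print(m.median())   # 5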
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install min-heap
Support