Huffman-Compression | Java Implementation for Huffman Compression | Compression library
kandi X-RAY | Huffman-Compression Summary
Java Implementation for Huffman Compression
Top functions reviewed by kandi - BETA
- Iteratively builds the tree
- Removes the first node from the tree
- Extends a new branch
- Creates a new branch
- Entry point for testing
- Adds another node to this one
- Decompresses a byte array
- Compresses the contents of the data
- Prints the status
- Prints the tree to stdout
- Returns the number of nodes in this subtree
- Entry point for the example
- Reads a byte
Huffman-Compression Key Features
Huffman-Compression Examples and Code Snippets
Community Discussions
Trending Discussions on Huffman-Compression
QUESTION
This may be a duplicate of the question here: Predict Huffman compression ratio without constructing the tree
So basically, I have the probability distributions of two datasets with the same variables but different probabilities. Now, is there any way that, by looking at the variable distributions, I can say with some degree of confidence that one dataset, when passed through a Huffman coding implementation, would achieve a higher compression ratio than the other?
One of the solutions that I came across was to calculate the upper bound using conditional entropy and then compute the average code length. Is there any other approach that I could explore before using the said method?
Thanks a lot.
...ANSWER
Answered 2017-Jun-17 at 17:24
I don't know what "to some degree confidently" means, but you can get a lower bound on the compressed size of each set by computing the zero-order entropy as done in the linked question (the negative of the sum of the probabilities times the log of the probabilities, H = -Σ p·log2(p)). The set with the lower entropy will very likely produce a shorter Huffman coding than the one with the higher entropy. It is not definite, as I am sure one could come up with a counter-example.
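As a concrete illustration of that zero-order entropy bound, here is a minimal self-contained Java sketch; the class and method names are illustrative and are not part of this library:

```java
// Zero-order entropy H = -sum(p * log2(p)), a lower bound on the
// average Huffman code length in bits per symbol.
public final class EntropyBound {

    static double entropyBitsPerSymbol(double[] probabilities) {
        double h = 0.0;
        for (double p : probabilities) {
            if (p > 0.0) {                        // 0 * log(0) is taken as 0
                h -= p * (Math.log(p) / Math.log(2.0));
            }
        }
        return h;
    }

    public static void main(String[] args) {
        // Two made-up distributions over the same four symbols.
        double[] setA = {0.5, 0.25, 0.125, 0.125}; // skewed
        double[] setB = {0.25, 0.25, 0.25, 0.25};  // uniform
        System.out.printf("H(A) = %.3f bits/symbol%n", entropyBitsPerSymbol(setA)); // 1.750
        System.out.printf("H(B) = %.3f bits/symbol%n", entropyBitsPerSymbol(setB)); // 2.000
    }
}
```

The set with the lower value (here setA) is the likelier of the two to Huffman-code shorter, per the answer above.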
You also need to send a description of the code itself if you want to decode the data on the other end, which adds a wrinkle to the comparison. However, if the data is much larger than the code description, that overhead will be lost in the noise.
Simply generating the code, the coded data, and the code description is very fast. The best solution is to do that, and compare the resulting number of bits directly.
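Following that suggestion, the exact coded size can be computed without even assigning code words, because the total number of Huffman-coded bits equals the sum of the internal node weights produced while merging. Here is a minimal sketch under assumed names, working from frequency counts rather than probabilities:

```java
import java.util.PriorityQueue;

// Sketch: exact Huffman-coded size from symbol frequency counts, so two
// datasets can be compared directly (illustrative code, not this library's API).
public final class HuffmanCost {

    /** Total coded bits: each heap merge adds (a + b) bits, one per symbol below it. */
    static long codedBits(long[] frequencies) {
        PriorityQueue<Long> heap = new PriorityQueue<>();
        for (long f : frequencies) {
            if (f > 0) heap.add(f);
        }
        if (heap.size() == 1) return heap.poll(); // single symbol: 1 bit each
        long totalBits = 0;
        while (heap.size() > 1) {
            long merged = heap.poll() + heap.poll(); // two smallest weights
            totalBits += merged;                     // sum of internal-node weights
            heap.add(merged);
        }
        return totalBits;
    }

    public static void main(String[] args) {
        long[] datasetA = {50, 25, 15, 10}; // made-up frequency counts
        long[] datasetB = {25, 25, 25, 25};
        System.out.println("A: " + codedBits(datasetA) + " bits"); // 175
        System.out.println("B: " + codedBits(datasetB) + " bits"); // 200
    }
}
```

Dividing the result by the total symbol count gives bits per symbol, which can be compared against the entropy lower bound from the previous sketch.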
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install Huffman-Compression
You can use Huffman-Compression like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the Huffman-Compression component as you would any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org; for Gradle installation, please refer to gradle.org.
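For example, with Gradle a locally downloaded jar can be put on the classpath like this; the file name and the libs/ location are assumptions, so adjust them to wherever you saved the artifact:

```groovy
// build.gradle -- assumes the downloaded jar sits in a local libs/ directory
dependencies {
    implementation files('libs/Huffman-Compression.jar') // hypothetical file name
}
```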