tuddy | import, sync todo/story/task systems
kandi X-RAY | tuddy Summary
Export, import, and sync todo/story/task systems such as GitHub, Trello, Pivotal Tracker, JIRA, Sprintly, etc.
Community Discussions
Trending Discussions on tuddy
QUESTION
As the title suggests, this is a question about an implementation detail of HashMap#resize - specifically, the point at which the inner array is doubled in size. It's a bit wordy, but I've really tried to show that I did my best to understand this...
This happens at a point when the entries in a particular bucket/bin are stored in a linked fashion - thus having an exact order, which is important in the context of this question. Generally, resize could be called from other places as well, but let's look at this case only.
Suppose you put these strings as keys into a HashMap (on the right is the hashcode after HashMap#hash - that's the internal re-hashing). Yes, these keys are carefully generated, not random.
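For context, the internal re-hashing mentioned above is the hash-spreading step JDK 8's HashMap applies to every key's hashCode() before indexing into the table. A minimal sketch mirroring the OpenJDK source (the method name spread is mine):

    // Mirrors java.util.HashMap#hash in OpenJDK 8: XOR the high 16 bits of
    // the key's hashCode into the low 16 bits, so that table indexing
    // (index = hash & (table.length - 1), which keeps only the low bits)
    // still feels the influence of the high bits.
    static int spread(Object key) {
        int h;
        return (key == null) ? 0 : (h = key.hashCode()) ^ (h >>> 16);
    }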
ANSWER
Answered 2017-Aug-02 at 14:18

Order in a Map is really bad [...]
It's not bad, it's (in academic terminology) whatever. What Stuart Marks wrote at the first link you posted:
[...] preserve flexibility for future implementation changes [...]
Which means (as I understand it) that the implementation currently happens to keep the order, but if a better implementation is found in the future it will be used, whether it keeps the order or not.
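To make that concrete, here is a small illustrative snippet (the keys are mine, not from the discussion) contrasting HashMap, whose iteration order is unspecified, with LinkedHashMap, whose iteration order is contractually the insertion order:

    import java.util.HashMap;
    import java.util.LinkedHashMap;
    import java.util.Map;

    public class OrderDemo {
        public static void main(String[] args) {
            Map<String, Integer> hash = new HashMap<>();          // order: implementation detail
            Map<String, Integer> linked = new LinkedHashMap<>();  // order: insertion order
            for (String k : new String[] {"delta", "alpha", "charlie", "bravo"}) {
                hash.put(k, k.length());
                linked.put(k, k.length());
            }
            // The HashMap line may print in any order, and that order can
            // change across JDK versions or as the map resizes; the
            // LinkedHashMap line always prints delta, alpha, charlie, bravo.
            System.out.println("HashMap:       " + hash);
            System.out.println("LinkedHashMap: " + linked);
        }
    }

If code relies on iteration order anywhere, reaching for LinkedHashMap (or TreeMap for sorted order) documents that reliance and survives implementation changes.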
QUESTION
A HashMap has the following phrase in its documentation:
If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.
Notice how the documentation says rehash, not resize - even though a rehash only happens when a resize does; that is, when the internal bucket array doubles in size.
And of course HashMap provides a constructor where we can specify this initial capacity:
Constructs an empty HashMap with the specified initial capacity and the default load factor (0.75).
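Combining that constructor with the load-factor rule quoted earlier, one way to size a map for a known number of entries is the following sketch (my own illustration, not code from the question):

    import java.util.HashMap;
    import java.util.Map;

    public class CapacityDemo {
        // Smallest initial capacity that, per the quoted documentation, lets
        // `expected` entries fit without a rehash under the default load
        // factor of 0.75.
        static int capacityFor(int expected) {
            return (int) Math.ceil(expected / 0.75);
        }

        public static void main(String[] args) {
            // 12 expected entries -> initial capacity 16
            Map<String, Integer> map = new HashMap<>(capacityFor(12));
            System.out.println("sized for 12 entries with capacity " + capacityFor(12));
        }
    }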
OK, seems easy enough:
...

ANSWER
Answered 2018-Oct-08 at 21:24

The line from the documentation,
If the initial capacity is greater than the maximum number of entries divided by the load factor, no rehash operations will ever occur.
indeed dates from before the tree-bin implementation was added in JDK 8 (JEP 180). You can see this text in the JDK 1.6 HashMap documentation. In fact, this text dates all the way back to JDK 1.2 when the Collections Framework (including HashMap) was introduced. You can find unofficial versions of the JDK 1.2 docs around the web, or you can download a version from the archives if you want to see for yourself.
I believe this documentation was correct up until the tree-bin implementation was added. However, as you've observed, there are now cases where it's incorrect. The policy is no longer simply that resizing occurs when the number of entries divided by the load factor exceeds the capacity (really, the table length). As you noted, a resize can also occur if the number of entries in a single bucket exceeds TREEIFY_THRESHOLD (currently 8) but the table length is smaller than MIN_TREEIFY_CAPACITY (currently 64).
You can see this decision in the treeifyBin() method of HashMap.
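The effect is easy to observe. In the sketch below (my own experiment, not part of the answer), every key collides into a single bucket; once the ninth entry lands in that bucket, treeifyBin() runs, and because the table is still shorter than MIN_TREEIFY_CAPACITY it doubles the table instead of building a tree. The reflective peek at HashMap's private table field requires --add-opens java.base/java.util=ALL-UNNAMED on JDK 9 and later:

    import java.lang.reflect.Field;
    import java.util.HashMap;
    import java.util.Map;

    public class TreeifyDemo {
        // Every instance reports the same hashCode, so all entries land in
        // one bucket; equals still distinguishes them by id.
        static final class Colliding {
            final int id;
            Colliding(int id) { this.id = id; }
            @Override public boolean equals(Object o) {
                return o instanceof Colliding && ((Colliding) o).id == id;
            }
            @Override public int hashCode() { return 42; }
        }

        static int tableLength(Map<?, ?> map) throws Exception {
            Field f = HashMap.class.getDeclaredField("table");
            f.setAccessible(true); // --add-opens java.base/java.util=ALL-UNNAMED on JDK 9+
            Object[] table = (Object[]) f.get(map);
            return table == null ? 0 : table.length;
        }

        public static void main(String[] args) throws Exception {
            Map<Colliding, Integer> map = new HashMap<>(16);
            for (int i = 1; i <= 10; i++) {
                map.put(new Colliding(i), i);
                // Expected output: the length stays 16 through 8 entries,
                // then the 9th and 10th puts trigger treeifyBin(), which
                // resizes to 32 and then 64 because both 16 and 32 are
                // below MIN_TREEIFY_CAPACITY (64).
                System.out.printf("entries=%2d table length=%d%n", i, tableLength(map));
            }
        }
    }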
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported