DS-Algorithm | My collection of data structure and algorithm problems solved on online judges (OJs), with solutions. Currently about 400 problems.
kandi X-RAY | DS-Algorithm Summary
My collection of data structure and algorithm problems solved on online judges, with solutions. There are currently 400+ problems, mainly from Leetcode, HiHoCoder, NowCoder, the CCF CSP certification exam, and other OJs. Leetcode accounts for the largest share, with the most complete categorization and the most write-ups.
Community Discussions
Trending Discussions on DS-Algorithm
QUESTION
I have a boids flocking simulation set up. It originally worked by having every boid loop over every other boid each frame, so that each one always knows where the others are and can tell which are close or far away. I then switched to a quadtree design so that each boid only has to loop through boids that are actually nearby. However, this has made virtually no improvement to the simulation's FPS; it's as if I'm still looping through every single boid.
Is there some mistake in my implementation? The repo is here; the relevant code is mostly in main.js, quadtree.js, and boid.js. The live site is here.
...ANSWER
Answered 2021-May-02 at 13:02
The reason why you are not seeing obvious performance gains from the Quadtree is because of the nature of your simulation. Currently, the default separation causes a large number of boids to "converge" to the same position.
Lots of objects in the same position will negate the possible speedups from spatial partitioning. If all the objects are in the same or nearly the same position, boids in that area are forced to check against all other boids in the area.
You can demonstrate to yourself that your Quadtree is working by watching or profiling your application with its default settings. Now turn separation up to maximum. You will see visually, or through profiling, that as the boids spread out more evenly, the FPS increases significantly. This is because the Quadtree can now prevent computations thanks to its spatial partitioning.
You can see how performance is increased in the second image. Also note that the conjecture by another commenter, that it is the construction of the Quadtree (insert) that takes up all the time, is wrong.
While in some applications you may be able to update a Quadtree incrementally as things move around, in this simulation every boid moves every frame, so reconstructing the Quadtree from scratch is less work than taking every object out and reinserting it at its new position.
The advice to skip square-rooting and just compare squared distances is good, though, as this will get you a bit more performance.
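For illustration only (the repo's actual code is JavaScript; this is a hypothetical C++ sketch, with the Boid struct and function name invented for the example), a neighbor check that compares squared distances and never calls sqrt might look like this:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical boid: the real fields live in boid.js; this is a sketch.
struct Boid {
    float x, y;
};

// Collect indices of candidates within `radius` of `self`, comparing squared
// distances so no square root is taken per candidate.
std::vector<std::size_t> neighbors(const Boid& self,
                                   const std::vector<Boid>& candidates,
                                   float radius) {
    const float r2 = radius * radius;   // square once, outside the loop
    std::vector<std::size_t> out;
    for (std::size_t i = 0; i < candidates.size(); ++i) {
        const float dx = candidates[i].x - self.x;
        const float dy = candidates[i].y - self.y;
        if (dx * dx + dy * dy <= r2)    // dist^2 <= r^2  <=>  dist <= r
            out.push_back(i);
    }
    return out;
}
```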
QUESTION
For a computer science project I had to implement Lloyd's algorithm, which seems to have worked pretty well. I'd like to visualize the iterations, which also mostly works already:
...ANSWER
Answered 2020-Dec-15 at 21:33
Make the following changes in your code:
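The specific changes from this answer are not preserved in the excerpt. For context only, here is a minimal, hypothetical sketch of one Lloyd's (k-means) iteration, independent of the asker's code and written in C++ for concreteness (the Point struct and lloyd_step name are invented for the example):

```cpp
#include <cstddef>
#include <vector>

struct Point { double x, y; };

// One Lloyd (k-means) iteration: assign every point to its nearest centroid,
// then move each centroid to the mean of the points assigned to it.
// Returns true if any centroid moved, i.e. another iteration is worthwhile.
bool lloyd_step(const std::vector<Point>& points, std::vector<Point>& centroids) {
    std::vector<Point> sums(centroids.size(), Point{0.0, 0.0});
    std::vector<int> counts(centroids.size(), 0);

    for (const Point& p : points) {
        std::size_t best = 0;
        double best_d2 = 1e300;
        for (std::size_t c = 0; c < centroids.size(); ++c) {
            const double dx = p.x - centroids[c].x;
            const double dy = p.y - centroids[c].y;
            const double d2 = dx * dx + dy * dy;
            if (d2 < best_d2) { best_d2 = d2; best = c; }
        }
        sums[best].x += p.x;
        sums[best].y += p.y;
        ++counts[best];
    }

    bool moved = false;
    for (std::size_t c = 0; c < centroids.size(); ++c) {
        if (counts[c] == 0) continue;              // leave empty clusters alone
        const Point next{sums[c].x / counts[c], sums[c].y / counts[c]};
        if (next.x != centroids[c].x || next.y != centroids[c].y) moved = true;
        centroids[c] = next;
    }
    return moved;
}
```

Visualizing the iterations then amounts to recording the centroids (and, if desired, the point assignments) after each lloyd_step call.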
QUESTION
I want to efficiently generate a random sample of unique (non-repeated) integers in a (closed) range [0, rnd_max], with each number in the range being possible to choose, and each being associated with a sample weight (the more weight, the more likely it should be that the number is chosen, with probability exactly weight[i] / sum(weight[not_taken]) of being chosen next if it's not already taken in the sample).
I see C++ has std::discrete_distribution, which can generate random weighted integers, but if I use it to generate random integers and discard repeated ones, then when the sample to take is large relative to the length of the possible range, there will be a lot of failed draws that are already taken, resulting in a highly inefficient procedure. It's not clear to me whether Floyd's algorithm has some extension to the case with sample weights (https://math.stackexchange.com/questions/178690/whats-the-proof-of-correctness-for-robert-floyds-algorithm-for-selecting-a-sin) - I personally cannot think of one.
It's also possible to e.g. use std::discrete_distribution repeatedly, dropping the weight of each taken number to zero, or to perform a partial weighted shuffle as in this answer: C++. Weighted std::shuffle. But in that answer, std::discrete_distribution is re-generated at each iteration, so the running time becomes quadratic (it needs to cycle through the weights that are passed to it every time).
I'm wondering what an efficient weighted random sampler for unique integers in C++ would look like, one that works well for varying sample sizes (e.g. from 1% to 90% of the numbers in the available range).
...ANSWER
Answered 2019-Aug-21 at 23:49
There is a nice way to solve this problem using augmented binary search trees. It gives an O(k log n)-time algorithm for sampling k elements at random.
The idea goes like this. Let's imagine that you stash all your elements in an array, in sorted order, with each element tagged with its weight. You could then solve this problem (inefficiently) as follows:
- Generate a random number between 0 and the total weight of all elements.
- Iterate over the array until you find an element such that the random number is in the "range" spanned by that element. Here, the "range" represents the window of weights from the start of that element to the start of the next element.
- Remove that element and repeat.
If you implement this as mentioned above, each pass of picking a random element will take time O(n): you have to iterate over all the elements of the array, then remove a single element somewhere once you've picked it. That's not great; the overall runtime is O(kn).
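As a concrete baseline, a hypothetical C++ sketch of this naive procedure (function name and signature invented for the example) could look like:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// Naive O(k*n) weighted sampling without replacement, exactly as in the
// three steps above: draw a point on the weight line, scan to find the
// element whose range contains it, then remove that element.
std::vector<int> sample_naive(std::vector<int> values,
                              std::vector<double> weights,
                              std::size_t k, std::mt19937& rng) {
    double total = 0.0;
    for (double w : weights) total += w;

    std::vector<int> sample;
    while (sample.size() < k && !values.empty()) {
        double r = std::uniform_real_distribution<double>(0.0, total)(rng);
        std::size_t i = 0;
        double run = weights[0];
        while (run < r && i + 1 < weights.size()) run += weights[++i];  // O(n) scan

        sample.push_back(values[i]);
        total -= weights[i];                       // shrink the weight line
        values.erase(values.begin() + i);          // O(n) removal
        weights.erase(weights.begin() + i);
    }
    return sample;
}
```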
We can slightly improve upon this idea in the following way. When storing all the elements in the array, have each element store both its actual weight and the combined weight of all elements that come before it. Now, to find which element you're going to sample, you don't need to use a linear search. You can instead use a binary search over the array to locate your element in time O(log n). However, the overall runtime of this approach is still O(n) per iteration, since that's the cost of removing the element you picked, so we're still in O(kn) territory.
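The binary-search step in isolation might look like the following sketch, using std::partial_sum and std::upper_bound over the cumulative weights:

```cpp
#include <algorithm>
#include <cstddef>
#include <numeric>
#include <random>
#include <vector>

// One weighted pick in O(log n) via cumulative weights. The prefix array is
// rebuilt here for clarity; the paragraph above assumes it is stored once
// alongside the elements. Removing the picked element from a flat array is
// still the O(n) part.
std::size_t weighted_pick(const std::vector<double>& weights, std::mt19937& rng) {
    std::vector<double> prefix(weights.size());
    std::partial_sum(weights.begin(), weights.end(), prefix.begin());

    double r = std::uniform_real_distribution<double>(0.0, prefix.back())(rng);
    // First index whose cumulative weight exceeds r.
    return static_cast<std::size_t>(
        std::upper_bound(prefix.begin(), prefix.end(), r) - prefix.begin());
}
```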
However, if you store the elements not in a sorted array where each element stores the weight of all elements before it, but in a balanced binary search tree where each element stores the weight of all elements in its left subtree, you can simulate the above algorithm (the binary search gets replaced with a walk over the tree). Moreover, this has the advantage that removing an element from the tree can be done in time O(log n), since it's a balanced BST.
(If you're curious how you'd do the walk to find the element that you want, do a quick search for "order statistics tree." The idea here is essentially a generalization of this idea.)
Following the advice from @dyukha, you can get O(log n) time per operation by building a perfectly-balanced tree from the items in time O(n) (the items don't actually have to be sorted for this technique to work - do you see why?), then using the standard tree deletion algorithm each time you need to remove something. This gives an overall solution runtime of O(k log n).
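One concrete way to realize this bookkeeping, swapped in here for the augmented BST the answer describes, is a Fenwick (binary indexed) tree over the weights. The sketch below is an illustration under that substitution, not the answer's own code:

```cpp
#include <cstddef>
#include <random>
#include <vector>

// O(log n)-per-draw weighted sampling without replacement: sampling walks
// down the implicit Fenwick tree, and "deleting" an element just zeroes
// its weight with one point update.
class WeightedSampler {
public:
    explicit WeightedSampler(std::vector<double> w)
        : n_(w.size()), tree_(w.size() + 1, 0.0), weight_(std::move(w)) {
        while (top_ * 2 <= n_) top_ *= 2;
        for (std::size_t i = 0; i < n_; ++i) {
            add(i + 1, weight_[i]);
            total_ += weight_[i];
        }
    }

    // Draw a 0-based index with probability weight[i] / (remaining total),
    // then remove it from future draws.
    std::size_t draw(std::mt19937& rng) {
        double r = std::uniform_real_distribution<double>(0.0, total_)(rng);
        std::size_t pos = 0;   // ends as the largest index with prefix sum < r
        for (std::size_t step = top_; step > 0; step >>= 1)
            if (pos + step <= n_ && tree_[pos + step] < r) {
                pos += step;
                r -= tree_[pos];
            }
        total_ -= weight_[pos];          // pos is the chosen 0-based index
        add(pos + 1, -weight_[pos]);
        weight_[pos] = 0.0;
        return pos;
    }

private:
    void add(std::size_t i, double delta) {        // Fenwick point update
        for (; i <= n_; i += i & (~i + 1)) tree_[i] += delta;
    }

    std::size_t n_;
    std::vector<double> tree_;
    std::vector<double> weight_;
    double total_ = 0.0;
    std::size_t top_ = 1;
};
```

Construction here costs O(n log n) (n point updates), slightly worse than the O(n) perfectly-balanced build mentioned above; each of the k draws then costs O(log n), matching the O(k log n) sampling bound.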
QUESTION
For a university project I wanted to implement Bellard's algorithm for calculating the n-th digit of pi in Fortran. I stumbled across this question on math.stackexchange: https://math.stackexchange.com/questions/1776840/confusion-with-bellards-algorithm-for-pi
With the answer to that question I managed to implement the code, but I'm not getting a result and I can't figure out what I'm doing wrong:
...ANSWER
Answered 2018-Oct-16 at 13:37
There is a bug in your code:
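The asker's Fortran code and the specific bug are not preserved in this excerpt. For reference, the series the algorithm is built on, Bellard's formula, can be checked with a direct summation sketch (in C++ rather than the question's Fortran; the n-th-digit extraction variant evaluates the same terms modulo powers of two):

```cpp
#include <cmath>
#include <cstdio>

// Direct summation of Bellard's series for pi. It converges by roughly
// three decimal digits per term, so a dozen terms exhaust double precision.
int main() {
    double pi = 0.0;
    for (int n = 0; n < 12; ++n) {
        const double a = 4.0 * n, b = 10.0 * n;
        double term = -32.0 / (a + 1) - 1.0 / (a + 3)
                      + 256.0 / (b + 1) - 64.0 / (b + 3)
                      - 4.0 / (b + 5) - 4.0 / (b + 7)
                      + 1.0 / (b + 9);
        term /= std::pow(2.0, 10.0 * n);           // 1 / 2^(10n)
        if (n % 2 == 1) term = -term;              // (-1)^n
        pi += term;
    }
    pi /= 64.0;                                    // leading 1/2^6 factor
    std::printf("%.15f\n", pi);                    // ~3.141592653589793
    return 0;
}
```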
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported