Barnes-Hut | A simple implementation of the Barnes-Hut quadtree algorithm | Learning library
kandi X-RAY | Barnes-Hut Summary
A simple implementation of the Barnes-Hut quadtree algorithm
Support
Quality
Security
License
Reuse
Top functions reviewed by kandi - BETA
- Push a point p
- Normalize this vector
- Gets the magnitude
- Get the angle in degrees
- Get the angle
- Convert an angle to degrees
- Move the current position
- Add another vector
- Multiply the vector by the given scale
- Gets the delta
- Gets the current time in milliseconds
- Creates a VBO
- Main program
- Set magnitude
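The operations above suggest the usual shape of the algorithm: points are pushed into a quadtree, and each node caches the total mass and centre of mass of its subtree. A minimal sketch of that core idea (in Python, not this library's Java API; all names are illustrative, and coincident points are not handled):

```python
# Sketch of a Barnes-Hut quadtree: pushing a point updates the aggregate
# mass and centre of mass on the way down, splitting leaves as needed.

class Quad:
    def __init__(self, cx, cy, half):
        self.cx, self.cy, self.half = cx, cy, half  # square region: centre + half-width
        self.body = None        # (x, y, mass) for a leaf holding one body
        self.children = None    # four sub-quads once the node is split
        self.mass = 0.0
        self.comx = self.comy = 0.0  # centre of mass of everything below

    def contains(self, x, y):
        return abs(x - self.cx) <= self.half and abs(y - self.cy) <= self.half

    def push(self, x, y, m):
        """Insert a body (the 'push a point p' operation)."""
        total = self.mass + m   # update aggregates before descending
        self.comx = (self.comx * self.mass + x * m) / total
        self.comy = (self.comy * self.mass + y * m) / total
        self.mass = total
        if self.children is None and self.body is None:
            self.body = (x, y, m)       # empty leaf: store the body here
            return
        if self.children is None:       # occupied leaf: split into quadrants
            h = self.half / 2
            self.children = [Quad(self.cx + dx * h, self.cy + dy * h, h)
                             for dx in (-1, 1) for dy in (-1, 1)]
            ox, oy, om = self.body
            self.body = None
            self._route(ox, oy, om)     # re-insert the displaced body
        self._route(x, y, m)

    def _route(self, x, y, m):
        for c in self.children:
            if c.contains(x, y):
                c.push(x, y, m)
                return
```

During the force pass, a node whose region is small relative to its distance from a body is treated as a single point at its cached centre of mass, which is what gives the algorithm its O(n log n) behaviour.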
Barnes-Hut Key Features
Barnes-Hut Examples and Code Snippets
Community Discussions
Trending Discussions on Barnes-Hut
QUESTION
I am trying to align nodes, but I can't see any option for how to do it. Currently my code is
...ANSWER
Answered 2019-Dec-17 at 14:25 You should be able to set fixed positions in the networkgraph chart by using the initialPositions callback. For it to work well, you also need to set maxIterations to a small value, such as 1.
See demo
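A sketch of what that configuration might look like (option names follow the Highcharts networkgraph API as described in the answer; the coordinates and series data are illustrative):

```javascript
Highcharts.chart('container', {
  chart: { type: 'networkgraph' },
  plotOptions: {
    networkgraph: {
      layoutAlgorithm: {
        // stop the force simulation almost immediately so the
        // positions assigned below are kept
        maxIterations: 1,
        initialPositions: function () {
          // assign each node a fixed starting position
          this.nodes.forEach(function (node, i) {
            node.plotX = 50 + i * 100;
            node.plotY = 150;
          });
        }
      }
    }
  },
  series: [{ data: [['A', 'B'], ['B', 'C']] }]
});
```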
QUESTION
I'm implementing a distributed version of the Barnes-Hut n-body simulation in Chapel. I've already implemented the sequential and shared memory versions which are available on my GitHub.
I'm following the algorithm outlined here (Chapter 7):
- Perform orthogonal recursive bisection and distribute bodies so that each process has an equal amount of work
- Construct locally essential tree on each process
- Compute forces and advance bodies
I have a pretty good idea of how to implement the algorithm in C/MPI, using MPI_Allreduce for bisection and simple message passing for communication between processes (for body transfer). MPI_Comm_split is also a very handy function that allows me to split the processes at each step of ORB.
I'm having some trouble performing ORB using parallel/distributed constructs that Chapel provides. I would need some way to sum (reduce) work across processes (locales in Chapel), splitting processes into groups and process-to-process communication to transfer bodies.
I would be grateful for any advice on how to implement this in Chapel. If another approach would be better for Chapel that would also be great.
...ANSWER
Answered 2019-Mar-30 at 18:39 After a lot of deadlocks and crashes I did manage to implement the algorithm in Chapel. It can be found here: https://github.com/novoselrok/parallel-algorithms/tree/75312981c4514b964d5efc59a45e5eb1b8bc41a6/nbody-bh/dm-chapel
I was not able to use much of the fancy parallel equipment Chapel provides. I relied only on block distributed arrays with sync elements. I also replicated the SPMD model.
In main.chpl I set up all of the necessary arrays that will be used to transfer data. Each array has a corresponding sync array used for synchronization. Then each worker is started with its share of bodies and the previously mentioned arrays. Worker.chpl contains the bulk of the algorithm.
I replaced the MPI_Comm_split functionality with a custom function determineGroupPartners where I do the same thing manually. As for MPI_Allreduce, I found a nice little pattern I could use everywhere:
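The pattern itself is not preserved in this excerpt. As a rough illustration of what an allreduce-style pattern does (sketched here in Python with threads and a barrier, rather than Chapel sync variables): every worker contributes a local value, waits until all contributions have arrived, then reads back the same combined result.

```python
# Allreduce sketch: each worker adds its local value to a shared
# accumulator, waits at a barrier until everyone has contributed, then
# reads the combined total. The lock/barrier pair plays the role that
# sync variables play in the Chapel version.
import threading

def make_allreduce(num_workers):
    state = {"acc": 0.0}
    lock = threading.Lock()
    barrier = threading.Barrier(num_workers)

    def allreduce(local_value):
        with lock:
            state["acc"] += local_value   # contribute local value
        barrier.wait()                    # wait for all contributions
        return state["acc"]               # every worker sees the same sum
    return allreduce

results = []
reduce_op = make_allreduce(4)
threads = [threading.Thread(target=lambda v=v: results.append(reduce_op(v)))
           for v in (1.0, 2.0, 3.0, 4.0)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# all four workers observe the same total
```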
QUESTION
I was under the assumption that if I write my code in Cython using the nogil directive, that would indeed bypass the GIL and I could use a ThreadPoolExecutor to use multiple cores. Or, more likely, I messed up something in the implementation, but I can't seem to figure out what.
I've written a simple n-body simulation using the Barnes-Hut algorithm, and would like to do the lookup in parallel:
...ANSWER
Answered 2018-Mar-01 at 11:25 I was missing a crucial part that actually signaled to release the GIL:
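The answer's snippet is not preserved in this excerpt. The usual cause (an assumption about what it showed) is that declaring a function nogil only makes it safe to call without the GIL; the caller must actually release it with a "with nogil:" block, roughly:

```cython
# Marking a function `nogil` does not release the GIL by itself; the
# caller has to enter a `with nogil:` block for threads to run in parallel.

cdef double lookup(double x) nogil:
    # pure C-level work: no Python objects may be touched here
    return x * x

def parallel_entry(double x):
    cdef double result
    with nogil:          # this is what actually releases the GIL
        result = lookup(x)
    return result
```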
QUESTION
I have implemented an N-body simulation using the Barnes-Hut optimisation in Python which runs at a not-unacceptable speed for N=10,000 bodies, but it's still too slow to watch real time.
A new frame is generated each time-step, and to display the frame we must first calculate the new positions of the bodies, and then draw them all. For N=10,000, it takes about 5 seconds to generate one frame (this is waaay too high as Barnes-Hut should be giving better results). The display is done through the pygame module.
I would thus like to record my simulation and replay it once after it's done at a higher speed.
How can I accomplish this without slowing down the program or exceeding memory limitations?
One potential solution is simply to save the pygame screen each timestep, but this is apparently very slow.
I thought also about storing the list of positions of the bodies generated each time step, and then redrawing all the frames once the simulation finishes. Drawing a frame still takes some time, but not as much as calculating the new positions.
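That second idea can be sketched like this (the pygame drawing itself is omitted and all names are illustrative): stream each timestep's positions to disk as it is produced, so memory stays bounded, then read them back for a fast replay.

```python
# Record per-timestep body positions to disk, then stream them back for
# replay. Frames are appended as they are produced, not held in RAM.
import pickle

def record_frames(path, frames):
    """frames: iterable of per-timestep position lists [(x, y), ...]."""
    with open(path, "wb") as f:
        for positions in frames:
            pickle.dump(positions, f)

def replay_frames(path):
    """Yield recorded frames in order, ready to be drawn at replay speed."""
    with open(path, "rb") as f:
        while True:
            try:
                yield pickle.load(f)
            except EOFError:
                return
```

During replay, each yielded frame would be drawn with pygame at whatever frame rate the display can sustain, since no force calculations remain.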
...ANSWER
Answered 2017-Sep-26 at 21:19 You're comparing pure Python to various programs which call into compiled code somewhere. Pure Python is orders of magnitude slower than code produced by optimizing compilers. Putting the language wars aside, there are cases where Python performs incredibly fast for a scripting language, and there are cases where it performs slowly.
Many of the demanding Python projects I've made have required the use of numpy/pandas/scipy, or interpreters such as PyPy, in order to compile some Python code for a pretty immediate improvement to its execution speed. Compilers tend to produce faster code because they can do optimizations offline rather than trying to perform them under the time pressure of runtime.
A video file is the most versatile and easy to manage format for playback, but does require a bit of glue code. To make one, you need a library to encode your visualization frames into video frames. It seems you are already able to generate images per frame, so the only step remaining is to find a video codec.
FFmpeg can be called through its command-line interface to dump your frames into a video file: http://zulko.github.io/blog/2013/09/27/read-and-write-video-frames-in-python-using-ffmpeg/
The example code for writing is:
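The code from the linked post is not reproduced in this excerpt; a minimal sketch of the same approach (piping raw RGB frames into an ffmpeg subprocess; the resolution, rate, codec, and filename are illustrative) looks like:

```python
# Pipe raw RGB frames into ffmpeg's stdin and let it encode a video file.
# Requires the ffmpeg binary on PATH; all parameters are illustrative.
import subprocess

def ffmpeg_writer_cmd(width, height, fps, outfile):
    return [
        "ffmpeg", "-y",
        "-f", "rawvideo",            # input: raw bytes, no container
        "-pix_fmt", "rgb24",         # 3 bytes per pixel
        "-s", f"{width}x{height}",   # frame size
        "-r", str(fps),              # frame rate
        "-i", "-",                   # read frames from stdin
        "-an",                       # no audio
        "-vcodec", "libx264",
        outfile,
    ]

def write_video(frames, width, height, fps, outfile):
    """frames: iterable of bytes objects, each width*height*3 bytes of RGB."""
    proc = subprocess.Popen(ffmpeg_writer_cmd(width, height, fps, outfile),
                            stdin=subprocess.PIPE)
    for frame in frames:
        proc.stdin.write(frame)
    proc.stdin.close()
    proc.wait()
```

With pygame, each frame's bytes can be obtained from the display surface (for example via pygame.image.tostring) before being written to the pipe.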
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install Barnes-Hut
You can use Barnes-Hut like any standard Java library. Please include the jar files in your classpath. You can also use any IDE to run and debug the Barnes-Hut component as you would with any other Java program. Best practice is to use a build tool that supports dependency management, such as Maven or Gradle. For Maven installation, please refer to maven.apache.org; for Gradle installation, please refer to gradle.org.