WebMonkeys | Massively parallel GPU programming in JavaScript, simple | Architecture library
kandi X-RAY | WebMonkeys Summary
Allows you to spawn thousands of parallel tasks on the GPU with the simplest, dumbest API possible. It works in the browser (with browserify) and on Node.js. It is ES5-compatible and doesn't require any WebGL extension.
WebMonkeys Examples and Code Snippets
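A minimal usage sketch, assuming the set/work/get API and the GLSL-like task language described in the project README; the method names and exact task syntax should be verified against the installed release:

```javascript
// Spawn parallel tasks on the GPU with WebMonkeys (API names assumed from the README).
const monkeys = require("WebMonkeys")();

// Send an array of numbers to the GPU.
monkeys.set("nums", [1, 2, 3, 4, 5, 6, 7, 8]);

// Ask 8 parallel tasks ("monkeys") to each double one element; the task is written
// in WebMonkeys' GLSL-like language, where i is the index of the current task.
monkeys.work(8, "nums(i) := nums(i) * 2.0;");

// Read the result back into JavaScript.
console.log(monkeys.get("nums")); // expected: [2, 4, 6, 8, 10, 12, 14, 16]
```

Each of the 8 tasks receives its own index i, so the whole array is transformed in one parallel pass.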
Community Discussions
Trending Discussions on WebMonkeys
QUESTION
There are many abstractions around WebGL for running parallel processing, it seems, e.g.:
- https://github.com/MaiaVictor/WebMonkeys
- https://github.com/gpujs/gpu.js
- https://github.com/turbo/js
But I am having a hard time understanding what a simple and complete example of parallelism would look like in plain GLSL code for WebGL. I don't have much experience with WebGL but I understand that there are fragment and vertex shaders and how to load them into a WebGL context from JavaScript. I don't know how to use the shaders or which one is supposed to do the parallel processing.
I am wondering if one could demonstrate a simple hello-world example of a parallel add operation, essentially this but in parallel form, using GLSL / WebGL shaders / however it should be done.
ANSWER
Answered 2018-Apr-25 at 14:40

First off, WebGL only rasterizes points, lines, and triangles. Using WebGL for non-rasterization work (GPGPU) is basically a matter of realizing that the inputs to WebGL are data from arrays, and that the output, a 2D rectangle of pixels, is really just a 2D array too. By creatively providing non-graphics data and creatively rasterizing that data, you can do non-graphics math.
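To make that "creative rasterization" concrete, here is a minimal sketch of a fragment shader written as the GLSL string you would compile from JavaScript; the uniform names and the doubling operation are illustrative assumptions:

```javascript
// GLSL fragment shader, as the string you would pass to gl.shaderSource().
// Drawing a quad that covers N pixels runs main() N times, in parallel on the GPU;
// each invocation reads one texel of "data" and writes a transformed value to its
// output pixel. (With default 8-bit textures values are normalized to [0, 1]; real
// number-crunching usually uses float textures or packs numbers into RGBA bytes.)
const fragmentShaderSource = `
  precision mediump float;
  uniform sampler2D u_data;    // source data packed into a texture
  uniform vec2 u_textureSize;  // width and height of that texture, in texels
  void main() {
    vec2 texcoord = gl_FragCoord.xy / u_textureSize; // which element this invocation handles
    vec4 value = texture2D(u_data, texcoord);        // read the element
    gl_FragColor = value * 2.0;                      // write the transformed element
  }
`;
```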
WebGL is parallel in two ways:

1. It runs on a different processor, the GPU. While it's computing something, your CPU is free to do something else.
2. GPUs themselves compute in parallel. A good example: if you rasterize a triangle that covers 100 pixels, the GPU can process each of those pixels in parallel, up to the limit of that GPU. Without digging too deeply, an NVidia GTX 1080 has 2560 cores, so assuming they are not specialized, and assuming the best case, it could compute 2560 things in parallel.

As for an example, all WebGL apps use parallel processing per points (1) and (2) above without doing anything special.
Adding 10 to 10000 elements in place, though, is not what WebGL is good at, because WebGL can't read from and write to the same data during one operation. In other words, your example would need to read from one buffer and write its results to a different one.
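A minimal sketch of that constraint in plain JavaScript (the array size and the +10 follow the question; on the GPU, src and dst would be backed by two separate textures):

```javascript
// CPU equivalent of one GPU pass: read from one buffer, write to another.
// On the GPU, src would be a texture being sampled and dst the texture attached
// to the framebuffer being rendered to; the 10000 iterations run in parallel.
const size = 10000;
const src = Float32Array.from({ length: size }, (_, i) => i);
const dst = new Float32Array(size);

for (let i = 0; i < size; ++i) {
  dst[i] = src[i] + 10; // never written back into src during the pass
}

console.log(dst[0], dst[size - 1]); // 10 10009
```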
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported