gpu.js | GPU Accelerated JavaScript | GPU library

by gpujs | JavaScript | Version: 2.16.0 | License: MIT

kandi X-RAY | gpu.js Summary

gpu.js is a JavaScript library typically used in Institutions, Learning, Education, Hardware, GPU, and WebGL applications. gpu.js has no reported vulnerabilities, it has a permissive license, and it has medium support. However, gpu.js has 3 bugs. You can install it with 'npm i gpu.js' or download it from GitHub or npm.

Creates a GPU-accelerated kernel, transpiled from a JavaScript function, that computes a single element of a 512 x 512 matrix (2D array). The kernel functions are run in parallel on the GPU, often resulting in very fast computations. You can run a benchmark of this online; typically it runs 1-15x faster depending on your hardware. Matrix multiplication (multiplying two matrices of size 512 x 512) written in GPU.js:
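A sketch of that kernel, following the createKernel API documented by gpu.js (the input matrices a and b are assumed here to be 512 x 512 arrays of numbers):

const { GPU } = require('gpu.js');
const gpu = new GPU();

// Each kernel invocation computes one element c[this.thread.y][this.thread.x].
const multiplyMatrix = gpu.createKernel(function (a, b) {
  let sum = 0;
  for (let i = 0; i < 512; i++) {
    sum += a[this.thread.y][i] * b[i][this.thread.x];
  }
  return sum;
}).setOutput([512, 512]);

const c = multiplyMatrix(a, b); // a and b: 512 x 512 arrays of numbers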

Support

              gpu.js has a medium active ecosystem.
              It has 14602 star(s) with 688 fork(s). There are 252 watchers for this library.
              It had no major release in the last 12 months.
There are 186 open issues and 408 have been closed. On average, issues are closed in 221 days. There are 21 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of gpu.js is 2.16.0

Quality

gpu.js has 3 bugs (0 blocker, 0 critical, 3 major, 0 minor) and 1 code smell.

Security

              gpu.js has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              gpu.js code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              gpu.js is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

              gpu.js releases are available to install and integrate.
              Deployable package is available in npm.
              Installation instructions, examples and code snippets are available.
              gpu.js saves you 1850 person hours of effort in developing the same functionality from scratch.
              It has 4083 lines of code, 0 functions and 421 files.
              It has low code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed gpu.js and discovered the below as its top functions. This is intended to give you an instant insight into gpu.js implemented functionality, and help decide if they suit your requirements.
• Generates a GL kernel string.
• Provides an emitter for debugging.
• Serializes the kernel.
• Flattens the AST tree.
• Decorates the kernel.
• tap - side extension
• Switches the given kernel with the specified args.
• Converts an argument to a string.
• Runs the kernel.
• Listens for an extension.

            gpu.js Key Features

            No Key Features are available at this moment for gpu.js.

            gpu.js Examples and Code Snippets

            Usage
TypeScript | Lines of Code: 6 | License: Permissive (MIT)
            const renderer = GPUjsRealRenderer;
            const LineGraph = new renderer.LineGraph(options); // For example
            
            const GPU = require('gpu.js').GPU;
            const LineGraph = new require('gpujs-real-renderer').RealLineGraph({GPU: GPU /**The most important part*/, canva  
            How to pass off heavy JavaScript math operations to GPU with GPU.js
JavaScript | Lines of Code: 39 | License: Strong Copyleft (CC BY-SA 4.0)
pow should be Math.pow()

let x, y, z should be declared on their own:
            
            let x = 0
            let y = 0
            let z = 0
            
            const { GPU } = require('gpu.js')
            const gpu = new GPU()
            
            const tmp = gpu.createKernel(func

            Community Discussions

            QUESTION

            Nodejs GPU.js slower using GPU than using CPU
            Asked 2020-Dec-19 at 14:45

            I have run a benchmark to compare the use of CPU and GPU in nodejs with GPU.js. The NVidia icon shows GPU use in the first console timer, but it is slower than the CPU (second timer).

            ...

            ANSWER

            Answered 2020-Dec-19 at 14:45

This isn't the right way to benchmark CPU vs GPU:

1. The GPU needs warmup time, so if you really want to benchmark, compare both over 1000 executions rather than a single execution (see the sketch after this list).

2. The GPU won't always be faster; it depends on the task and the GPU's RAM size.

3. Finally, as Keith mentions in the comments, the GPU works better than the CPU on small parallel tasks and large batches.
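A minimal sketch of a fairer comparison, assuming a made-up kernel and sizes (not the asker's code): warm each kernel up once, then time many executions.

const { GPU } = require('gpu.js');

// Build the same kernel for both backends so only the execution mode differs.
function makeKernel(engine) {
  return engine.createKernel(function (a) {
    let v = a[this.thread.x];
    for (let i = 0; i < 1000; i++) {
      v = Math.sin(v) + Math.cos(v);
    }
    return v;
  }).setOutput([4096]);
}

// One warmup run (compilation + data upload), then time the steady state.
function bench(kernel, data, runs = 1000) {
  kernel(data);
  const start = Date.now();
  for (let i = 0; i < runs; i++) kernel(data);
  return Date.now() - start;
}

const data = new Float32Array(4096).fill(0.5);
console.log('gpu', bench(makeKernel(new GPU()), data));
console.log('cpu', bench(makeKernel(new GPU({ mode: 'cpu' })), data));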

            Source https://stackoverflow.com/questions/65370419

            QUESTION

            How to pass off heavy JavaScript math operations to GPU with GPU.js
            Asked 2020-Sep-26 at 08:23

            Background
I've built a little web-based application that pops up windows to display your webcam(s). I wanted to add the ability to chroma key your feed and have been successful in getting several different algorithms working. The best algorithm I have found, however, is very resource intensive for JavaScript, a single-threaded environment.

            Question
Is there a way to offload the intensive math operations to the GPU? I've tried getting GPU.js to work but I keep getting all kinds of errors. Here are the functions I would like to have the GPU run:

            ...

            ANSWER

            Answered 2020-Sep-25 at 01:10
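The accepted answer's code is not reproduced above, but a rough, hypothetical sketch of the general approach — a per-pixel kernel in gpu.js graphical mode, with an invented key color and distance threshold — might look like this:

const { GPU } = require('gpu.js');
const gpu = new GPU();

// Hypothetical chroma-key kernel: pixels close to the key color become transparent.
const chromaKey = gpu.createKernel(function (frame, keyR, keyG, keyB, threshold) {
  const pixel = frame[this.thread.y][this.thread.x]; // [r, g, b, a] in 0..1
  const dr = pixel[0] - keyR;
  const dg = pixel[1] - keyG;
  const db = pixel[2] - keyB;
  if (Math.sqrt(dr * dr + dg * dg + db * db) < threshold) {
    this.color(0, 0, 0, 0);
  } else {
    this.color(pixel[0], pixel[1], pixel[2], pixel[3]);
  }
}, { graphical: true, output: [640, 480] });

// chromaKey(videoElement, 0, 1, 0, 0.4); // e.g. key out green from a webcam <video>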

            QUESTION

            Does cudaMallocManaged() create a synchronized buffer in RAM and VRAM?
            Asked 2020-Sep-16 at 15:13

            In an Nvidia developer blog: An Even Easier Introduction to CUDA the writer explains:

            To compute on the GPU, I need to allocate memory accessible by the GPU. Unified Memory in CUDA makes this easy by providing a single memory space accessible by all GPUs and CPUs in your system. To allocate data in unified memory, call cudaMallocManaged(), which returns a pointer that you can access from host (CPU) code or device (GPU) code.

            I found this both interesting (since it seems potentially convenient) and confusing:

            returns a pointer that you can access from host (CPU) code or device (GPU) code.

            For this to be true, it seems like cudaMallocManaged() must be syncing 2 buffers across VRAM and RAM. Is this the case? Or is my understanding lacking?

            In my work so far with GPU acceleration on top of the WebGL abstraction layer via GPU.js, I learned the distinct performance difference between passing VRAM based buffers (textures in WebGL) from kernel to kernel (keeping the buffer on the GPU, highly performant) and retrieving the buffer value outside of the kernels to access it in RAM through JavaScript (pulling the buffer off the GPU, taking a performance hit since buffers in VRAM on the GPU don't magically move to RAM).
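(As an aside, a minimal sketch of the texture-passing pattern described above, using gpu.js's pipeline option; the kernels and sizes are illustrative only:)

const { GPU } = require('gpu.js');
const gpu = new GPU();
const settings = { output: [1024], pipeline: true }; // pipeline keeps results as GPU textures

const square = gpu.createKernel(function (a) {
  return a[this.thread.x] * a[this.thread.x];
}, settings);
const double = gpu.createKernel(function (a) {
  return a[this.thread.x] * 2;
}, settings);

const tex = square(new Float32Array(1024).fill(3)); // result stays in VRAM
const tex2 = double(tex);                           // kernel-to-kernel, no CPU round trip
const result = tex2.toArray();                      // pulling it back to RAM is the slow step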

            Forgive my highly abstracted understanding / description of the topic, since I know most CUDA / C++ devs have a much more granular understanding of the process.

            • So is cudaMallocManaged() creating synchronized buffers in both RAM and VRAM for convenience of the developer?
            • If so, wouldn't doing so come with an unnecessary cost in cases where we might never need to touch that buffer with the CPU?
            • Does the compiler perhaps just check if we ever reference that buffer from CPU and never create the CPU side of the synced buffer if it's not needed?
            • Or do I have it all wrong? Are we not even talking VRAM? How does this work?
            ...

            ANSWER

            Answered 2020-Sep-16 at 15:13

            So is cudaMallocManaged() creating synchronized buffers in both RAM and VRAM for convenience of the developer?

            Yes, more or less. The "synchronization" is referred to in the managed memory model as migration of data. Virtual address carveouts are made for all visible processors, and the data is migrated (i.e. moved to, and provided a physical allocation for) the processor that attempts to access it.

            If so, wouldn't doing so come with an unnecessary cost in cases where we might never need to touch that buffer with the CPU?

            If you never need to touch the buffer on the CPU, then what will happen is that the VA carveout will be made in the CPU VA space, but no physical allocation will be made for it. When the GPU attempts to actually access the data, it will cause the allocation to "appear" and use up GPU memory. Although there are "costs" to be sure, there is no usage of CPU (physical) memory in this case. Furthermore, once instantiated in GPU memory, there should be no ongoing additional cost for the GPU to access it; it should run at "full" speed. The instantiation/migration process is a complex one, and what I am describing here is what I would consider the "principal" modality or behavior. There are many factors that could affect this.

            Does the compiler perhaps just check if we ever reference that buffer from CPU and never create the CPU side of the synced buffer if it's not needed?

            No, this is managed by the runtime, not compile time.

            Or do I have it all wrong? Are we not even talking VRAM? How does this work?

            No you don't have it all wrong. Yes we are talking about VRAM.

The blog you reference barely touches on managed memory, which is a fairly involved subject. There are numerous online resources to learn more about it; you might want to review some of them. There are good GTC presentations on managed memory, and there is also an entire section of the CUDA programming guide covering managed memory.

            Source https://stackoverflow.com/questions/63922776

            QUESTION

            How to send data from a dynamic php table inside an anchor tag to another page
            Asked 2019-Nov-01 at 18:01

How do I send data from a dynamic PHP table inside an anchor tag to another page (description_of_gpu.php), and how do I retrieve that data on the new page?

I am trying to send the variable generated inside the table-data tag to another page; the variable is itself retrieved from a MySQL database. Once I get that data, I will search through my database for the same "NameOfVcard" and display its description.

            Code inside "gpu_table_page.php"

            ...

            ANSWER

            Answered 2019-Nov-01 at 18:01

As the PHP is using double quotes around the strings, you can embed the variable within the string without needing to escape it, so you can modify

            Source https://stackoverflow.com/questions/58660631

            QUESTION

            How to import GPU.js?
            Asked 2019-Mar-28 at 16:56

            I'm trying to use GPU.js in Vue. I don't get how I have to import it though. I'm trying this right now:

            ...

            ANSWER

            Answered 2019-Mar-28 at 16:27
            import { GPU } from "gpu.js";
            const gpu = new GPU();
            

            Source https://stackoverflow.com/questions/55402071

            QUESTION

            How can I implement nearest neighbor interpolation in a JavaScript matrix operation?
            Asked 2018-Sep-11 at 02:10

The following code uses GPU.js, a wrapper for WebGL that makes it easy to run matrix operations by simply writing JS functions. I render an image on the canvas, but I want to resize it. I've read about nearest-neighbor interpolation but I'm confused about how to implement it. I've already set up the resize kernel; all that's left to be done is the interpolation logic.

            Notes:

            • the current indexes are available within the kernel function as this.thread.x, this.thread.y, and this.thread.z, depending on the dimensions of the matrix your kernel is computing.

            • You'll notice the canvas is sized weird. This is a "feature" of GPU.js related to WebGL texture handling (I think they're planning on ironing that out later).

            • Edit: Made progress but not quite perfected: http://jsfiddle.net/0dusaytk/59/

            ...

            ANSWER

            Answered 2018-Sep-11 at 02:10

I added a second canvas with pixelated rendering in order to compare this implementation with the browser's default CSS method.

            Demo: https://codepen.io/rafaelcastrocouto/pen/pOaaEd
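For illustration only (not the linked demo), a nearest-neighbor resize kernel in gpu.js can map each output pixel back to its closest source pixel; the sizes below are assumed:

const { GPU } = require('gpu.js');
const gpu = new GPU();

const scale = 4;
const srcWidth = 64, srcHeight = 64; // assumed source dimensions

const resize = gpu.createKernel(function (src, scale) {
  // Nearest neighbor: each output coordinate maps to the nearest source index.
  const x = Math.floor(this.thread.x / scale);
  const y = Math.floor(this.thread.y / scale);
  return src[y][x];
}).setOutput([srcWidth * scale, srcHeight * scale]);

// const big = resize(smallMatrix, scale); // smallMatrix: 64 x 64 array of values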

            Source https://stackoverflow.com/questions/52265356

            QUESTION

            gpus.js (webgl?) float32 issue
            Asked 2018-Sep-05 at 20:09

I'm probably missing something obvious, but I'm experimenting with gpu.js and getting some strange results. I just want to make sure I'm not doing something obviously stupid (which is likely).

I'm not sure if this is an issue with what I'm doing, or with the way calculations are performed when done via gpu.js using WebGL.

            I create a new GPU and new kernel:

            ...

            ANSWER

            Answered 2017-Sep-04 at 15:12

An IEEE 754 single-precision (32-bit) floating point value consists of 24 significand bits and 8 exponent bits.

4294967295 is 0xffffffff (integral), which can't be stored in a 32-bit float with full accuracy, because the format only has 24 significand bits.
4294967296 is 0x100000000 (integral), which can be stored in a 32-bit float, because it is 0x4f800000 (floating point).

By comparison, an IEEE 754 double-precision (64-bit) floating point value consists of 53 significand bits and 11 exponent bits.
Therefore, a 64-bit floating point value can store the value 4294967295 exactly (0x41efffffffe00000).
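This is easy to check in plain JavaScript with Math.fround, which rounds a number to single precision:

console.log(Math.fround(4294967295)); // 4294967296 — 0xffffffff is not exactly representable in float32
console.log(Math.fround(4294967296)); // 4294967296 — 0x100000000 is exactly representable
console.log(4294967295);              // 4294967295 — a 64-bit double stores it exactly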

            Source https://stackoverflow.com/questions/46038047

            QUESTION

            gpu decimal precision
            Asked 2018-May-31 at 07:06

I have created a Mandelbrot set in JavaScript which uses the GPU, but because JavaScript decimals are not so accurate, when I zoom in too much the screen goes pixelly. If I were programming it on the CPU it would not be so hard, but because I am using gpu.js I can't use strings and therefore none of the decimal libraries I know.

            I only want to increase the accuracy, not make it endless.

            Is there any way to create a more precise float with multiple floats (I can not use strings because of the library's limitation) so that I can:

            • multiply
            • add
            • use powers

image of the pixelation

            ...

            ANSWER

            Answered 2018-May-31 at 07:06

Googling gpu and "arbitrary precision" shows, IMHO, that this problem is not solved yet.

So you can:

• write your own library (for example, float-float arithmetic; a rough sketch follows)
• use CPU arbitrary precision libraries
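As a rough sketch of the "write your own" route (an assumption, not something gpu.js provides), the classic two-sum step behind float-float arithmetic keeps the rounding error of an addition in a second float; Math.fround is used here to emulate 32-bit rounding:

const f = Math.fround; // round every intermediate to single precision

// Knuth two-sum: returns the rounded sum and the error it lost, so a value
// can be carried as a (hi, lo) pair of 32-bit floats for extra precision.
function twoSum(a, b) {
  const s = f(a + b);
  const bb = f(s - a);
  const err = f(f(a - f(s - bb)) + f(b - bb));
  return [s, err];
}

console.log(twoSum(1.0, 1e-9)); // [1, ~1e-9] — the low word keeps the part float32 dropped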

            Source https://stackoverflow.com/questions/50592700

            QUESTION

            Can I skip the normal syntax and work with geometry buffers in three.js for faster performance?
            Asked 2018-May-31 at 07:04

I'm new to the area of geometry generation and manipulation and I'm planning on doing this on an intricate and large scale. I know the basic way of doing this is as shown in the answer to this question.

            ...

            ANSWER

            Answered 2018-May-31 at 07:04

            As ThJim01 mentioned in the comment, THREE.BufferGeometry is the way to go, but if you insist on using THREE.Geometry to initialize your list of triangles, you can use the BufferGeometry.fromGeometry function to generate the BufferGeometry from the Geometry you originally made.
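A minimal sketch of that conversion, using the older three.js API the answer refers to (THREE.Geometry was removed in later releases):

const THREE = require('three'); // an older release that still ships THREE.Geometry

const geometry = new THREE.Geometry();
geometry.vertices.push(
  new THREE.Vector3(0, 0, 0),
  new THREE.Vector3(1, 0, 0),
  new THREE.Vector3(0, 1, 0)
);
geometry.faces.push(new THREE.Face3(0, 1, 2));

// Convert to the faster, flat-buffer representation for rendering.
const bufferGeometry = new THREE.BufferGeometry().fromGeometry(geometry);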

            Source https://stackoverflow.com/questions/50610102

            QUESTION

            How can I properly write this shader function in JS?
            Asked 2018-May-31 at 06:11
            What I want to happen:

            For testing a game art style I thought of, I want to render a 3D world in pixel-art form. So for example, take a scene like this (but rendered with certain coloring / style so as to look good once pixelated):

            And make it look something like this:

            By playing with different ways of styling the 3D source I think the pixelated output could look nice. Of course to get this effect one just sizes the image down to ~80p and upscales it to 1080p with nearest neighbor resampling. But it's more efficient to render straight to an 80p canvas to begin with and just do the upscaling.

Resizing a bitmap with nearest-neighbor sampling is not typically how one would use a shader, but its performance is better than any other way I've found to make such a conversion in real time.

            My code:

My buffer for the bitmap is stored in row-major order, as r1, g1, b1, a1, r2, g2, b2, a2..., and I'm using gpu.js, which essentially converts this JS function into a shader. My goal is to take one bitmap and return one at a larger scale with nearest-neighbor scaling, so each pixel becomes a 2x2 square, or 3x3, and so on. Assume inputBuffer is a scaled fraction of the size of the output determined by the setOutput method.

            ...

            ANSWER

            Answered 2018-May-24 at 06:26

            I think the function should look like:
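The answer's actual code is not included here; as a rough sketch under the buffer layout described in the question (flat row-major RGBA, integer scale factor, dimensions assumed), such a kernel might look like:

const { GPU } = require('gpu.js');
const gpu = new GPU();

const srcWidth = 80, srcHeight = 45, scale = 4; // assumed dimensions

const upscale = gpu.createKernel(function (inputBuffer, srcWidth, scale) {
  // this.thread.z selects the channel (r, g, b or a) in the flat RGBA layout.
  const x = Math.floor(this.thread.x / scale);
  const y = Math.floor(this.thread.y / scale);
  return inputBuffer[(y * srcWidth + x) * 4 + this.thread.z];
}).setOutput([srcWidth * scale, srcHeight * scale, 4]);

// const scaled = upscale(inputBuffer, srcWidth, scale);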

            Source https://stackoverflow.com/questions/50430575

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install gpu.js

            On Linux, ensure you have the correct header files installed: sudo apt install mesa-common-dev libxi-dev (adjust for your distribution).

            Support

gpu.js kernel arguments can be:

• Numbers
• 1d, 2d, or 3d Arrays of numbers
• Arrays of Array, Float32Array, Int16Array, Int8Array, Uint16Array, Uint8Array
• Pre-flattened 2d or 3d Arrays using 'Input', for faster upload of arrays. Example: const { input } = require('gpu.js'); const value = input(flattenedArray, [width, height, depth]);
• HTML Image
• Array of HTML Images
• Video Element (new in V2!)

To define an argument, simply add it to the kernel function like regular JavaScript.
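A minimal sketch of defining and passing arguments, including the 'Input' wrapper mentioned above (the sizes and data are made up):

const { GPU, input } = require('gpu.js');
const gpu = new GPU();

const width = 4, height = 4;

// Arguments are declared just like regular JavaScript parameters.
const kernel = gpu.createKernel(function (grid, multiplier) {
  return grid[this.thread.y][this.thread.x] * multiplier;
}).setOutput([width, height]);

// Pre-flattened data wrapped with input() for faster upload.
const flattenedArray = new Float32Array(width * height).fill(1);
console.log(kernel(input(flattenedArray, [width, height]), 2));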
            Install
          • npm

            npm i gpu.js

          • CLONE
          • HTTPS

            https://github.com/gpujs/gpu.js.git

          • CLI

            gh repo clone gpujs/gpu.js

          • sshUrl

            git@github.com:gpujs/gpu.js.git
