RayMarching | The main code you can play around with is in RayMarching.shader

by ArtemOnigiri | C# | Version: Current | License: MIT

kandi X-RAY | RayMarching Summary

RayMarching is a C# library. It has no reported bugs or vulnerabilities, a permissive license, and low support. You can download it from GitHub.

The main code you can play around with is in RayMarching.shader, lines 31 through 133; everything else is supporting code that makes the shaders work correctly with the Unity camera.
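
For orientation, here is a minimal, hypothetical sketch (not the repository's actual script) of what that supporting C# code typically looks like in Unity: an image-effect component that hands the camera's matrices to a raymarching material and draws a fullscreen pass. The property names _CamToWorld and _CamInvProjection are assumptions for illustration.

    using UnityEngine;

    // Hypothetical helper, not the repository's code: feeds camera data to a
    // raymarching shader and renders it as a fullscreen image effect.
    [RequireComponent(typeof(Camera))]
    public class RayMarchingCamera : MonoBehaviour
    {
        public Material rayMarchMaterial; // material built from RayMarching.shader

        private void OnRenderImage(RenderTexture source, RenderTexture destination)
        {
            if (rayMarchMaterial == null)
            {
                Graphics.Blit(source, destination);
                return;
            }

            Camera cam = GetComponent<Camera>();

            // Assumed shader properties: the shader reconstructs per-pixel rays
            // from the camera-to-world matrix and the inverse projection.
            rayMarchMaterial.SetMatrix("_CamToWorld", cam.cameraToWorldMatrix);
            rayMarchMaterial.SetMatrix("_CamInvProjection", cam.projectionMatrix.inverse);

            // Fullscreen pass: the fragment shader does the actual ray marching.
            Graphics.Blit(source, destination, rayMarchMaterial);
        }
    }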

Support

RayMarching has a low active ecosystem.
It has 70 stars, 10 forks, and 9 watchers.
It has had no major release in the last 6 months.
There are 0 open issues and 2 have been closed. On average, issues are closed in 6 days. There are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of RayMarching is current.

Quality

              RayMarching has no bugs reported.

Security

              RayMarching has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              RayMarching is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

              RayMarching releases are not available. You will need to build from source code and install.


            RayMarching Key Features

            No Key Features are available at this moment for RayMarching.

            RayMarching Examples and Code Snippets

            No Code Snippets are available at this moment for RayMarching.

            Community Discussions

            QUESTION

            Three.js - Scaling a plane to full screen
            Asked 2020-Dec-10 at 22:15

            I am adding a plane to the scene like this:

            ...

            ANSWER

            Answered 2020-Sep-13 at 16:35

            You can easily achieve such a fullscreen effect by using the following setup:

            Source https://stackoverflow.com/questions/63872740

            QUESTION

            Wobble in volumetric fixed step raymarching
            Asked 2020-Jan-17 at 17:43

            I'm running into a bug with my shaders. For the vertex:

            ...

            ANSWER

            Answered 2020-Jan-17 at 16:14

The issue is caused because (from the OpenGL ES Shading Language 1.00 Specification - 4.3.5 Varying):

varying variables are set per vertex and are interpolated in a perspective-correct manner over the primitive being rendered.

To make your algorithm work, you have to interpolate the direction of the ray without perspective correction (noperspective).
Since GLSL ES 1.00 does not provide interpolation qualifiers, you have to find a workaround.

            Compute the ray in the fragment shader:
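
The answer's GLSL snippet is not reproduced above. As a rough, hedged illustration of the same idea (derive the ray per pixel instead of relying on a perspective-interpolated varying), here is a C# sketch that unprojects a pixel through an inverse view-projection matrix; the matrix, names, and conventions are assumptions, not the answer's code:

    using System.Numerics;

    static class PerPixelRay
    {
        // Sketch only: computes a world-space ray direction for one pixel by
        // unprojecting its NDC coordinates, mirroring what a fragment shader
        // would do instead of using an interpolated varying.
        // Note: System.Numerics uses the row-vector convention (v * M).
        public static Vector3 RayDirection(int px, int py, int width, int height,
                                           Matrix4x4 inverseViewProjection,
                                           Vector3 cameraPosition)
        {
            // Pixel center to normalized device coordinates in [-1, 1].
            float ndcX = (px + 0.5f) / width * 2f - 1f;
            float ndcY = (py + 0.5f) / height * 2f - 1f;

            // Unproject a point on the far side of the frustum back to world space.
            Vector4 clip = new Vector4(ndcX, ndcY, 1f, 1f);
            Vector4 world = Vector4.Transform(clip, inverseViewProjection);
            Vector3 target = new Vector3(world.X, world.Y, world.Z) / world.W;

            return Vector3.Normalize(target - cameraPosition);
        }
    }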

            Source https://stackoverflow.com/questions/59786805

            QUESTION

            Flow Control in HLSL
            Asked 2019-Nov-16 at 04:06

I recently read this paper about raymarching clouds (careful, it's a PDF, in case you don't want that: http://www.diva-portal.org/smash/get/diva2:1223894/FULLTEXT01.pdf) where the author goes on about optimizing the algorithm via reprojection (page 22ff.). He states that by raymarching only 1/16th of all pixels per frame (the selected pixel hopping around in a 4x4 grid) and reprojecting the rest, he got about a tenfold performance increase.

I have now tried implementing this in Unreal Engine 4 (custom HLSL shader) and I have the raymarching as well as the reprojection working. However, I'm stuck at actually running the raymarching only on the necessary pixels. As far as I'm aware, with any branching in HLSL both sides of the branch will be calculated and one will be thrown away. I therefore can't do something like this pseudo code in the pixel shader: if(!PixelReprojection) { return 0; } else { return Raymarch(...); } as it will calculate the Raymarch even for pixels that are getting reprojected.

I don't see any other way, though, to achieve this... Is there any kind of branching in HLSL that allows this? It can't be static, as the pixels subjected to raymarching and reprojection change every frame. I'm really curious how the author could have achieved this tenfold increase in performance, as he is writing the code on a GPU too, as far as I'm aware.

            I´d greatly appreciate any kind of input here.

            Regards, foodius

            ...

            ANSWER

            Answered 2019-Nov-16 at 04:06

            TLDR: use the attribute [branch] in front of your if-statement.

            As far as I´m aware with any branching in HLSL both sides of the branch will be calculated and one will be thrown away

            This is actually not fully correct. Yes, a branch can be flattened, which means that both sides are calculated as you described, but it can also be not flattened (called dynamic branching).

            Now, not flattening a branch has some disadvantages: If two threads in the same wave take different paths in the branch, a second wave has to be spawned, because all threads in a wave have to run the same code (so some threads would be moved to the newly spawned wave). Therefore, in such a case, a lot of threads are "disabled" (meaning they run the same code as the other threads in their wave, but not actually writing anything into memory). Nonetheless, this dynamic kind of branching may still be faster than running both sides of the branch, but this depends on the actual code.

One can even remove this disadvantage by smart shader design (namely, ensure that the threads that take one side of the branch are in the same wave, so no divergence happens inside a wave; this, however, requires some knowledge of the underlying hardware, like wave size and so on).

            In any case: If not stated otherwise, the HLSL compiler decides on its own, whether a branch uses dynamic branching or is flattened. One can, however, enforce one of the two ways by adding an attribute to the if-statement, eg:

            Source https://stackoverflow.com/questions/58884776

            QUESTION

            Rendering compute-shader output to screen
            Asked 2019-Sep-22 at 18:16

            I'm trying to set up a pipeline to do some raymarching-rendering.

            At first I set up a vertex and a geometry shader to take just 1 arbitrary float, and make a quad so I can use just the fragment shader, and input data via passing it through all shaders or uniforms.

But then I came across the compute shader and found some tutorials, but they all did nearly the same thing: making a quad and rendering the compute shader's output to it. I think that's pretty stupid if you have the possibility to render how you want with the compute shader and still have to do tricks to get your result.

After some further research I found the function 'glFramebufferImage2D' and, as far as I understood it, it attaches an image texture (the one I wrote to with my compute shader, in my case) to the framebuffer (the buffer that's displayed, as far as I understood it). So I don't have to do the quad-generating trick. But the code I wrote just shows a black screen. Did I get something wrong, or did I miss something in my code?

This is my code: (I'm not worrying about warnings and detaching shader programs yet. I just wanted to test the concept for now.)

            main.rs

            ...

            ANSWER

            Answered 2019-Sep-21 at 22:19

            There is an issue when you generate the texture. The initial value of TEXTURE_MIN_FILTER is NEAREST_MIPMAP_LINEAR. If you don't change it and you don't create mipmaps, then the texture is not "complete".
            Set the TEXTURE_MIN_FILTER to either NEAREST or LINEAR to solve the issue:
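
The question's code is in Rust, but the same fix can be sketched in C# with the OpenTK bindings (OpenTK and the Rgba32f format are assumptions about the stack, not part of the question): allocate the texture the compute shader writes to and give it a non-mipmapped min filter so that it is complete.

    using System;
    using OpenTK.Graphics.OpenGL4;

    static class ComputeOutputTexture
    {
        // Sketch: a texture written by a compute shader and sampled/attached later
        // must be "complete"; the default min filter expects mipmaps, so set it
        // to LINEAR (or NEAREST) when no mipmaps are generated.
        public static int Create(int width, int height)
        {
            int tex = GL.GenTexture();
            GL.BindTexture(TextureTarget.Texture2D, tex);

            // Allocate storage for the compute shader's output image.
            GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.Rgba32f,
                          width, height, 0, PixelFormat.Rgba, PixelType.Float, IntPtr.Zero);

            // The fix from the answer: use a filter that does not require mipmaps.
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter,
                            (int)TextureMinFilter.Linear);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter,
                            (int)TextureMagFilter.Linear);

            return tex;
        }
    }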

            Source https://stackoverflow.com/questions/58042393

            QUESTION

            How to use/Convert C# class in Alea?
            Asked 2019-May-17 at 10:16

I'm working on a raymarching program (homework) and I want it to go faster, so I use the GPU with the Alea extension. I have a problem because I can't use the camera class in the parallel for (GPU). Thanks for your help.

I have already tried changing the tag of the class and creating the objects inside the parallel for.

            ...

            ANSWER

            Answered 2019-May-17 at 10:16

You should set some variables to be the camera's variables and use them in the parallel for loop; you should also make static versions of the camera's functions and use them in the loop. While I'm not sure this will fix it, it's something I think you should try, because you said you can't use the camera class in the parallel for, and this way you won't be using it in there.
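
The restructuring the answer describes can be sketched in C# like this; the Camera type, its fields, and the Shade helper are hypothetical, and the standard Parallel.For stands in for Alea's GPU parallel-for:

    using System.Threading.Tasks;

    // Hypothetical camera type standing in for the asker's class.
    class Camera
    {
        public float PosX, PosY, PosZ;
        public float Fov;
    }

    static class GpuFriendlyLoop
    {
        // Static helper that takes only plain values, so no reference to the
        // camera object has to be captured inside the parallel loop.
        static float Shade(int pixel, float posX, float posY, float posZ, float fov)
        {
            return fov * pixel + posX; // placeholder for the real raymarching math
        }

        public static float[] Render(Camera cam, int pixelCount)
        {
            var result = new float[pixelCount];

            // Copy the camera's fields into local value-type variables first...
            float posX = cam.PosX, posY = cam.PosY, posZ = cam.PosZ, fov = cam.Fov;

            // ...then use only those locals and static methods inside the loop
            // (Parallel.For is a stand-in for Alea's GPU parallel-for here).
            Parallel.For(0, pixelCount, i =>
            {
                result[i] = Shade(i, posX, posY, posZ, fov);
            });

            return result;
        }
    }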

            Source https://stackoverflow.com/questions/55958819

            QUESTION

            The most effective way of computing distance to sphere in CUDA?
            Asked 2019-May-02 at 21:08

            I'm raymarching Signed Distance Fields in CUDA and the scene I'm rendering contains thousands of spheres (spheres have their location stored in device buffer, so my SDF function iterates through all of the spheres for each pixel).

            Currently, I'm computing distance to sphere surface as:

            sqrtf( dot( pos - sphere_center, pos - sphere_center ) ) - sphere_radius

With the sqrt() function, rendering took about 250 ms for my scene. However, when I removed the call to sqrt() and left just dot( pos - sphere_center, pos - sphere_center ) - sphere_radius, the rendering time dropped to 17 ms (and rendered a black image).

The sqrt() function seems to be the bottleneck, so I want to ask if there is a way I can improve my rendering time (either by using a different formula that does not use a square root, or by using a different rendering approach)?

I'm already using -use-fast-math.

Edit: I've tried the formula suggested by Nico Schertler, but it didn't work in my renderer. Link to M(n)WE on Shadertoy.

            ...

            ANSWER

            Answered 2019-May-02 at 21:08

            (Making my comment into an answer since it seems to have worked for OP)

            You're feeling the pain of having to compute sqrt(). I sympathize... It would be great if you could just, umm, not do that. Well, what's stopping you? After all, the square-distance to a sphere is a monotone function from $R^+$ to $R^+$ - hell, it's actually a convex bijection! The problem is that you have non-squared distances coming from elsewhere, and you compute:

            Source https://stackoverflow.com/questions/55930232

            QUESTION

            GPU ray casting (single pass) with 3d textures in spherical coordinates
            Asked 2019-Apr-17 at 14:36

I am implementing a "GPU ray casting single pass" volume rendering algorithm. For this, I use a float array of intensity values as a 3D texture (this 3D texture describes a regular 3D grid in spherical coordinates).

Here is an example of the array values:

            ...

            ANSWER

            Answered 2019-Apr-13 at 16:32

I do not know what you are rendering or how. There are many techniques and configurations that can achieve this. I usually use a single-pass, single-quad render covering the screen/view, while the geometry/scene is passed as a texture. As you have your object in a 3D texture, I think you should go this way too. This is how it's done (assuming a perspective projection and a uniform spherical voxel grid as a 3D texture):

1. CPU side code

  Simply render a single quad covering the scene/view. To make this simpler and more precise, I recommend using your sphere's local coordinate system for the camera matrix that is passed to the shaders (it will ease the ray/sphere intersection computations a lot).

2. Vertex

  Here you should compute the ray position and direction for each vertex and pass them to the fragment shader so they are interpolated for each pixel on the screen/view.

  The camera is described by its position (focal point) and view direction (usually the Z- axis in perspective OpenGL). The ray is cast from the focal point (0,0,0) in camera-local coordinates towards the znear plane at (x,y,-znear), also in camera-local coordinates, where x,y is the pixel screen position with aspect-ratio corrections applied if the screen/view is not square.

  You then just convert these two points into sphere-local coordinates (still Cartesian).

  The ray direction is just the subtraction of the two points...

3. Fragment

  First normalize the ray direction passed from the vertex shader (due to interpolation it will not be a unit vector). After that, simply test the ray/sphere intersection for each radius of the spherical voxel grid from the outside inward, so test spheres from rmax down to rmax/n, where rmax is the max radius your 3D texture can have and n is its resolution along the axis corresponding to the radius r.

  On each hit, convert the Cartesian intersection position to spherical coordinates. Convert them to texture coordinates s,t,p, fetch the voxel intensity, and apply it to the color (how depends on what and how you are rendering).

  So if your texture coordinates are (r,theta,phi), assuming phi is longitude, the angles are normalized to <-Pi,Pi> and <0,2*Pi>, and rmax is the max radius of the 3D texture, then:
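
The conversion code from the answer is not included above. The following C# sketch shows one plausible Cartesian-to-spherical-to-texture-coordinate mapping; the exact angle conventions and the s,t,p ordering are assumptions, since the original GLSL is not shown here:

    using System;
    using System.Numerics;

    static class SphericalTexCoords
    {
        // Sketch of the Cartesian -> spherical -> texture-coordinate step
        // described above. Assumes Z is "up", theta is elevation, phi is longitude.
        public static Vector3 FromCartesian(Vector3 p, float rmax)
        {
            float r = p.Length();
            float theta = MathF.Asin(p.Z / r);   // elevation in [-Pi/2, Pi/2]
            float phi = MathF.Atan2(p.Y, p.X);   // longitude in (-Pi, Pi]
            if (phi < 0f) phi += 2f * MathF.PI;  // wrap longitude into [0, 2*Pi)

            // Normalize each coordinate into the [0, 1] range used for texture fetches.
            return new Vector3(
                r / rmax,
                theta / MathF.PI + 0.5f,
                phi / (2f * MathF.PI));
        }
    }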

            Source https://stackoverflow.com/questions/55528378

            QUESTION

            OpenGL compare ray distance with depth buffer
            Asked 2019-Apr-11 at 20:27

            I'm trying to mix raymarching and usual geometry rendering but I don't understand how to correctly compare the distance from the ray with the depth value I stored on my buffer.

            On the raymarching side, I have rays starting from the eye (red dot). If you only render fragments at fixed ray distance 'T', you will see a curved line (in yellow on my drawing). I understand this because if you start from the ray origins Oa and Ob, and follow the direction Da and Db during T units (Oa + T * Da, and Ob + T * Db), you see that only the ray at the middle of the screen reaches the blue plane.

            Now on the geometry side, I stored values directly from gl_FragCoord.z. But I don't understand why we don't see this curved effect there. I edited this picture in gimp, playing with 'posterize' feature to make it clear.

            We see straight lines, not curved lines.

I am OK with converting this depth value to a distance (taking the linearization into account). But then, when comparing both distances, I get problems at the sides of my screen.

There is one thing I am missing about projection and how depth values are stored... I supposed the depth value was the (remapped) distance from the near plane along the direction of rays starting from the eye.

EDIT: It seems writing this question helped me a bit. I now see why we don't see the curve effect in the depth buffer: because the distance between near and far is bigger for the rays at the sides of my screen. Thus even if the depth values are the same (middle or side of the screen), the distances are not.

            Thus it seems my problem come from the way I convert depth to distance. I used the following:

            ...

            ANSWER

            Answered 2017-Aug-27 at 12:04

You're overcomplicating the matter. There is no "curve effect" because a plane isn't curved. The term z value literally describes what it is: the z coordinate in some Euclidean coordinate space. And in such a space, z=C will form a plane parallel to the plane spanned by the xy axes of that space, at distance C.

So if you want the distance to some point, you need to take the x and y coordinates into account as well. In eye space, the camera is typically at the origin, so the distance to the camera boils down to length(x_e, y_e, z_e) (which is of course a non-linear operation that would create the "curve" you seem to expect).
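
As a hedged illustration of that point, here is a C# sketch that turns a depth-buffer value into a Euclidean eye-space distance for a standard symmetric OpenGL perspective projection; the parameter names and projection assumptions are mine, not the answer's code:

    using System;

    static class DepthToDistance
    {
        // Converts a value read from the depth buffer into the Euclidean distance
        // from the eye, which is what must be compared against the ray parameter T.
        public static float EyeDistance(float depth01, float ndcX, float ndcY,
                                        float near, float far, float fovYRadians, float aspect)
        {
            // Undo the [0, 1] depth range: NDC z in [-1, 1].
            float zNdc = depth01 * 2f - 1f;

            // Linearize: positive distance along the view axis (standard GL projection).
            float zEye = 2f * near * far / (far + near - zNdc * (far - near));

            // Reconstruct the fragment's eye-space x and y from its NDC position.
            float tanHalfFov = MathF.Tan(fovYRadians * 0.5f);
            float xEye = ndcX * zEye * tanHalfFov * aspect;
            float yEye = ndcY * zEye * tanHalfFov;

            // The answer's point: distance is length(x_e, y_e, z_e), not just z.
            return MathF.Sqrt(xEye * xEye + yEye * yEye + zEye * zEye);
        }
    }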

            Source https://stackoverflow.com/questions/45904365

            QUESTION

            Webgl - dynamic mouse position
            Asked 2017-Oct-08 at 09:44

            I am playing around with webgl copied from this page. I want to use mouse to control the light position.

            If I just send the static value from js to fragment shader it will work:

            ...

            ANSWER

            Answered 2017-Oct-08 at 09:44

Finally decided to post this answer (edited).

As gman commented, there is apparently nothing wrong with your code. Technically, this should work. However, you pointed out a really tricky aspect of the WebGL/JavaScript pair compared to OpenGL: typing. Due to how JavaScript variables work, you can easily generate strange bugs.

Imagine this absurd situation:

            Source https://stackoverflow.com/questions/46616344

            QUESTION

            what's wrong with it - or how to find correct THREE.PerspectiveCamera settings
            Asked 2017-Mar-13 at 11:47

            I have a simple THREE.Scene where the main content is a THREE.Line mesh that visualizes the keyframe based path that the camera will follow for some scripted animation. There is then one THREE.SphereGeometry based mesh that is always repositioned to the current camera location.

            The currently WRONG result looks like this (the fractal background is rendered independently but using the same keyframe input - and ultimately the idea is that the "camera path" visualization ends up in the same scale/projection as the respective fractal background...):

            The base is an array of keyframes, each of which represents the modelViewMatrix for a specific camera position/orientation and is directly used to drive the vertexshader for the background, e.g.:

            ...

            ANSWER

            Answered 2017-Mar-13 at 11:47

            Finally I have found one hack that works.

            Actually the problem was made up of two parts:

1) Row- vs column-major order of the modelViewMatrix: the order expected by the vertex shader is the opposite of what the rest of THREE.js expects.

            2) Object3D-hierarchy: i.e. Scene, Mesh, Geometry, Line vertices + Camera: where to put the modelViewMatrix data so that it creates the desired result (i.e. the same result that the old bloody opengl application produced): I am not happy with the hack that I found here - but so far it is the only one that seems to work:

            • I DO NOT touch the Camera.. it stays at 0/0/0
            • I directly move all the vertices of my "line"-Geometry relative to the real camera position (see "position" from the modelViewMatrix)
            • I then disable "matrixAutoUpdate" on the Mesh that contains my "line" Geometry and copy the modelViewMatrix (in which I first zeroed out the "position") into the "matrix" field.

            BINGO.. then it works. (All of my attempts to achieve the same result by rotating/displacing the Camera or by displacing/rotating any of the Object3Ds involved have failed miserably..)

EDIT: I found a better way than updating the vertices, which at least keeps all the manipulations on the Mesh level (I am still moving the world around, like the old OpenGL app would have done). To get the right sequence of translation/rotation, one can also use ("m" is still the original OpenGL modelViewMatrix, with 0/0/0 position info):

            Source https://stackoverflow.com/questions/42742302

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install RayMarching

            You can download it from GitHub.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask questions on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/ArtemOnigiri/RayMarching.git

          • CLI

            gh repo clone ArtemOnigiri/RayMarching

          • sshUrl

            git@github.com:ArtemOnigiri/RayMarching.git
