raytracer | A toy raytracer in Rust | GPU library

by bheisler | Language: Rust | Version: Current | License: MIT

kandi X-RAY | raytracer Summary

raytracer is a Rust library typically used in Hardware and GPU applications. It has no reported bugs or vulnerabilities, carries a permissive license, and has low support activity. You can download it from GitHub.

This is a toy raytracer I wrote in Rust to learn how raytracers work. I also wrote a series of posts on it, starting here.

Support

raytracer has a low-activity ecosystem.
It has 181 stars, 17 forks, and 10 watchers.
It has had no major release in the last 6 months.
There is 1 open issue and 2 have been closed. On average, issues are closed in 2 days. There are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of raytracer is current.

Quality

              raytracer has 0 bugs and 0 code smells.

Security

              raytracer has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              raytracer code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              raytracer is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

              raytracer releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.


            raytracer Key Features

            No Key Features are available at this moment for raytracer.

            raytracer Examples and Code Snippets

            No Code Snippets are available at this moment for raytracer.

            Community Discussions

            QUESTION

            Optimize for expensive fragment shader
            Asked 2022-Mar-18 at 16:42

I'm rendering multiple layers of flat triangles with a raytracer in the fragment shader. The upper layers have holes, and I'm looking for a way to avoid running the shader for pixels that are already filled by one of the upper layers, i.e. I want to render only the parts of the lower layers that lie in the holes of the upper layers. Of course, whether or not there is a hole is not known until the fragment shader has done its work for a layer.

            As far as I understand, I cannot use early depth testing because there, the depth values are interpolated between the vertices and do not come from the fragment shader. Is there a way to "emulate" that behavior?

            ...

            ANSWER

            Answered 2022-Mar-18 at 16:42

            The best way to solve this issue is to not use layers. You are only using layers because of the limitations of using a 3D texture to store your scene data. So... don't do that.

            SSBOs and buffer textures (if your hardware is too old for SSBOs) can access more memory than a 3D texture. And you could even employ manual swizzling of the data to improve cache locality if that is viable.

            As far as I understand, I cannot use early depth testing because there, the depth values are interpolated between the vertices and do not come from the fragment shader.

            This is correct insofar as you cannot use early depth tests, but it is incorrect as to why.

            The "depth" provided by the VS doesn't need to be the depth of the actual fragment. You are rendering your scene in layers, presumably with each layer being a full-screen quad. By definition, everything in one layer of rendering is beneath everything in a lower layer. So the absolute depth value doesn't matter; what matters is whether there is something from a higher layer over this fragment.

            So each layer could get its own depth value, with lower layers getting a lower depth value. The exact value is arbitrary and irrelevant; what matters is that higher layers have higher values.

            The reason this doesn't work is this: if your raytracing algorithm detects a miss within a layer (a "hole"), you must discard that fragment. And the use of discard at all turns off early depth testing in most hardware, since the depth testing logic is usually tied to the depth writing logic (it is an atomic read/conditional-modify/conditional-write).

            Source https://stackoverflow.com/questions/71529404

            QUESTION

            Ray-triangle intersection algorithm not working
            Asked 2022-Jan-12 at 02:11

            I am writing a raytracer using Java, but I ran into an issue with intersections between rays and triangles. I am using the algorithm given at Scratchapixel, but it is not working properly.

            I am testing it using the following code:

            ...

            ANSWER

            Answered 2022-Jan-12 at 02:11

The issue was quite simple: my cross product implementation was wrong, and after fixing that I only had to change one line of code.
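The fixed Java code isn't shown above. For reference only, here is a minimal sketch (in Rust rather than Java, and not the poster's code) of a cross product plus a Möller–Trumbore ray/triangle test:

```rust
// Minimal reference sketch: 3D cross product and Möller–Trumbore
// ray/triangle intersection. Types and names are made up for this example.
#[derive(Clone, Copy, Debug)]
struct Vec3 { x: f64, y: f64, z: f64 }

impl Vec3 {
    fn sub(self, o: Vec3) -> Vec3 { Vec3 { x: self.x - o.x, y: self.y - o.y, z: self.z - o.z } }
    fn dot(self, o: Vec3) -> f64 { self.x * o.x + self.y * o.y + self.z * o.z }
    // The y component has the reversed operand order; it is a common spot for sign mistakes.
    fn cross(self, o: Vec3) -> Vec3 {
        Vec3 {
            x: self.y * o.z - self.z * o.y,
            y: self.z * o.x - self.x * o.z,
            z: self.x * o.y - self.y * o.x,
        }
    }
}

/// Returns the distance t along the ray if it hits the triangle (v0, v1, v2).
fn ray_triangle(orig: Vec3, dir: Vec3, v0: Vec3, v1: Vec3, v2: Vec3) -> Option<f64> {
    let e1 = v1.sub(v0);
    let e2 = v2.sub(v0);
    let pvec = dir.cross(e2);
    let det = e1.dot(pvec);
    if det.abs() < 1e-9 { return None; }              // ray is parallel to the triangle
    let inv_det = 1.0 / det;
    let tvec = orig.sub(v0);
    let u = tvec.dot(pvec) * inv_det;
    if !(0.0..=1.0).contains(&u) { return None; }
    let qvec = tvec.cross(e1);
    let v = dir.dot(qvec) * inv_det;
    if v < 0.0 || u + v > 1.0 { return None; }
    let t = e2.dot(qvec) * inv_det;
    if t > 1e-9 { Some(t) } else { None }
}

fn main() {
    // Ray straight down +z into a triangle sitting at z = 1.
    let hit = ray_triangle(
        Vec3 { x: 0.0, y: 0.0, z: -1.0 },
        Vec3 { x: 0.0, y: 0.0, z: 1.0 },
        Vec3 { x: -1.0, y: -1.0, z: 1.0 },
        Vec3 { x: 1.0, y: -1.0, z: 1.0 },
        Vec3 { x: 0.0, y: 1.0, z: 1.0 },
    );
    println!("{:?}", hit); // Some(2.0)
}
```

A sign error in the cross product is exactly the kind of bug that makes an otherwise correct intersection routine fail, so it is worth testing it against known values before debugging the rest.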

            I changed

            Source https://stackoverflow.com/questions/70674273

            QUESTION

Can you write depth values to a depth buffer in a compute shader? (Vulkan GLSL)
            Asked 2021-Sep-25 at 15:22

I have a raytracer that I need to use in combination with traditional triangle projection techniques, and I need the raytraced image to be able to occlude the projected triangles. The easiest way would be to write depth values directly to a depth buffer.

            Apparently imageStore can only work with color images. Is there a mechanism I can use? The only alternative is to store depth in a color image and then make a dummy shader that sets the depth in a fragment shader.

            ...

            ANSWER

            Answered 2021-Sep-25 at 15:22

            https://vulkan.gpuinfo.org/listoptimaltilingformats.php

            It would appear that most implementations don't allow using depth images as storage images. I suggest creating an extra image and copying/blitting it to the depth image.

            Source https://stackoverflow.com/questions/69322614

            QUESTION

            Raytracing reflection incorrect
            Asked 2021-Jul-08 at 14:05

Hello, I built a raytracer in Java and everything works fine, but if I place 3 spheres on the same z axis the reflection doesn't work; if I change the z position of the spheres it works fine. In the picture below you can see that the one sphere reflects correctly when it is not on the same z axis.

Raytracer screenshot: https://i.stack.imgur.com/MSeCp.png

Below is my code for calculating the intersection.

            ...

            ANSWER

            Answered 2021-Jun-28 at 15:06

The problem was in the v[] array:
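The answer's actual fix isn't included above. As a generic point of comparison (a minimal Rust sketch, unrelated to the poster's Java code), here is a ray/sphere intersection that always returns the nearest hit strictly in front of the ray; returning the wrong root, or accepting hits at t ≤ 0, is a classic way for reflections to break only for particular object placements:

```rust
// Reference sketch: ray/sphere intersection returning the nearest hit
// strictly in front of the ray origin. Names are made up for this example.
fn hit_sphere(origin: [f64; 3], dir: [f64; 3], center: [f64; 3], radius: f64) -> Option<f64> {
    let oc = [origin[0] - center[0], origin[1] - center[1], origin[2] - center[2]];
    let a = dir[0] * dir[0] + dir[1] * dir[1] + dir[2] * dir[2];
    let half_b = oc[0] * dir[0] + oc[1] * dir[1] + oc[2] * dir[2];
    let c = oc[0] * oc[0] + oc[1] * oc[1] + oc[2] * oc[2] - radius * radius;
    let disc = half_b * half_b - a * c;
    if disc < 0.0 {
        return None;
    }
    let sqrt_d = disc.sqrt();
    // Try the nearer root first, then the farther one; reject hits behind
    // (or numerically on top of) the origin with a small epsilon.
    const EPS: f64 = 1e-6;
    for t in [(-half_b - sqrt_d) / a, (-half_b + sqrt_d) / a] {
        if t > EPS {
            return Some(t);
        }
    }
    None
}

fn main() {
    // Camera at the origin looking down -z, unit sphere 5 units away.
    let t = hit_sphere([0.0, 0.0, 0.0], [0.0, 0.0, -1.0], [0.0, 0.0, -5.0], 1.0);
    println!("{:?}", t); // Some(4.0)
}
```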

            Source https://stackoverflow.com/questions/68156396

            QUESTION

Why does my metal material look completely black on meshes?
            Asked 2021-Jun-28 at 10:42

            I was working on my ray tracer written in C++ following this series of books: Ray Tracing in One Weekend. I started working a little bit on my own trying to implement features that weren't described in the book, like a BVH tree builder using SAH, transforms, triangles and meshes.

NOTE: The BVH implementation is based on two main resources: this article, Bounding Volume Hierarchies, and C-Ray (a ray tracer written in C).

After I implemented all of that, I noticed some weirdness when trying to use certain materials on meshes. For example, as the title says, the metal material looks completely black:

In the first image you can see how the metal material should look, and in the second one you can see how it looks on meshes.

            I spent a lot of time trying to figure out what the issue was but I couldn't find it and I couldn't find a way of tracking it.

            If you want to take a look at the code for more clarity the ray tracer is on GitHub at https://github.com/ITHackerstein/RayTracer. The branch on which I'm implementing meshes is meshes.

To replicate my build environment I suggest you follow these build instructions:

$ git clone https://github.com/ITHackerstein/RayTracer
$ cd RayTracer
$ git checkout meshes
$ mkdir build
$ cd build
$ cmake ..
$ make
$ mkdir tests

At this point you're almost ready to go, except you need the TOML scene file and the OBJ file I'm using, which are these two:

Download them and place them in build/tests; after that, make sure you are in the build folder and run it using the following command:

$ ./RayTracer tests/boh.toml

After it finishes running you should have a tests/boh.ppm file, which is the resulting image stored in PPM format. If you don't have software that lets you open it, there are multiple viewers available online.

NOTE: My platform is Linux; I didn't test it on Windows or macOS.

            EDIT

            Does the mesh work with other materials?

As you can see in the first image, and especially in the second one, there are some darker rectangular spots, and the lighting also seems somewhat messed up. The third image gives an idea of how it works on a normal primitive.

            ...

            ANSWER

            Answered 2021-Jun-28 at 10:42

            I finally figured it out thanks to the tips that @Wyck gave me. The problem was in the normals, I noticed that the Metal::scatter method received a normal that was almost zero. So that's why it was returning black. After some logging, I found out that the Instance::intersects_ray method was not normalizing the transformed normal vector, and that's what caused the issue. So, in the end, the solution was simpler than I thought it would be.
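The project's real code is in the linked repository; below is only a small illustrative Rust sketch of the fix described above (the Vec3/Mat3 types are made up for this example): after transforming a hit normal by an instance transform, renormalize it, because otherwise scaling leaves a near-zero normal and metal-style scattering returns black.

```rust
// Sketch of "renormalize the transformed normal". Not the project's code.
#[derive(Clone, Copy, Debug)]
struct Vec3 { x: f64, y: f64, z: f64 }

impl Vec3 {
    fn length(self) -> f64 { (self.x * self.x + self.y * self.y + self.z * self.z).sqrt() }
    fn normalized(self) -> Vec3 {
        let len = self.length();
        Vec3 { x: self.x / len, y: self.y / len, z: self.z / len }
    }
}

/// 3x3 linear part of an instance transform, row-major.
struct Mat3([[f64; 3]; 3]);

impl Mat3 {
    fn mul_vec(&self, v: Vec3) -> Vec3 {
        let m = &self.0;
        Vec3 {
            x: m[0][0] * v.x + m[0][1] * v.y + m[0][2] * v.z,
            y: m[1][0] * v.x + m[1][1] * v.y + m[1][2] * v.z,
            z: m[2][0] * v.x + m[2][1] * v.y + m[2][2] * v.z,
        }
    }
}

fn transform_normal(m: &Mat3, n: Vec3) -> Vec3 {
    // For a pure rotation the matrix itself is fine; with non-uniform scaling
    // you would use the inverse-transpose here. Either way, renormalize.
    m.mul_vec(n).normalized()
}

fn main() {
    // A uniform scale of 0.001 shrinks the normal to near zero unless we renormalize.
    let scale = Mat3([[0.001, 0.0, 0.0], [0.0, 0.001, 0.0], [0.0, 0.0, 0.001]]);
    let n = Vec3 { x: 0.0, y: 1.0, z: 0.0 };
    println!("{:?}", transform_normal(&scale, n)); // Vec3 { x: 0.0, y: 1.0, z: 0.0 }
}
```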

            Source https://stackoverflow.com/questions/68133817

            QUESTION

            Copy a VkImage after TraceRaysKHR to CPU
            Asked 2021-Jun-06 at 09:08

            Copying a VkImage that is being used to render to an offscreen framebuffer gives a black image.

When using a rasterizer the rendered image is non-empty, but as soon as I switch to ray tracing the output image is empty:

            ...

            ANSWER

            Answered 2021-Jun-06 at 09:08

Resolved by now: when submitting the command buffer to the queue, an additional vkQueueWaitIdle(m_queue) is required, since the ray tracing work finishes with a certain latency.

            Source https://stackoverflow.com/questions/67765292

            QUESTION

FASM x64 Windows GDI programming struggles - call to StretchDIBits not painting screen as expected
            Asked 2021-Jun-02 at 00:38

I have a simple FASM program in which I get some zeroed memory from Windows through VirtualAlloc. I then have a procedure that simply sets up the parameters and calls StretchDIBits, passing a pointer to the empty memory buffer. I therefore expect the screen to be drawn black. That, however, is not the case, and I can't for the life of me figure out why.

            Below is the code.

            ...

            ANSWER

            Answered 2021-May-31 at 06:32

I'm sorry, I don't know much about FASM, so I tried to reproduce the problem in C++:

            Source https://stackoverflow.com/questions/67766028

            QUESTION

            What is the difference between ray tracing, ray casting, ray marching and path tracing?
            Asked 2021-May-02 at 08:31

As far as I know, all the techniques mentioned in the title are rendering algorithms that seem quite similar. All ray-based techniques seem to revolve around casting rays through each pixel of an image, where the rays are supposed to represent rays of real light. This allows rendering very realistic images.

As a matter of fact, I am making a simple program that renders such images myself, based on Ray Tracing in One Weekend.

            Now the thing is that I wanted to somehow name this program. I used the term “ray tracer” as this is the one used in the book.

I have heard a lot of different terms, however, and I would be interested to know what exactly the difference is between ray tracing, ray marching, ray casting, path tracing, and potentially any other common ray-related algorithms. I was able to find some comparisons of these techniques online, but they all compared only two of them and some definitions overlapped, so I wanted to ask this question about all four techniques.

            ...

            ANSWER

            Answered 2021-May-02 at 08:31

            My understanding of this is:

1. ray casting

  uses a raster image to hold the scene, usually stops on the first hit (no reflections or ray splitting), and does not necessarily cast a ray per pixel (often per row or column of the screen). The 3D version of this is called voxel-space ray casting; however, the map is not an actual voxel space, but rather two raster images (RGB and height).

              For more info see:

2. (backward) ray tracing

  This usually follows the physical properties of light, so rays split into reflected and refracted rays, and we usually stop after some number of hits. The scene is represented either with BR meshes, with analytical equations, or both.

  For more info see:

  The "backward" means we cast the rays from the camera into the scene (on a per-pixel basis) instead of from the light source in every direction, which speeds the process up a lot at the cost of incorrect lighting (but that can be remedied with additional methods on top of this).

I am not so sure about the other terms, as I do not use those techniques (at least not knowingly):

1. path tracing

  is an optimization technique that avoids the recursive ray splitting of ray tracing by using a Monte Carlo (stochastic) approach. So it does not actually split the ray, but chooses randomly between the two options (similar to how photons behave in the real world), and multiple rendered frames are then blended together.

2. ray marching

  is an optimization technique that speeds up ray tracing by using an SDF (signed distance function) to determine a safe advance along the ray that cannot hit anything. It is, however, confined to analytically defined scenes. (A small sketch of the idea follows this list.)
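As a small illustration of the ray marching idea above (a standalone sketch, not tied to any particular renderer): advance along the ray by the signed distance to the nearest surface, until that distance is tiny (a hit) or the ray has travelled too far (a miss).

```rust
// Sphere tracing / ray marching against an SDF. Function names are made up
// for this example; the scene here is a single sphere.
fn sdf_sphere(p: [f64; 3], center: [f64; 3], radius: f64) -> f64 {
    let d = [p[0] - center[0], p[1] - center[1], p[2] - center[2]];
    (d[0] * d[0] + d[1] * d[1] + d[2] * d[2]).sqrt() - radius
}

/// March along origin + t * dir (dir assumed normalized); return the hit distance, if any.
fn ray_march(origin: [f64; 3], dir: [f64; 3]) -> Option<f64> {
    let mut t = 0.0;
    for _ in 0..128 {
        let p = [origin[0] + t * dir[0], origin[1] + t * dir[1], origin[2] + t * dir[2]];
        let d = sdf_sphere(p, [0.0, 0.0, -5.0], 1.0); // distance to the closest surface
        if d < 1e-4 {
            return Some(t); // close enough: count it as a hit
        }
        t += d;             // safe step: nothing can be closer than d
        if t > 100.0 {
            break;          // ray escaped the scene
        }
    }
    None
}

fn main() {
    println!("{:?}", ray_march([0.0, 0.0, 0.0], [0.0, 0.0, -1.0])); // approximately Some(4.0)
}
```

Because every step is the distance to the nearest surface, the march can never tunnel through geometry; that guarantee is exactly what the SDF provides.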

            Source https://stackoverflow.com/questions/67347177

            QUESTION

            Transparent/noisy spheres when using simple diffuse calculations
            Asked 2021-Feb-13 at 07:35

I've been trying to write a raytracer, but I came across a problem when trying to implement simple diffuse calculations (trying to replicate the first ones from Ray Tracing in One Weekend, but without following the guide).

            Here's the relevant code:

            Intersection/diffuse calculations:

            ...

            ANSWER

            Answered 2021-Feb-10 at 19:19

I'm too lazy to debug your code, but the screenshot and a quick look at the source hint at accuracy problems. So try using 64-bit doubles instead of 32-bit floats...

Intersections between a ray and an ellipsoid/sphere tend to be noisy with just floats... once refraction and reflection are added on top of that, the noise multiplies...

It also sometimes helps to use relative coordinates instead of absolute ones (that can make a huge impact even with floats). For more info see:
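To make the precision point concrete, here is a standalone sketch (not the questioner's code): the same ray/sphere discriminant computed in f32 and f64 diverges once the sphere sits far from the ray origin, which is exactly the kind of error that shows up as noisy or "transparent" spheres.

```rust
// The same half-b discriminant for a ray from (0, 0, 0) along -z toward a
// sphere centred on the z axis, once in f32 and once in f64.
fn discriminant_f32(origin_z: f32, center_z: f32, radius: f32) -> f32 {
    let oc = origin_z - center_z; // z component of (origin - center)
    let half_b = -oc;             // oc . dir with dir = (0, 0, -1)
    let c = oc * oc - radius * radius;
    half_b * half_b - c
}

fn discriminant_f64(origin_z: f64, center_z: f64, radius: f64) -> f64 {
    let oc = origin_z - center_z;
    let half_b = -oc;
    let c = oc * oc - radius * radius;
    half_b * half_b - c
}

fn main() {
    // Unit sphere 10,000 units away: the true discriminant is 1.0, but in f32
    // the radius-squared term is swallowed by rounding and the result collapses to 0.
    println!("f32: {}", discriminant_f32(0.0, -10_000.0, 1.0));
    println!("f64: {}", discriminant_f64(0.0, -10_000.0, 1.0));
}
```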

            Source https://stackoverflow.com/questions/66136360

            QUESTION

            Multi threading Raytracer
            Asked 2021-Jan-27 at 02:18

I am making a raytracer and I'm trying to use pthreads to divide the rendering. I noticed it isn't helping with the speed because pthread_join is too slow; if I use a loop as the 'await' it is much faster and works fine almost every time. But I can't use that, because the rendering time changes with the scene. Is there a more efficient way to check whether a thread is finished? This is the code.

            ...

            ANSWER

            Answered 2021-Jan-26 at 22:32

            You have a concurrency problem here in your thread function:
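The C/pthread code from the question isn't shown above. As a general illustration of the divide-and-join pattern (a Rust sketch, since this page is about a Rust raytracer; it uses std::thread::scope from Rust 1.63+): give each thread its own disjoint band of rows, and joining happens automatically when the scope ends, so no manual polling loop is needed.

```rust
// Sketch: split a render into per-thread row bands with scoped threads.
// Each thread mutates only its own slice of the framebuffer.
fn shade(x: usize, y: usize) -> u8 {
    // Stand-in for the real per-pixel raytracing work.
    ((x ^ y) & 0xff) as u8
}

fn main() {
    const WIDTH: usize = 640;
    const HEIGHT: usize = 480;
    let num_threads = 4;
    let mut framebuffer = vec![0u8; WIDTH * HEIGHT];

    // Split the framebuffer into disjoint row bands, one per thread.
    let rows_per_band = (HEIGHT + num_threads - 1) / num_threads;
    std::thread::scope(|s| {
        for (band_index, band) in framebuffer.chunks_mut(rows_per_band * WIDTH).enumerate() {
            s.spawn(move || {
                let first_row = band_index * rows_per_band;
                for (i, pixel) in band.iter_mut().enumerate() {
                    let x = i % WIDTH;
                    let y = first_row + i / WIDTH;
                    *pixel = shade(x, y);
                }
            });
        }
        // All spawned threads are joined automatically when the scope ends.
    });

    println!("rendered {} pixels", framebuffer.len());
}
```

Because every thread owns a non-overlapping &mut slice, there is no shared mutable state to protect, and the join cost is paid once per band rather than once per poll.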

            Source https://stackoverflow.com/questions/65909659

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install raytracer

            You can download it from GitHub.
Rust is installed and managed by the rustup tool. Rust has a 6-week rapid release process and supports a great number of platforms, so there are many builds of Rust available at any time. Please refer to rust-lang.org for more information.

            Support

For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check for and ask them on the Stack Overflow community page.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/bheisler/raytracer.git

          • CLI

            gh repo clone bheisler/raytracer

• SSH

            git@github.com:bheisler/raytracer.git


Consider Popular GPU Libraries

taichi by taichi-dev
gpu.js by gpujs
hashcat by hashcat
cupy by cupy
EASTL by electronicarts

Try Top Libraries by bheisler

criterion.rs by bheisler (Rust)
RustaCUDA by bheisler (Rust)
iai by bheisler (Rust)
TinyTemplate by bheisler (Rust)
cargo-criterion by bheisler (Rust)