ray-tracer | A photo-realistic 3D rendering engine | Graphics library

 by zhijian-liu | C++ | Version: Current | License: MIT

kandi X-RAY | ray-tracer Summary

ray-tracer is a C++ library typically used in User Interface and Graphics applications. It has no reported bugs or vulnerabilities, it has a permissive license, and it has low support. You can download it from GitHub.

This project is a photo-realistic 3D rendering engine written in C++.

            Support

              ray-tracer has a low-activity ecosystem.
              It has 11 stars, 4 forks, and 4 watchers.
              It had no major release in the last 6 months.
              There are 0 open issues and 1 closed issue. On average, issues are closed in 96 days. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of ray-tracer is current.

            Quality

              ray-tracer has no bugs reported.

            Security

              ray-tracer has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            License

              ray-tracer is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              ray-tracer releases are not available. You will need to build from source code and install.
              Installation instructions are not available. Examples and code snippets are available.

            ray-tracer Key Features

            No Key Features are available at this moment for ray-tracer.

            ray-tracer Examples and Code Snippets

            No Code Snippets are available at this moment for ray-tracer.

            Community Discussions

            QUESTION

            Build dependency in library or executable section of cabal file?
            Asked 2020-Jun-09 at 17:23

            First off, I'm new to using cabal and external packages with Haskell.

            I'm trying to use the Graphics.Gloss package inside of MyLib. I can make it work if I include gloss in the build-depends of both the library and the executable.

            Here is the relevant portion of the cabal file:

            ...

            ANSWER

            Answered 2020-Jun-09 at 17:23

            You can make it work. There is a problem in your current setup: the files for the library and the executable are in the same directory. (See also the question "How to avoid recompiling in this cabal file?", which is a symptom of the same underlying problem.) When you build the executable, it rebuilds MyLib from scratch (which requires the gloss dependency) instead of reusing your library that was already built. A sketch of a layout that avoids this is shown below.
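
            A minimal sketch of such a layout, assuming a hypothetical package name my-project and the conventional src/ and app/ source directories (MyLib and gloss are from the question; everything else is illustrative):

            -- gloss is a dependency of the library only; the executable depends
            -- on the library itself, so MyLib is built once and then reused.
            library
              exposed-modules:  MyLib
              hs-source-dirs:   src
              build-depends:    base >=4 && <5, gloss
              default-language: Haskell2010

            executable my-project
              main-is:          Main.hs
              hs-source-dirs:   app
              build-depends:    base >=4 && <5, my-project
              default-language: Haskell2010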

            Source https://stackoverflow.com/questions/62286703

            QUESTION

            Why does push_back give a segmentation error?
            Asked 2020-May-09 at 12:57

            About the project:

            I am working on an OpenGL ray-tracer that can load OBJ files and ray-trace them. My application loads the OBJ file with Assimp and then sends all of the triangle faces (the vertices and the indices) to the fragment shader using shader storage buffer objects. The basic structure renders the results to a quad from the fragment shader.

            When I load bigger OBJ files (more than 100 triangles), the intersection tests take so much time that I started building a BVH tree to speed up the process. My BVH recursively splits the space into two axis-aligned bounding boxes, based on the median of the triangle faces contained in each AABB.

            I have succeeded in building the BVH tree structure (on the CPU), and now I want to convert it to a simple array and send it to the fragment shader (into a shader storage buffer).

            Here is the method responsible for converting the BVH root node into an array:

            ...

            ANSWER

            Answered 2020-May-07 at 21:05

            Your vectors store the BvhNodes everywhere by value. This means that every time you push_back a node, its copy constructor is called, which in turn copies the children vector member inside the node, which copies its own elements, and so on. This basically results in complete subtrees being copied and freed every time you insert or erase a node.

            This in turn can result in memory fragmentation, which can eventually cause a vector reallocation to fail and cause a segfault.

            Without the full code, I can recommend these two things:

            1. Store the children as (smart) pointers instead of by value (see the sketch after this list)

            2. Create a custom allocator for the vectors to enable more fine-grained debugging, and check for allocation failures
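
            As an illustration of the first recommendation, here is a minimal C++ sketch; the field names are assumptions for illustration, not the asker's actual class:

            #include <memory>
            #include <vector>

            // Hypothetical node layout: the children are owned through
            // unique_ptr, so moving a node into a vector transfers two
            // pointers instead of deep-copying entire subtrees.
            struct BvhNode {
                float aabbMin[3];                  // bounding-box corners
                float aabbMax[3];
                std::vector<int> triangleIndices;  // leaf payload
                std::unique_ptr<BvhNode> left;     // owned children, not values
                std::unique_ptr<BvhNode> right;
            };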

            Source https://stackoverflow.com/questions/61648217

            QUESTION

            How to convert a BVH node object into a simple array?
            Asked 2020-May-07 at 08:17

            I am working on an OpenGL ray-tracer that can load OBJ files and ray-trace them. My application loads the OBJ file with Assimp and then sends all of the triangle faces (the vertices and the indices) to the fragment shader using shader storage buffer objects. The basic structure renders the results to a quad from the fragment shader.

            When I load bigger OBJ files (more than 100 triangles), the intersection tests take so much time that I started building a BVH tree to speed up the process. My BVH recursively splits the space into two axis-aligned bounding boxes, based on the median of the triangle faces contained in each AABB.

            I have succeeded in building the BVH tree structure (on the CPU), and now I want to convert it to a simple array and send it to the fragment shader (into a shader storage buffer).

            This is the structure of my BVH class.

            ...

            ANSWER

            Answered 2020-May-06 at 15:48

            One way to lay out binary tree nodes in an array is: for every node at array index i, its children go at indices 2i + 1 and 2i + 2 (the standard binary-heap layout).

            Assuming you have a complete tree, you can write your tree to an array with a simple breadth-first traversal, as in the sketch below.
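
            A minimal C++ sketch of that traversal, assuming a node with unique_ptr children and per-node AABB corners (the exact fields are assumptions, not the asker's class):

            #include <algorithm>
            #include <memory>
            #include <queue>
            #include <vector>

            struct BvhNode {
                float aabbMin[3], aabbMax[3];
                std::unique_ptr<BvhNode> left, right;
            };

            // Plain-data mirror of a node, suitable for a shader storage buffer.
            struct FlatNode {
                float aabbMin[3], aabbMax[3];
            };

            // Breadth-first flattening: the i-th node popped lands at index i,
            // and because children are enqueued left-then-right, they end up at
            // indices 2i + 1 and 2i + 2 (this holds for a complete tree).
            std::vector<FlatNode> flatten(const BvhNode* root) {
                std::vector<FlatNode> out;
                std::queue<const BvhNode*> pending;
                if (root) pending.push(root);
                while (!pending.empty()) {
                    const BvhNode* node = pending.front();
                    pending.pop();
                    FlatNode flat;
                    std::copy(node->aabbMin, node->aabbMin + 3, flat.aabbMin);
                    std::copy(node->aabbMax, node->aabbMax + 3, flat.aabbMax);
                    out.push_back(flat);
                    if (node->left)  pending.push(node->left.get());
                    if (node->right) pending.push(node->right.get());
                }
                return out;
            }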

            Source https://stackoverflow.com/questions/61612614

            QUESTION

            Is there a simple math solution to sample a disk area light? (Raytracing)
            Asked 2019-Nov-15 at 21:08

            I'm trying to implement different types of lights in my ray-tracer coded in C. I have successfully implemented spot, point, directional and rectangular area lights.

            For the rectangular area light I define two vectors (U and V) in space and use them to move within the virtual (delimited) rectangle they form. Depending on the intensity of the light, I take several samples on the rectangle, then I calculate the amount of light reaching a point as though each sample were a single spot light.

            With rectangles it is very easy to find the positions of the various samples, but things get complicated when I try to do the same with a disk light. I found little documentation on this, and most of it relies on ready-made functions. The only interesting thing I found is this document (https://graphics.pixar.com/library/DiskLightSampling/paper.pdf), but I'm unable to make use of it.

            Would you know how to achieve a similar result with vector operations (e.g., given the origin, orientation, and radius of the disk and the number of samples)?

            Any advice or documentation in this regard would help me a lot.

            ...

            ANSWER

            Answered 2019-Nov-15 at 21:08

            This question reduces to:

            How can I pick a uniformly-distributed random point on a disk?

            A naive approach would be to generate random polar coordinates and transform them to Cartesian coordinates:

            1. Randomly generate an angle θ between 0 and 2π
            2. Randomly generate a distance d between 0 and the radius r of your disk
            3. Transform to Cartesian coordinates with x = d cos θ and y = d sin θ

            This is incorrect because it causes the points to bunch up in the center of the disk.

            A correct, but inefficient, way to do this is via rejection sampling:

            1. Uniformly generate random x and y, each over [-1, 1]
            2. If sqrt(x^2 + y^2) < 1, return the point
            3. Goto 1

            The correct way to do this (a code sketch follows the steps):

            1. Randomly generate an angle θ between 0 and 2π
            2. Randomly generate a value u uniformly between 0 and 1, and take the distance d = r √u
            3. Transform to Cartesian coordinates with x = d cos θ and y = d sin θ
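
            A minimal C++ sketch of these steps (easy to transliterate to C); Vec3 and the u/v basis vectors spanning the disk's plane are assumptions for illustration, mirroring the U and V vectors of the rectangular light:

            #include <cmath>
            #include <random>

            struct Vec3 { double x, y, z; };

            const double kPi = 3.14159265358979323846;

            // Returns a point uniformly distributed on the disk with the given
            // origin and radius; u and v are orthonormal vectors spanning the
            // disk's plane. The sqrt on the uniform sample compensates for the
            // fact that the area of a ring grows linearly with its radius.
            Vec3 sampleDisk(const Vec3& origin, const Vec3& u, const Vec3& v,
                            double radius, std::mt19937& rng) {
                std::uniform_real_distribution<double> uniform(0.0, 1.0);
                double theta = 2.0 * kPi * uniform(rng);      // step 1
                double d = radius * std::sqrt(uniform(rng));  // step 2
                double x = d * std::cos(theta);               // step 3
                double y = d * std::sin(theta);
                Vec3 p;
                p.x = origin.x + x * u.x + y * v.x;
                p.y = origin.y + x * u.y + y * v.y;
                p.z = origin.z + x * u.z + y * v.z;
                return p;
            }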

            Source https://stackoverflow.com/questions/58807843

            QUESTION

            OpenGL vertex shader for pinhole camera model
            Asked 2018-Oct-22 at 13:36

            I am trying to implement a simple OpenGL renderer that simulates a pinhole camera model (as defined, for example, here). Currently I use the vertex shader to map the 3D vertices to clip space, where K in the shader contains [focal length x, focal length y, principal point x, principal point y] and zrange is the depth range of the vertices.

            ...

            ANSWER

            Answered 2018-Oct-22 at 13:36

            I am trying to implement a simple OpenGL renderer that simulates a pinhole camera model.

            A standard perspective projection matrix already implements a pinhole camera model. What you're doing here is just having more calculations per vertex, which could all be pre-calculated on the CPU and put in a single matrix.

            The only difference is the z range. But a "pinhole camera" does not have a z range; all points are projected to the image plane. So what you want here is a pinhole camera model for x and y, and a linear mapping for z.

            However, your implementation is wrong. A GPU will interpolate z linearly in window space. That means it will calculate the barycentric coordinates of each fragment with respect to the 2D projection of the triangle in the window. However, when using a perspective projection, and when the triangle is not exactly parallel to the image plane, those barycentric coordinates will not be those the respective 3D point would have had with respect to the actual 3D primitive before the projection.

            The trick is that since in screen space we typically have x/z and y/z as the vertex coordinates, when we interpolate linearly in between, we also have to interpolate 1/z for the depth. However, in reality we don't divide by z, but by w (and let the projection matrix set w_clip = [+/-]z_eye for us). After the division by w_clip, we get a hyperbolic mapping of the z value, but with the nice property that it can be linearly interpolated in window space.

            What this means is that with your linear z mapping, your primitives would now have to be bent along the z dimension to get the correct result. Look at the following top-down view of the situation: the "lines" represent flat triangles, viewed from straight above.

            In eye space, the view rays would all go from the origin through each pixel (we could imagine the 2D pixel raster on the near plane, for example). In NDC, we have transformed this to an orthographic projection. The pixels can still be imagined at the near plane, but all view rays are now parallel.

            In the standard hyperbolic mapping, the point in the middle of the frustum is compressed strongly towards the far end. However, the triangle is still flat.

            If you use a linear mapping instead, your triangle would no longer be flat. Look, for example, at the intersection point between the two triangles. It must have the same x (and y) coordinate as in the hyperbolic case for the result to be correct.

            However, you only transform the vertices according to a linear z value, and the GPU still linearly interpolates the result. So you get straight connections between your transformed points, the intersection point between the two triangles is moved, and your depth values are all wrong except at the actual vertex points themselves.

            If you want to use a linear depth buffer, you have to correct the depth of each fragment in the fragment shader, implementing the required non-linear interpolation on your own. Doing so breaks a lot of the clever depth-test optimizations GPUs do, notably early Z and hierarchical Z, so while it is possible, you'll lose some performance.

            The much better solution is: just use a standard hyperbolic depth value, and linearize the depth values after you read them back. Also, don't do the z division in the vertex shader. Not only do you break z this way, you also break the perspective-corrected interpolation of the varyings, so your shading will also be wrong. Let the GPU do the division; just shuffle the correct value into gl_Position.w. Internally, the GPU not only does the divide, the perspective-corrected interpolation also depends on w.
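
            A minimal C++ sketch of the read-back linearization, assuming the default [0, 1] depth range and a standard perspective projection with the given near and far planes (this is the standard formula for the hyperbolic mapping, not code from the question):

            // Converts a depth-buffer sample back to an eye-space distance.
            float linearizeDepth(float depthSample, float zNear, float zFar) {
                float ndcZ = 2.0f * depthSample - 1.0f;        // [0, 1] -> [-1, 1]
                return 2.0f * zNear * zFar /
                       (zFar + zNear - ndcZ * (zFar - zNear)); // eye-space z
            }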

            Source https://stackoverflow.com/questions/52928147

            QUESTION

            How do I implement one of the std::ops::{Add, Sub, Mul, Div} operators without moving out the arguments?
            Asked 2018-Aug-15 at 02:34

            I'm writing a ray-tracer and I want to be able to subtract my 3D vectors:

            ...

            ANSWER

            Answered 2018-Aug-14 at 16:07

            In the example, the compiler tells you why x has been invalidated by the move:

            Source https://stackoverflow.com/questions/51844745

            QUESTION

            C++ - display progress bar when using std::async
            Asked 2018-Apr-20 at 12:15

            So I'm working on a ray-tracer, and in order to reduce rendering time, I used std::async to do pixel calculations independently. I followed this tutorial, and everything works great; indeed, I was able to save about 70% of the rendering time.

            Still, some scenes take a while to render, and I'd like to display some sort of progress bar. As I'm fairly new to the async infrastructure, I'm not quite sure how to do that. I'd like some mechanism to print the progress only from the main (calling) thread.

            Here is the rendering loop; notice the commented lines for the progress bar, which obviously should not go there:

            ...

            ANSWER

            Answered 2018-Apr-18 at 16:21

            It seems you need a lock while using std::cout; otherwise the async tasks will garble the output (they all try to print to the console at the same time). However, I recommend using GDI+ (which you already seem to use in your code) to draw the text instead of showing it in a console window, which is rather ugly. A lock-free alternative that prints only from the main thread is sketched below.
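
            This sketch uses an atomic counter instead of a lock around std::cout, a different technique from the one suggested above: each async task bumps the counter, and only the main thread prints while waiting on the futures (totalPixels and the call sites are assumptions for illustration):

            #include <atomic>
            #include <cstdio>

            std::atomic<int> pixelsDone{0};

            // Called from each worker task after it finishes a pixel.
            void onPixelFinished() {
                pixelsDone.fetch_add(1, std::memory_order_relaxed);
            }

            // Called periodically from the main thread while waiting on the futures.
            void printProgress(int totalPixels) {
                int done = pixelsDone.load(std::memory_order_relaxed);
                std::printf("\rProgress: %3d%%", 100 * done / totalPixels);
                std::fflush(stdout);
            }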

            Source https://stackoverflow.com/questions/49897743

            QUESTION

            Subtle difference between field and property in C#
            Asked 2017-Feb-14 at 09:42

            (This is not a duplicate of What is the difference between a field and a property in C# - but a more subtle issue)

            I have an immutable class Camera (from this Ray Tracing example) which works fine:

            ...

            ANSWER

            Answered 2017-Feb-14 at 09:42

            This is one of the many issues with mutable value types, and why they should be avoided altogether.

            In your first scenario:

            Source https://stackoverflow.com/questions/42221783

            Community Discussions and Code Snippets contain sources from the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install ray-tracer

            You can download it from GitHub.

            Support

            For new features, suggestions, and bugs, create an issue on GitHub. If you have questions, check and ask on Stack Overflow.

            CLONE
          • HTTPS

            https://github.com/zhijian-liu/ray-tracer.git

          • CLI

            gh repo clone zhijian-liu/ray-tracer

          • SSH

            git@github.com:zhijian-liu/ray-tracer.git
