RayTracing | Simple ray tracing library in Python for optical design

 by DCC-Lab | Python | Version: 1.3.11 | License: MIT

kandi X-RAY | RayTracing Summary

RayTracing is a Python library typically used in Simulation applications. RayTracing has no vulnerabilities, it has a build file available, it has a permissive license and it has low support. However, RayTracing has 10 bugs. You can install it using 'pip install raytracing' or download it from GitHub or PyPI.

By Prof. Daniel Côté and his group. This code aims to provide a simple ray tracing module for calculating various properties of optical paths (object, image, aperture stops, field stops). It makes use of ABCD matrices and does not consider spherical aberrations, but it can compute chromatic aberrations for simple cases when the materials are known. Since it uses the ABCD formalism (also called ray matrices or Gauss matrices), it can trace rays and Gaussian laser beams. It is not a package for "rendering in 3D with ray tracing".
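For readers unfamiliar with the ABCD formalism, the sketch below illustrates it with plain NumPy rather than with this library's own classes: a paraxial ray (height y, angle theta) is propagated through a thin lens and a stretch of free space by multiplying the standard textbook matrices. The numbers are only illustrative.

import numpy as np

# A paraxial ray is a column vector (height y, angle theta)
ray = np.array([[1.0],   # y = 1 mm above the optical axis
                [0.0]])  # theta = 0 rad (parallel to the axis)

f, d = 50.0, 50.0                    # focal length and propagation distance (mm)
lens = np.array([[1.0, 0.0],
                 [-1.0 / f, 1.0]])   # thin lens: [[1, 0], [-1/f, 1]]
space = np.array([[1.0, d],
                  [0.0, 1.0]])       # free space of length d: [[1, d], [0, 1]]

# Matrices apply right to left: first the lens, then the free space
system = space @ lens
print(system @ ray)                  # the ray crosses the axis at the focal plane (y ~ 0)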

             Support

              RayTracing has a low active ecosystem.
              It has 169 star(s) with 31 fork(s). There are 12 watchers for this library.
              There were 1 major release(s) in the last 12 months.
               There is 1 open issue and 217 have been closed. On average, issues are closed in 117 days. There is 1 open pull request and 0 closed requests.
              It has a neutral sentiment in the developer community.
               The latest version of RayTracing is 1.3.11.

             Quality

              RayTracing has 10 bugs (1 blocker, 0 critical, 9 major, 0 minor) and 1184 code smells.

             Security

              RayTracing has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              RayTracing code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

             License

              RayTracing is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

             Reuse

              RayTracing releases are available to install and integrate.
              Deployable package is available in PyPI.
              Build file is available. You can build the component from source.
              Installation instructions, examples and code snippets are available.
              RayTracing saves you 4150 person hours of effort in developing the same functionality from scratch.
              It has 9482 lines of code, 1057 functions and 95 files.
              It has medium code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

             kandi has reviewed RayTracing and discovered the below as its top functions. This is intended to give you an instant insight into RayTracing's implemented functionality, and help you decide if it suits your requirements.
            • Report the efficiency of a FieldStop
            • Returns the marginal rays of the aperture
            • Return the effective focal lengths
            • Return the NaN of the axis
            • The axis of the histogram
            • Traces a list of rays
            • Append a Ray
            • Display the progress bar
            • Display the intensity histogram
            • Create a histogram of the ray angles
            • Display the figure
            • Traces multiple rays in parallel
            • Append a matrix
            • The effective length of the front vertex
            • Set the design parameters
            • Calculates the effective focal lengths
            • Displays the matrix
            • The effective length of the vertices
            • Generate example of the function
            • Compute the efficiency of a FieldStop
            • Traces a list of rays through the ray
            • Append a matrix to the matrix
            • Displays this figure with the given diameter
            • Generate an illumination path
            • Multiply the angle of the right axis
            • Check if the latest version of a PyPi org
            • Calculate the illumination path
            • Trace multiple rays in parallel
            • Display this figure with the given diameter
            • Displays the figure
            • Displays a list of rays
            • True if the B is an Imaging
            • The list of graphic elements of the matrix
            • Returns the resolution of the image
            • Set silent mode

            RayTracing Key Features

            No Key Features are available at this moment for RayTracing.

            RayTracing Examples and Code Snippets

             RayTracing: Getting started
             Python | Lines of Code: 55 | License: Permissive (MIT)
            from raytracing import *
            
            python -m raytracing -l           # List examples
            python -m raytracing -e all       # Run all of them
            python -m raytracing -e 1,2,4,6   # Only run 1,2,4 and 6
            
            python -m raytracing -h
            
            from raytracing import *
            
            path = Imagin  
             RayTracing: Documentation
             Python | Lines of Code: 52 | License: Permissive (MIT)
            python
            >>> help(Matrix)
            Help on class Matrix in module raytracing.abcd:
            
            class Matrix(builtins.object)
             |  Matrix(A, B, C, D, physicalLength=0, apertureDiameter=inf, label='')
             |  
             |  A matrix and an optical element that can transform a ray  
             RayTracing: Examples
             Python | Lines of Code: 21 | License: Permissive (MIT)
            All example code on your machine is found at: /somedirectory/on/your/machine
             1. ex01.py A single lens f = 50 mm, infinite diameter
             2. ex02.py Two lenses, infinite diameters
             3. ex03.py Finite-diameter lens
             4. ex04.py Aperture behind lens acting as  

            Community Discussions

            QUESTION

            GLSL : Why are my calculated normals not working properly
            Asked 2021-Dec-24 at 16:17

             I am trying to follow the Ray Tracing in One Weekend tutorial and my normals do not look like I expect them to.

            ...

            ANSWER

            Answered 2021-Dec-24 at 16:17

            QUESTION

             Has anyone encountered this striping artifact during "Ray Tracing in One Weekend"?
            Asked 2021-Dec-14 at 22:29

             I am trying to port "Ray Tracing in One Weekend" to a Metal compute shader. I encountered these stripe artifacts in my project:

            Is it because my random generator does not work well?

            Does anyone have a clue?

            ...

            ANSWER

            Answered 2021-Aug-30 at 02:43

             I found this link. It says that the primary ray hit point is either above or below the sphere's surface a little bit due to floating-point precision error. It is a z-fighting problem.
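             A hedged aside, not part of the original answer: the usual remedy in "Ray Tracing in One Weekend" is to ignore intersections closer than a small epsilon, so a secondary ray cannot immediately re-hit the surface it just left because of that precision error. A rough Python sketch of that test (the intersect() helper is hypothetical):

             T_MIN = 1e-3   # ignore hits closer than this to avoid self-intersection (the striping/"acne")

             def closest_hit(origin, direction, spheres):
                 best = None
                 for s in spheres:
                     t = s.intersect(origin, direction)   # hypothetical helper: hit distance or None
                     if t is not None and t > T_MIN and (best is None or t < best[0]):
                         best = (t, s)
                 return best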

            Source https://stackoverflow.com/questions/68888277

            QUESTION

            Reverse-bit iteration in 2D
            Asked 2021-Nov-09 at 11:50

             I use this reverse-bit method of iteration for rendering tasks in one dimension. The goal is to iterate through an array with the bits of the iterator reversed, so that instead of computing the array slowly from left to right, the order is spread out. I use this, for instance, when rendering the graph of a 1D function: because this reversed-bit iteration first computes values at well-spaced intervals, a representative image appears after only a very small fraction of all the values are computed.

             So after only a partial rendering we already have a good idea of how the final graph will look. Now I want to apply the same principle to 2D rendering (think ray tracing and such); the idea is to have a good overall view of the image being rendered even at an early stage. The problem is that making the same idea work as a 2D iteration isn't trivial.

            Here's how I do it in 1D:

            ...

            ANSWER

            Answered 2021-Nov-07 at 14:17

             Reversing the bits achieves the expected effect in 1D. You could combine this shuffling technique with another one where you get the x and y coordinates by selecting the even and odd bits, respectively, of the resulting number. Combining both methods in a single shuffle is highly desirable to avoid costly bit-twiddling operations.

             You could also use Gray codes to shuffle values with n significant bits into a pseudo-random order. Here is a trivial function to produce Gray codes:
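             The answer's original code is not preserved on this page; the following is only a sketch, under the assumption that the intended combination is bit reversal plus even/odd bit de-interleaving, with a trivial binary-to-Gray-code conversion alongside:

             def bit_reverse(i, nbits):
                 r = 0
                 for _ in range(nbits):
                     r = (r << 1) | (i & 1)
                     i >>= 1
                 return r

             def deinterleave(i):
                 x = y = 0
                 bit = 0
                 while i:
                     x |= (i & 1) << bit          # even bits -> x
                     y |= ((i >> 1) & 1) << bit   # odd bits  -> y
                     i >>= 2
                     bit += 1
                 return x, y

             def gray(i):
                 return i ^ (i >> 1)              # trivial binary-to-Gray-code conversion

             # Example: visit a 16x16 grid in a spread-out order
             order = [deinterleave(bit_reverse(i, 8)) for i in range(256)]
             print(order[:4])                     # (0, 0), (0, 8), (8, 0), (8, 8), ...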

            Source https://stackoverflow.com/questions/69872903

            QUESTION

            Get Grid Cells Overlapping Parabola
            Asked 2021-Jul-05 at 20:03

            How do I find all pixels overlapping a parabola (supercover) in an interval defined by two points on the parabola efficiently? All coordinates are integers, grid cells are 1x1 in size. The parabola is given by f(x) = ax^2 + bx where a and b are known (assume c = 0)

            I found this implementation of finding all grid cells overlapping a line. How could it be adapted to use a parabola? http://playtechs.blogspot.com/2007/03/raytracing-on-grid.html

            ...

            ANSWER

            Answered 2021-Jul-05 at 17:37

             Start from the initial point, then at every step check which cell will be intersected next. This task assumes that pixels are not sizeless points but cells of a defined size S. The initial cell has coordinates (0, 0).

             Using the equation of the parabola, we calculate the intersection of the parabola with the vertical line x = S.
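             The rest of the answer's worked steps are not reproduced here. Purely as a rough illustration of the column-by-column idea with cell size S = 1, the sketch below marks every integer cell the curve passes through in each unit column (the function name is made up for the example):

             import math

             def parabola_supercover(a, b, x0, x1):
                 f = lambda x: a * x * x + b * x
                 cells = set()
                 for cx in range(math.floor(x0), math.ceil(x1)):
                     ys = [f(cx), f(cx + 1)]                     # heights where the curve enters/leaves the column
                     xv = -b / (2 * a) if a != 0 else None       # vertex x, where the parabola turns
                     if xv is not None and cx < xv < cx + 1:
                         ys.append(f(xv))
                     for cy in range(math.floor(min(ys)), math.floor(max(ys)) + 1):
                         cells.add((cx, cy))
                 return cells

             print(sorted(parabola_supercover(a=0.1, b=0.0, x0=0, x1=5)))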

            Source https://stackoverflow.com/questions/68258125

            QUESTION

             Why does my metal material look completely black on meshes?
            Asked 2021-Jun-28 at 10:42

            I was working on my ray tracer written in C++ following this series of books: Ray Tracing in One Weekend. I started working a little bit on my own trying to implement features that weren't described in the book, like a BVH tree builder using SAH, transforms, triangles and meshes.

            NOTE: The BVH implementation is based on two main resources which are this article: Bounding Volume Hierarchies and C-Ray (A ray tracer written in C).

            After I implemented all of that I noticed that there was some weirdness while trying to use some materials on meshes. For example, as the title says, the metal material looks completely black:

             In the first image you can see how the metal material should look, and in the second one you can see how it looks on meshes.

            I spent a lot of time trying to figure out what the issue was but I couldn't find it and I couldn't find a way of tracking it.

            If you want to take a look at the code for more clarity the ray tracer is on GitHub at https://github.com/ITHackerstein/RayTracer. The branch on which I'm implementing meshes is meshes.

             To replicate my build environment I suggest you follow these build instructions:

             $ git clone https://github.com/ITHackerstein/RayTracer
             $ cd RayTracer
             $ git checkout meshes
             $ mkdir build
             $ cd build
             $ cmake ..
             $ make
             $ mkdir tests

             At this point you're almost ready to go, except you need the TOML scene file and the OBJ file I'm using, which are these two:

             Download them and place them in build/tests. After that, make sure you are in the build folder and run it using the following command:

            $ ./RayTracer tests/boh.toml

             After it finishes running you should have a tests/boh.ppm file, which is the resulting image stored in PPM format. If you don't have software that lets you open it, there are multiple viewers online.

            NOTE: My platform is Linux, I didn't test it on Windows or Mac OS.

            EDIT

            Does the mesh work with other materials?

             So as you can see in the first image, and especially in the second one, we have some darker rectangular spots, and also the lighting seems kind of messed up. In the third image you get an idea of how it works on a normal primitive.

            ...

            ANSWER

            Answered 2021-Jun-28 at 10:42

            I finally figured it out thanks to the tips that @Wyck gave me. The problem was in the normals, I noticed that the Metal::scatter method received a normal that was almost zero. So that's why it was returning black. After some logging, I found out that the Instance::intersects_ray method was not normalizing the transformed normal vector, and that's what caused the issue. So, in the end, the solution was simpler than I thought it would be.
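             As a hedged illustration of why an un-normalized (near-zero) normal turns a Ray-Tracing-in-One-Weekend-style metal material black, the Python sketch below mimics the scatter test; metal_scatters() is an illustrative stand-in, not the project's code:

             import numpy as np

             def reflect(v, n):
                 return v - 2.0 * np.dot(v, n) * n       # mirror reflection about n (n should be unit length)

             def metal_scatters(incoming, n):
                 reflected = reflect(incoming / np.linalg.norm(incoming), n)
                 return np.dot(reflected, n) > 0         # RTiOW-style test: absorb (render black) if it fails

             v = np.array([1.0, -1.0, 0.0])
             n_raw = np.array([0.0, 1e-6, 0.0])          # transformed normal, not re-normalized
             n_unit = n_raw / np.linalg.norm(n_raw)      # the fix: normalize after the transform

             print(metal_scatters(v, n_raw))    # False -> every bounce absorbed, mesh renders black
             print(metal_scatters(v, n_unit))   # True  -> metal scatters as expected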

            Source https://stackoverflow.com/questions/68133817

            QUESTION

            Three.js Camera always undefined when setting up ray tracing
            Asked 2021-May-26 at 14:20

             I have created a three.js element to show a number of points on the screen, and I have been tasked with clicking on two and calculating the distance between them. I am doing this in an Angular (8) app and have all the points visible and mouse events (pointerup/down) set up correctly. My idea is to raycast from the mouse point when clicked and highlight a vertex (I only have points, no lines or faces). So I have attempted to set up Three.js raycasting on my scene, but every time I call setFromCamera the camera is undefined, even though I still have the points visible on the screen at all times.

            ...

            ANSWER

            Answered 2021-May-26 at 14:20

             It appears that I need arrow functions to set up the event listener, otherwise this is local to the function.

            Source https://stackoverflow.com/questions/67706101

            QUESTION

            What is the difference between ray tracing, ray casting, ray marching and path tracing?
            Asked 2021-May-02 at 08:31

             As far as I know, all the techniques mentioned in the title are rendering algorithms that seem quite similar. All ray-based techniques seem to revolve around casting rays through each pixel of an image, which are supposed to represent rays of real light. This allows rendering very realistic images.

             As a matter of fact, I am making a simple program that renders such images myself, based on Ray Tracing in One Weekend.

            Now the thing is that I wanted to somehow name this program. I used the term “ray tracer” as this is the one used in the book.

             I have heard a lot of different terms, however, and I would be interested to know what exactly the difference is between ray tracing, ray casting, ray marching and path tracing, and potentially any other common ray-related algorithms. I was able to find some comparisons of these techniques online, but they all compared only two of them and some definitions overlapped, so I wanted to ask this question about all four techniques.

            ...

            ANSWER

            Answered 2021-May-02 at 08:31

            My understanding of this is:

            1. ray cast

              uses a raster image to hold the scene, usually stops at the first hit (no reflections or ray splitting), and does not necessarily cast a ray per pixel (often one per row or column of the screen). The 3D version of this is called voxel-space ray casting; however, the map is not a voxel space, instead two raster images (RGB, Height) are used.

              For more info see:

            2. (back) ray trace

              This usually follows the physical properties of light, so rays split into reflected and refracted rays, and we usually stop after some number of hits. The scene is represented either with BR meshes or with analytical equations, or both.

              for more info see:

              The "back" means we cast the rays from the camera into the scene (on a per-pixel basis) instead of from the light source to everywhere ... this speeds up the process a lot at the cost of wrong lighting (but that can be remedied with additional methods on top of this)...

             The other terms I am not so sure about, as I do not use those techniques (at least not knowingly):

            1. path tracing

              is an optimization technique to avoid recursive ray splitting in ray tracing, using a Monte Carlo (stochastic) approach. So it really does not split the ray but chooses randomly between the two options (similarly to how photons behave in the real world), and more rendered frames are then blended together.

            2. ray marching

              is an optimization technique to speed up ray tracing by using an SDF (signed distance function) to determine a safe advance along the ray so it does not hit anything. But it is confined to analytical scenes (see the sketch after this list).
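             As a minimal sketch of the ray-marching idea described above (sphere tracing against a single analytical SDF; all names are illustrative, not from any particular renderer):

             import math

             def sdf_sphere(p, center=(0.0, 0.0, 5.0), radius=1.0):
                 return math.dist(p, center) - radius     # signed distance from point p to the sphere surface

             def ray_march(origin, direction, max_steps=100, eps=1e-4, max_dist=100.0):
                 t = 0.0
                 for _ in range(max_steps):
                     p = tuple(o + t * d for o, d in zip(origin, direction))
                     d = sdf_sphere(p)
                     if d < eps:          # close enough: we hit the surface
                         return t
                     t += d               # safe advance: nothing can be closer than d
                     if t > max_dist:
                         break
                 return None              # no hit

             print(ray_march((0.0, 0.0, 0.0), (0.0, 0.0, 1.0)))   # ~4.0 for a sphere at z = 5 with r = 1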

            Source https://stackoverflow.com/questions/67347177

            QUESTION

            DXR Descriptor Heap management for raytracing
            Asked 2021-Apr-21 at 05:12

            After watching videos and reading the documentation on DXR and DX12, I'm still not sure how to manage resources for DX12 raytracing (DXR).

             There is quite a difference between rasterizing and ray tracing in terms of resource management, the main difference being that rasterizing has a lot of temporary resources that can be bound on the fly, while ray tracing needs all resources to be ready to go at the time of casting rays. The reason is obvious: a ray can hit anything in the whole scene, so we need to have every shader, every texture, every heap ready and filled with data before we cast a single ray.

            So far so good.

             My first test was adding all resources to a single heap, based on some DXR tutorials. The problem with this approach arises with objects having the same shaders but different textures. I defined one shader root signature for my single hit group, which I had to prepare before ray tracing. But when creating a root signature, we have to tell exactly which position in the heap corresponds to the SRV where the texture is located. Since there are many textures with different positions in the heap, I would need to create one root signature per object with different textures. This of course is not preferred, since based on the documentation and common sense, we should keep the number of root signatures as small as possible. Therefore, I discarded this test.

             My second approach was creating a descriptor heap per object, which contained all local descriptors for this particular object (textures, constants, etc.). The global resources (the TLAS (top-level acceleration structure), the output, and the camera constant buffer) were kept in a separate heap. In this approach, I think I misunderstood the documentation by thinking I could add multiple heaps to a root signature. As I'm writing this post, I could not find a way of adding two separate heaps to a single root signature. If this is possible, I would love to know how, so any help is appreciated.

             Here is the code I'm using for my root signature (using DX12 helpers):

            ...

            ANSWER

            Answered 2021-Jan-20 at 10:23

            Dynamic indexing of HLSL 5.1 might be the solution to this issue.

            https://docs.microsoft.com/en-us/windows/win32/direct3d12/dynamic-indexing-using-hlsl-5-1

            • With dynamic indexing, we can create one heap containing all materials and use an index per object that will be used in the shader to take the correct material at run time
            • Therefore, we do not need multiple heaps of the same type, since it's not possible anyway. Only 1 heap per heap type is allowed at the same time

            Source https://stackoverflow.com/questions/65794461

            QUESTION

            Is this "possibly-uninitialized" compiler error a false alarm? [rustc 1.51.0]
            Asked 2021-Apr-05 at 00:10

            I was hit with the possibly-uninitialized variable error, although I'm convinced that this should never be the case.
            (The rustc --version is rustc 1.51.0 (2fd73fabe 2021-03-23))

            ...

            ANSWER

            Answered 2021-Apr-05 at 00:10

            You're not the first person to see this behavior, and generally I would not consider it a bug. In your case, the condition is very simple and obvious, and it's easy for you and me to reason about this condition. However, in general, the condition does not need to be obvious and, for example, the second case could have additional conditions that are hard to reason about, even though the code is correct.

            Moreover, the kind of analysis you want the compiler to do (determine which code is reachable based on conditions) is usually only done when the compiler is optimizing. Therefore, even if the compiler had support for this kind of analysis, it probably wouldn't work in debug mode. It also doesn't work in all cases even in the best optimizing compilers and static analysis tools.

            If you have a philosophical objection to initializing a dummy value in this case, you can use an Option:

            Source https://stackoverflow.com/questions/66946858

            QUESTION

             What are the normal methods for achieving texture mapping with ray tracing?
            Asked 2021-Feb-10 at 12:28

             When you create a BLAS (bottom-level acceleration structure) you specify any number of vertex/index buffers to be part of the structure. How does that end up interacting with the shader and getting specified in the descriptor set? How should I link these structures with materials?

            How is texture mapping usually done with raytracing? I saw some sort of "materials table" in Q2RTX but the documentation is non-existent and the code is sparsely commented.

            ...

            ANSWER

            Answered 2021-Feb-10 at 12:28

             A common approach is to use a material buffer in combination with a texture array that is addressed in the shaders where you require the texture data. You then pass the material id, e.g. per-vertex or per-primitive, and use that to dynamically fetch the material, and with it the texture index. Due to the requirements of Vulkan ray tracing, you can simplify this by using the VK_EXT_descriptor_indexing extension (Spec), which makes it possible to create a large descriptor set containing all textures required to render your scene.

            The relevant shader parts:
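             The answer's actual shader code is not included above. Purely as a conceptual stand-in, the Python sketch below shows the indirection being described: a per-primitive material id indexes a material table, which in turn holds an index into the texture array (all names are made up for illustration):

             materials = [
                 {"base_color_texture": 0, "roughness": 0.4},   # material 0
                 {"base_color_texture": 2, "roughness": 0.9},   # material 1
             ]
             textures = ["brick.png", "wood.png", "metal.png"]   # stands in for the bindless texture array
             primitive_material_id = [0, 0, 1]                   # one material id per primitive

             hit_primitive = 2                                    # the primitive the ray hit
             mat = materials[primitive_material_id[hit_primitive]]
             texture = textures[mat["base_color_texture"]]        # -> "metal.png"
             print(texture)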

            Source https://stackoverflow.com/questions/66129997

             Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install RayTracing

             The simplest way to import the package in your own scripts after installing it is from raytracing import *. This imports Ray, GaussianBeam, and several Matrix elements such as Space, Lens, ThickLens, Aperture and DielectricInterface, but also MatrixGroup (to group elements together), ImagingPath (to ray trace with an object at the front edge), LaserPath (to trace a Gaussian laser beam from the front edge) and a few predefined others such as Objective (to create a very thick lens that mimics an objective). You create an ImagingPath or a LaserPath, which you then populate with optical elements such as Space, Lens or Aperture, or with vendor lenses. You can then adjust the path properties (object height in ImagingPath, for instance, or inputBeam for LaserPath) and display it with matplotlib. You can create a group of elements with MatrixGroup, for instance a telescope, a retrofocus or any group of optical elements you would like to treat as a "group". The Thorlabs and Edmund Optics lenses, for instance, are defined as MatrixGroups.
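             As a minimal sketch of that workflow, using the element names listed above (the focal lengths, diameters and distances are only illustrative):

             from raytracing import *

             # Build a simple imaging path from the elements named above
             path = ImagingPath()
             path.append(Space(d=50))
             path.append(Lens(f=50, diameter=25))
             path.append(Space(d=120))
             path.append(Lens(f=70))
             path.append(Space(d=100))
             path.display()   # shows the rays and apertures with matplotlib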

            Support

            All the documentation is available online.
            Find more information at:

            Install
          • PyPI

            pip install raytracing

          • CLONE
          • HTTPS

            https://github.com/DCC-Lab/RayTracing.git

          • CLI

            gh repo clone DCC-Lab/RayTracing

          • sshUrl

            git@github.com:DCC-Lab/RayTracing.git
