optix | Build flexible, self-documenting command-line interfaces
kandi X-RAY | optix Summary
Optix is a lightweight framework to build flexible, self-documenting command line interfaces with a minimum amount of code. It supports nested sub-commands (e.g. git remote show), auto-generates help-screens and provides a wide range of argument-types and validations.
Top functions reviewed by kandi - BETA
- Parse command-line arguments
- Define a command
- Iterate over the parsed arguments
- Resolve options for a specific option
- Display a command
- Add arguments to a command
- Print a debug command
- Run a command
- Run the CLI command
- Send a shell command
optix Key Features
optix Examples and Code Snippets
Community Discussions
Trending Discussions on optix
QUESTION
I need to do the following steps on two columns -A and B- of my df and output the result in C:
...ANSWER
Answered 2021-Sep-28 at 07:43
Here's a solution using regular expressions, assuming that df is the name of the dataframe.
The idea is simple: if B has a match in A, replace it with B's value; otherwise return the string B + A.
QUESTION
I am trying to remove duplicate rows, but I need to keep strings with length <= 2 as well as integers.
I have a sentence like this:
...ANSWER
Answered 2021-Sep-27 at 21:03
Here you go. Not the best code, but it gets the job done: it passes the test.
QUESTION
I'm studying the OptixTriangle example of OptiX 7.3. However, the meaning of the parameters of optixTrace is not clear:
ANSWER
Answered 2021-May-19 at 08:33
In OptixTriangle, the variables p0, p1 and p2 carry the color calculated by the closest-hit program back to the ray-generation program. OptiX calls this concept the ray payload. It's a mechanism to attach up to 8 values to a ray and pass them along the program pipeline, where each program being called can read and write the payload values. More on this is in the OptiX Programming Guide.
In case a triangle primitive is hit (as in OptixTriangle), you have to obtain the triangle coordinates from the acceleration structure and apply some vector algebra to them to calculate the normal at one of the vertices. The normal at the hit point is the same as at any triangle vertex: they all share the same plane. To get the hit-point coordinates anyway, the OptiX API provides the barycentric coordinates of the intersection to the hit programs via primitive attributes.
To translate this into code, there is no way around a deeper understanding of OptiX 7. A good point to start is "How to Get Started with OptiX 7", as it walks you step by step through the OptixTriangle example. To follow up, visit the GitHub repo with the Siggraph 2019/2020 OptiX 7/7.3 Course Tutorial Code; walking through its examples eases the steep learning curve. The 5th example therein shows the normal calculation.
QUESTION
I am wondering if it is possible in Cuda or Optix to accelerate the computation of the minimum and maximum value along a line/ray casted from one point to another in a 3D volume.
If not, is there any special hardware on Nvidia GPU's that can accelerate this function (particularly on Volta GPUs or Tesla K80's)?
...ANSWER
Answered 2021-Apr-22 at 16:25
The short answer to the title question is: yes, hardware-accelerated ray casting is available in CUDA & OptiX. The longer question has multiple interpretations, so I'll try to outline the different possibilities.
The different axes of your question that I'm seeing are: CUDA vs OptiX, pre-RTX GPUs vs RTX GPUs (e.g., Volta vs Ampere), min ray queries vs max ray queries, and possibly surface representations vs volume representations.
pre-RTX vs RTX GPUs:
To perhaps state the obvious, a K80 or a GV100 GPU can be used to accelerate ray casting compared to a CPU, due to the highly parallel nature of the GPU. However, these pre-RTX GPUs don't have any hardware specifically dedicated to ray casting. There are bits of somewhat special-purpose hardware, not dedicated to ray casting, that you could probably leverage in various ways, so it's up to you to identify and design these kinds of hardware-acceleration hacks.
The RTX GPUs starting with the Turing architecture do have specialized hardware dedicated to ray casting, so they accelerate ray queries even further than the acceleration you get from using just any GPU to parallelize the ray queries.
CUDA vs OptiX:
CUDA can be used for parallel ray tracing on any GPUs, but it does not currently (as I write this) support access to the specialized RTX hardware for ray tracing. When using CUDA, you would be responsible for writing all the code to build an acceleration structure (e.g. BVH) & traverse rays through the acceleration structure, and you would need to write the intersection and shading or hit-processing programs.
OptiX, Direct-X, and Vulkan all allow you to access the specialized ray-tracing hardware in RTX GPUs. By using these APIs, one can achieve higher speeds with lower power requirements, and they also require much less effort because the intersections and ray traversal through an acceleration structure are provided for you. These APIs also provide other commonly needed features for production-level ray casting, things like instancing, transforms, motion blur, as well as a single-threaded programming model for processing ray hits & misses.
Min vs Max ray queries:
OptiX has built-in functionality to return the surface intersection closest to the ray origin, i.e. a "min query". OptiX does not provide a similar single query for the furthest intersection (which is what I assume you mean by "max"). To find the maximum-distance hit, or the closest hit to a second point on your ray, you would need to process multiple hits and keep track of the one you want.
In CUDA you're on your own for detecting both min and max queries, so you can do whatever you want as long as you can write all the code.
Surfaces vs Volumes:
Your question mentioned a "3D volume", which has multiple meanings, so just to clarify things:
OptiX (+ DirectX + Vulkan) are APIs for ray tracing of surfaces, for example triangle meshes. The RTX specialty hardware is dedicated to accelerating ray tracing of surface-based representations.
If your "3D volume" is referring to a volumetric representation such as voxel data or a tetrahedral mesh, then surface-based ray tracing might not be the fastest or most appropriate way to cast ray queries. In this case, you might want to use "ray marching" techniques in CUDA, or look at volumetric ray casting APIs for GPUs like NanoVDB.
QUESTION
I'm trying to learn how to implement OptiX in my C++ project. One of the first steps is to get the current CUDA context with cuCtxGetCurrent(&some_CUcontext_variable); however, I'm getting a compile-time error saying that I've made an undefined reference to cuCtxGetCurrent.
Here's what I have:
- I'm following code from this repo to learn about OptiX and I'm on example 2 (where you get the CUDA context).
- In my code (main.cpp) I have included cuda_runtime.h, device_launch_parameters.h, optix.h, and optix_stubs.h, but I'm still getting the error at compile time.
- Interestingly, my IDE, JetBrains' CLion, is not showing any undefined-reference errors/warnings inline. Errors only show up when I compile.
- In my CMakeLists.txt, I've used find_package(CUDAToolkit REQUIRED) to get CUDA. I then used target_link_libraries( ... CUDA::cudart) to link in CUDA.
I believe this error is linker-related, so I assume I'm missing something in my CMakeLists, but I don't know what. Please let me know how I can fix this issue!
Thank you in advance for your help!
Update #2: Solved
It's moments like this that make me pull my hair out: all I had to do was literally put cuda in my target link libraries. Not -lcuda or CUDA::cuda, just cuda. Somehow that linked in the driver and it looks to be compiling now.
Sorry for the lack of code in my original post. I was trying to avoid pasting large chunks of arbitrary code.
...ANSWER
Answered 2021-Feb-19 at 21:01
As @talonmies notes, CUDA has two (official) host-side APIs: the "CUDA Runtime API" and the "CUDA Driver API"; you can read about the difference between them here.
You have mentioned files and CMake identifiers relating to the Runtime API: cuda_runtime.h, CUDA::cudart. But "CUDA contexts" are concepts of the Driver API, and cuCtxGetCurrent() etc. are Driver API calls.
Specifically, an "undefined reference" is indeed a linker error. In your case, you need to link with the CUDA driver. As a library, on Linux systems, that's called libcuda.so. For that to happen, with your executable named ERPT_Render_Engine, you need to add the command:
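Assuming the find_package(CUDAToolkit) setup from the question, the missing link step would look something like this. CUDA::cuda_driver is the CMake target that FindCUDAToolkit provides for libcuda.so; per the asker's update, a plain cuda also resolves to the same library.

```cmake
# Link the CUDA driver library (libcuda.so) in addition to the runtime.
target_link_libraries(ERPT_Render_Engine PRIVATE CUDA::cudart CUDA::cuda_driver)
```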
QUESTION
There is a problem with running a Unity project on a Hyper-V virtual machine. To make a long story short, my Unity project works on my PC, but doesn't work on a VM. I described this in detail here:
https://stackoverflow.com/q/65550732/5709159.
I found a crash log where Unity wrote everything. Because there is a restriction on the number of characters that I can post on Stack Overflow, I uploaded the full file here: https://drive.google.com/file/d/1xAtTUytNGH7WFSSIr8WGotCDrvQKW9f-/view; here I just posted the last part of this file:
...ANSWER
Answered 2021-Jan-13 at 06:17
The execution fails when it tries to enable or access a dedicated graphics card/driver:
QUESTION
I'm trying to make an Angular web component by following this guide: https://www.codementor.io/blog/angular-web-components-8e4n0r0zw7
It works, but when I have to inject services into my component, nothing works anymore: if I write (private service: DataService) in the constructor, I get a blank screen and no console errors.
ANSWER
Answered 2020-Nov-13 at 18:17
// Store the injector passed to the constructor so that
// ngDoBootstrap can hand it to createCustomElement.
constructor(private injector: Injector) {}
ngDoBootstrap(): void {
  const el = createCustomElement(OptixCalendarComponent, { injector: this.injector });
  customElements.define('optix-calendar', el);
}
QUESTION
I would like to have a template class (e.g. of float/double type), but I am using Nvidia CUDA and OptiX and have multiple other types (e.g. float2, double2, float3, ...) that depend on the chosen template type.
Something like this:
...ANSWER
Answered 2020-Jul-07 at 07:44
Implement a meta-function using template specialization that maps standard C++ types to OptiX types with the desired "rank":
QUESTION
I am new to C++, so this question might be silly to you.
I am using the Network Optix video management service, and I am building a plugin on top of their application.
I am using the code snippet below to create a metadata object packet.
...ANSWER
Answered 2020-Apr-23 at 06:53
Seems pretty simple: you need to move push_back so that it is inside your loop, not after your loop.
Something like this
QUESTION
I'm working on a thermal application in Optix, I want to import a GLTF file and then start rays from each primitive.
I haven't fully understood the documentation regarding "front" and "back" for faces stored in glTF. Is there a correct way to calculate the outward-facing normal using the three vertices of a triangle and the cross product of its sides? If my code works as intended, I sometimes get an inward and sometimes an outward normal. Or is the order in which the vertices are stored arbitrary, so the orientation might invert? Also, for each face there are 3 "normals" stored, if I understand that correctly. If all my faces are flat/not curved, should I be able to use any of these without even having to compute anything?
Thanks for your help in advance!
...ANSWER
Answered 2020-Mar-09 at 18:55
In glTF, normals are stored per vertex. The normals for faces are typically calculated during rasterization (hardware) in most graphics pipelines, as a linear interpolation between the vertex normals of that face.
For what it's worth, glTF does specify a counter-clockwise winding order in primitive.indices. However, materials in glTF are allowed to be double-sided, and the specification for double-sided materials indicates that when viewing the back face of a surface, one must flip the normal before evaluating the lighting.
In practice, sometimes authoring tools produce polygon meshes that are inside-out, or even have thin-walled (non-manifold, or non-water-tight) geometry. In such cases it can be difficult to say what is "inside" vs. "outside" as they look the same, and the winding order may be arbitrary. (In Blender Edit Mode, click menu Mesh -> Normals -> Recalculate Outside to fix this).
For single-sided materials, the winding order should not be arbitrary, as the back sides are hidden from view. One would expect counter-clockwise winding order triangles when using those materials. In Blender, this can be done by selecting the "Eevee" engine, bringing up the material properties panel, and putting a checkmark on "Backface culling". Make sure this happens in the material settings, NOT in the viewport settings, otherwise the glTF exporter won't find it. Use "Material Preview" viewing mode to see the result.
If all my faces are flat/not curved, should I be able to use any of these without even having to compute anything?
Yes, if you have completely flat faces, I would expect the vertex normals within each face to be equal, such that linear interpolation would not produce any changes to the normal vector.
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install optix
On a UNIX-like operating system, using your system's package manager is easiest; however, the packaged Ruby version may not be the newest one. There is also an installer for Windows. Version managers help you switch between multiple Ruby versions on your system, while installers can be used to install a specific Ruby version or multiple versions. Please refer to ruby-lang.org for more information.
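Assuming Optix is published as the optix gem (its name suggests so, but verify on rubygems.org), installation with a working Ruby would be:

```shell
gem install optix
```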