Raycaster | :feelsgood: A simple raycasting experiment
kandi X-RAY | Raycaster Summary
A simple raycaster, written in C++. I learned a lot about the algorithm here. I wrote this a while back and am just keeping it around for archival purposes. It was the first "working" raycaster I made. Texture support is broken; if you want a better example, check out Rustcaster.
Community Discussions
Trending Discussions on Raycaster
QUESTION
I have set up a point cloud with the intention of changing a vertex colour when a point is clicked. I have worked out how to set up the vertex colours and the index required, but it just does not seem to add up: no colour ever changes, and I can't make sense of the index values I am getting.
...ANSWER
Answered 2021-Jun-02 at 14:51
I have figured this out now, and there were a couple of things interacting to mess it up for me.
- The position setup was good, and the correct index appears to be indexByThree (0, 3, 6 ...)
- After trying them both, I have finally used window.inner[Width | Height] for aspect etc.
- Setting the colour change through the geometry object seems to be the way to update correctly: this.geometry.attributes.color.setXYZ( ... ) works as I have three colour values
- Not forgetting to set needsUpdate where required
- Adjust the CSS so that the canvas is set to the full screen size
This last one was my last problem (causing the pick to change the colour of the wrong point) and was caused by mat-dialog-container having a CSS rule for padding set to 24px. The docs recommend not removing this, but it is the only way I have got this to work. I think the padding was included in the coords for picking, so it was always out by 24px when it selected the points.
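A minimal three.js sketch in TypeScript of the picking flow this answer describes (not the asker's code; the function and attribute names are illustrative). setXYZ is given the vertex index directly, since BufferAttribute applies the *3 offset internally; the indexByThree values above correspond to offsets into the raw colour array.

import * as THREE from 'three';

const raycaster = new THREE.Raycaster();
raycaster.params.Points = { threshold: 0.1 }; // world-units "fuzziness" for point picking

function pickAndColour(event: PointerEvent, camera: THREE.Camera, points: THREE.Points): void {
  // Convert the click to normalized device coordinates; using window.innerWidth/Height
  // assumes the canvas fills the window (hence the CSS/padding fix mentioned above).
  const ndc = new THREE.Vector2(
    (event.clientX / window.innerWidth) * 2 - 1,
    -(event.clientY / window.innerHeight) * 2 + 1
  );
  raycaster.setFromCamera(ndc, camera);

  const hits = raycaster.intersectObject(points);
  if (hits.length === 0 || hits[0].index === undefined) return;

  // Recolour the picked vertex and flag the attribute for re-upload.
  const colour = points.geometry.getAttribute('color') as THREE.BufferAttribute;
  colour.setXYZ(hits[0].index, 1, 0, 0); // three colour values per vertex
  colour.needsUpdate = true;             // needsUpdate is a property, not a method
}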
QUESTION
I tried to get the index of the particle I crossed with my mouse cursor while using raycaster. However, when I log intersects, it returns millions of objects even if I only hover over one point.
I could access the exact match only when I go with index zero of the intersects array.
And I have to pass params.points.threshold to get the intersects. For a test, I put 0.1 as an experiment, but I don't quite understand what it means.
Does anyone know the reason?
My code is as below.
Setting up the raycaster and passing in the threshold parameter.
...
ANSWER
Answered 2021-Jun-01 at 14:20
A Point has no volume in 3D space, even though it looks like it does when rendered. It is literally a single point in space with no dimensions.
The chances of an interactively-defined ray hitting a single point in space are very VERY small. So, you need a threshold to tell Raycaster how "fuzzy" you want the intersections to be against points. According to the docs, this threshold is in world units.
Now, based on that, your threshold should be small enough to pick points within the space you have defined, without returning a large number of them. But if that's not working, then I suggest adding a minimal reproducible example to your question, with far fewer points, standard materials, etc.
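To make the threshold concrete, here is a small TypeScript sketch (not the asker's code; pointCloud, camera and the 0.1 value are placeholders):

import * as THREE from 'three';

function pickClosestPoint(
  pointerNdc: THREE.Vector2,          // pointer position in NDC (-1..1)
  camera: THREE.Camera,
  pointCloud: THREE.Points
): number | undefined {
  const raycaster = new THREE.Raycaster();
  // The ray "hits" a point if it passes within `threshold` world units of it.
  // A value around half the spacing between points keeps the hit list short;
  // too large a value is what returns huge numbers of intersections.
  raycaster.params.Points = { threshold: 0.1 };

  raycaster.setFromCamera(pointerNdc, camera);
  // Intersections are sorted by distance, so element 0 is the closest point.
  const intersects = raycaster.intersectObject(pointCloud);
  return intersects.length > 0 ? intersects[0].index : undefined;
}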
QUESTION
I have been working on my raycaster engine for some time, and I run it on slower machines. The most challenging problem I have encountered was/is efficient floor and ceiling casting.
My question is: what other, faster approaches can I use? (I am not sure how Doom floors and ceilings are rendered.)
So far I have tried two typical solutions:
- vertical and horizontal casting, as described in the well-known lodev tutorial: https://lodev.org/cgtutor/raycasting2.html
The horizontal approach is of course much faster, but I additionally optimized it with fixed-point variables.
Unfortunately even that approach is a performance killer - quite a big fps drop even on faster CPUs, and on slower CPUs it's a bottleneck.
My other ideas:
- I figured out an algorithm that converts visible floor/ceiling map tiles to quads, which I split into two triangles and rasterize as in regular scanline rasterizers. It was much faster - I could also sort tiles by texture id to be more cache friendly. Unfortunately I ran into perspective-correct texture mapping in that case - to fix it I must add some divisions, which will lower the performance.. but there are also some optimizations that can be done..
- using horizontal casting with every 2nd ray (in column, row or both) - I would fill the blank spaces with averaged texture coords
- I could also try to combine my algorithm from point 1 with horizontal casting - I could sort the textures by ID then, for example; I think there would be no texture distortions
- mode 7?
My progress so far: https://www.youtube.com/watch?v=u3zA2Wh0NB4
EDIT (1):
The floor and ceiling rendering code (based on the lodev tutorial, horizontal approach), but optimized with fixed point. Ceiling calculations are mirrored from the floor.
https://lodev.org/cgtutor/raycasting2.html
This approach is faster than the vertical approach, but there are lots of calculations in the inner loop, and random access to texture pixels hurts performance..
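For reference, a minimal TypeScript sketch of the per-scanline (horizontal) floor/ceiling pass described above, in plain floating point rather than the fixed-point version; the posX/dirX/planeX naming follows the lodev tutorial, and the single shared power-of-two texture and framebuffer layout are simplifying assumptions.

function castFloorAndCeiling(
  fb: Uint32Array, screenW: number, screenH: number,
  tex: Uint32Array, texW: number, texH: number,
  posX: number, posY: number,
  dirX: number, dirY: number, planeX: number, planeY: number
): void {
  for (let y = (screenH >> 1) + 1; y < screenH; y++) {
    // Ray directions for the leftmost (x = 0) and rightmost (x = screenW) columns.
    const rayDirX0 = dirX - planeX, rayDirY0 = dirY - planeY;
    const rayDirX1 = dirX + planeX, rayDirY1 = dirY + planeY;

    const p = y - (screenH >> 1);     // rows below the horizon
    const posZ = 0.5 * screenH;       // camera height in screen space
    const rowDistance = posZ / p;     // one division per row, none per pixel

    // World-space step per screen column along this row.
    const stepX = rowDistance * (rayDirX1 - rayDirX0) / screenW;
    const stepY = rowDistance * (rayDirY1 - rayDirY0) / screenW;
    let floorX = posX + rowDistance * rayDirX0;
    let floorY = posY + rowDistance * rayDirY0;

    for (let x = 0; x < screenW; x++) {
      // Fractional part of the world position picks the texel inside the tile
      // (the & mask assumes a power-of-two texture size).
      const tx = ((texW * (floorX - Math.floor(floorX))) | 0) & (texW - 1);
      const ty = ((texH * (floorY - Math.floor(floorY))) | 0) & (texH - 1);
      floorX += stepX;
      floorY += stepY;

      const texel = tex[ty * texW + tx];
      fb[y * screenW + x] = texel;                   // floor
      fb[(screenH - y - 1) * screenW + x] = texel;   // ceiling, mirrored row
    }
  }
}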
...ANSWER
Answered 2021-May-27 at 10:11
I will refer to my ray cast engine, so here is some stuff that will help you understand it. Let's start with the class declarations:
QUESTION
I have created a three.js element to show a number of points on the screen, and I have been tasked with clicking on two and calculating the distance between them. I am doing this in an Angular (8) app and have all the points visible and mouse events (pointerup/down) set up correctly. My idea is to ray trace from the mouse point when clicked and highlight a vertex (I only have points, no lines or faces). So I have attempted to set up three.js ray tracing on my scene, but every time I call setFromCamera the camera is undefined, even though I still have the points visible on the screen at all times.
...ANSWER
Answered 2021-May-26 at 14:20
It appears that I need arrow functions to set up the event listener, otherwise this is local to the function.
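A minimal TypeScript sketch of that this-binding issue (names are illustrative, not the asker's Angular component):

import * as THREE from 'three';

class PointPicker {
  private camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
  private raycaster = new THREE.Raycaster();

  attach(canvas: HTMLCanvasElement): void {
    // BAD: a plain function gets its own `this`, so `this.camera` would be undefined:
    // canvas.addEventListener('pointerup', function (e) { this.raycaster.setFromCamera(...); });

    // GOOD: an arrow function captures the class instance's `this`.
    canvas.addEventListener('pointerup', (e: PointerEvent) => {
      const ndc = new THREE.Vector2(
        (e.clientX / window.innerWidth) * 2 - 1,
        -(e.clientY / window.innerHeight) * 2 + 1
      );
      this.raycaster.setFromCamera(ndc, this.camera); // `this` is the PointPicker instance
    });
  }
}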
QUESTION
So I have an event signature like this:
...ANSWER
Answered 2021-May-20 at 11:26
The delegate signature must match, so: you can't cheat here. If you genuinely need to unsubscribe later, you can store the delegate instance somewhere, i.e.
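The answer's C# snippet is not reproduced above; as a rough TypeScript analogue of the same idea (keep a reference to the exact handler you subscribed, otherwise you cannot unsubscribe it later):

// Keep a reference to the exact handler you registered; an identical-looking
// anonymous function is a different instance and will not unsubscribe.
const onResize = (): void => {
  console.log('resized to', window.innerWidth, window.innerHeight);
};

window.addEventListener('resize', onResize);
// ... later, when the component is torn down:
window.removeEventListener('resize', onResize); // works: same function instance
// window.removeEventListener('resize', () => { ... }) would silently do nothing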
QUESTION
My binary glTF file (modelled in Blender and animated using Mixamo) is not being detected on raycast. I have read a bunch of tutorials and questions about it to try to fix it, but it does not work whatsoever :(
...ANSWER
Answered 2021-May-11 at 09:11
The problem is that you are performing ray casting against a skinned mesh. And three.js is currently (r128) not able to compute proper bounding volumes for this type of 3D object. Bounding volumes however are important for ray casting since they are used to detect early outs.
The workaround for this issue is to manually define bounding volumes so they properly enclose the skinned mesh. I suggest you traverse through gltf.scene and set the boundingSphere and boundingBox property of the skinned mesh's geometry.
More information at GitHub here: https://github.com/mrdoob/three.js/pull/19178
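A minimal TypeScript sketch of that workaround (the 'model.glb' path and the padding amounts are placeholders, not values from the answer):

import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

new GLTFLoader().load('model.glb', (gltf) => {
  gltf.scene.traverse((obj) => {
    const mesh = obj as THREE.SkinnedMesh;
    if (!mesh.isSkinnedMesh) return;

    // Start from the rest-pose bounds, then enlarge them so the animated
    // (deformed) mesh stays inside the volumes used for the ray-cast early-out.
    mesh.geometry.computeBoundingBox();
    mesh.geometry.computeBoundingSphere();
    mesh.geometry.boundingBox!.expandByScalar(1.0); // padding in world units (assumed)
    mesh.geometry.boundingSphere!.radius *= 1.5;    // likewise an assumed factor
  });
});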
QUESTION
I created a class that helps me show a 3D model using A-Frame. In this class, there are some spheres created at runtime and inserted into the scene. I'm trying to add an event listener (I have to show a message when those spheres are clicked).
Here is the code:
...ANSWER
Answered 2021-May-06 at 16:40
Using the setAttribute("pointer-handler", "") approach is absolutely valid and a correct way of doing what you want to achieve.
I think it may be the click event that's causing problems. I suggest you replace it with mouseup and mousedown events.
Also make sure you can indeed fire the events - you need a cursor attached to your camera.
Working example - run in full screen (button in the top right corner after running the snippet) to see the whole scene; close is also in the top right.
You can add new elements by pressing the button in the top left. See how click is sometimes fired and sometimes not at all.
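A minimal TypeScript sketch of such a component (the component name follows the answer; the logged messages and the cursor markup are illustrative, and the global AFRAME build is assumed to be loaded):

declare const AFRAME: any; // provided by the A-Frame script tag

AFRAME.registerComponent('pointer-handler', {
  init(this: any) {
    // mousedown/mouseup from the cursor tend to be more reliable than click here.
    this.el.addEventListener('mousedown', () => console.log('sphere pressed'));
    this.el.addEventListener('mouseup', () => console.log('sphere released'));
  },
});

// The scene also needs a cursor that actually casts the rays, e.g. on the camera:
// <a-camera><a-cursor raycaster="objects: .clickable"></a-cursor></a-camera>
// and each runtime-created sphere gets: sphere.setAttribute('pointer-handler', '');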
QUESTION
I have this piece of script that I made, and I get this error, and I don't understand why.
...ANSWER
Answered 2021-Apr-29 at 12:40
The problem is the short-circuit evaluation of C#. When terms are combined with || (OR), the second term is only evaluated if the first one returns false, because otherwise the outcome is known without evaluating it.
Therefore, change the code to:
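(The answer's actual C# change is not included above. As a generic TypeScript illustration of the short-circuit rule it describes, with made-up names:)

// With ||, the right-hand side only runs when the left-hand side is false,
// so put the guarding condition first.
function isMissingOrEmpty(hit: { name: string } | null): boolean {
  // If `hit === null` is true, `hit.name.length === 0` is never evaluated,
  // which is exactly what prevents a null dereference here.
  return hit === null || hit.name.length === 0;
}

console.log(isMissingOrEmpty(null));             // true, right side skipped
console.log(isMissingOrEmpty({ name: 'door' })); // false, right side evaluated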
QUESTION
I made a voxel raycaster in Unity using a compute shader and a texture. But at 1080p, it is limited to a view distance of only 100 at 30 fps. With no light bounces yet or anything, I am quite disappointed with this performance.
I tried learning Vulkan, and the best tutorials are based on rasterization; I guess all I really want to do is compute pixels in parallel on the GPU. I am familiar with CUDA, and I've read that it is sometimes used for rendering. Or is there a simple way of just computing pixels in parallel in Vulkan? I've already got a template Vulkan project that opens a blank window. I don't need to get any data back from the GPU, just render straight to the screen after giving it data.
And with the code below, would it be significantly faster in Vulkan as opposed to a Unity compute shader? It has A LOT of if/else statements in it, which I have read is bad for GPUs, but I can't think of any other way of writing it.
EDIT: I optimized it as much as I could but it's still pretty slow, like 30 fps at 1080p.
Here is the compute shader:
...ANSWER
Answered 2021-Apr-04 at 10:11
A compute shader is what it is: a program that runs on a GPU, be it in Vulkan or in Unity, so you are doing it in parallel either way. The point of Vulkan, however, is that it gives you more control over the commands being executed on the GPU - synchronization, memory, etc. So it's not necessarily going to be faster in Vulkan than in Unity. So, what you should do is actually optimise your shaders.
Also, the main problem with if/else is divergence within groups of invocations which operate in lock-step. So, if you can avoid it, the performance impact will be far lessened. These may help you with that.
If you still want to do all that in Vulkan...
Since you are not going to do any of the triangle rasterisation, you probably won't need the renderpasses or graphics pipelines that the tutorials generally show. Instead you are going to need a compute shader pipeline. Those are far simpler than graphics pipelines, only requiring one shader and the pipeline layout (the inputs and outputs are bound via descriptor sets).
You just need to pass the swapchain image to the compute shader as a storage image in a descriptor (and of course any other data your shader may need; all of it is passed via descriptors). For that you need to specify VK_IMAGE_USAGE_STORAGE_BIT in your swapchain creation structure.
Then, in your command buffer you bind the descriptor sets with the image and other data, bind the compute pipeline, and dispatch it as you probably do in Unity. The swapchain presentation and submitting the command buffers shouldn't be different from how the graphics works in the tutorials.
QUESTION
I have had a problem with the OnDrop method: it is not called. I was reading and maybe it has something to do with a Raycaster component, but I'm not sure; I don't even have knowledge of it. If someone could explain this to me, I would greatly appreciate it.
Here is my code in C#, plus an image of my hierarchy in Unity2D:
...ANSWER
Answered 2021-Apr-19 at 05:45
OnDrop method is not called
To ensure the method gets called, you need to ensure that both the name and the inherited class you use are correct.
As it seems, you need to use override for each function, and instead of IDropHandler and MonoBehaviour, inherit from EventTrigger.
Example:
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.