voxelization | 3D Animation library
kandi X-RAY | voxelization Summary
This project's aim is to voxelize *.ply 3D models. It is not limited to .ply files; any file format supported by Assimp can be voxelized.
Top functions reviewed by kandi - BETA
- Produce files
- Create a voxelization from a file
- Calculate the voxel value of a mesh
- Save a voxel
- Get the bounding box of a scene
- Read data as a coordinate array
- Read the dimensions of a binvox header
- Read an array of values from a file-like object
- Divide a list into sublists
voxelization Examples and Code Snippets
>>> import voxelization
>>> voxelization.voxelization("134212_1.ply")
('Bounding box: ', 350.86337, 268.0675, 62.311089, 140.45639, 59.910782, -137.18449)
('x_edge: ', 1.0958697001139324, '\ny_edge: ', 1.0841495990753174, '\nz_edge:
Community Discussions
Trending Discussions on voxelization
QUESTION
I am working on a project where I have to implement voxel cone tracing for indirect light in C++/OpenGL. I already have a deferred renderer setup but most of the VCT examples I could find usually draw the scene once for voxelization and once with cone tracing shaders. Is it possible to run cone tracing shaders over a fullscreen quad and sample vertex data from the GBuffer or is that generally a stupid idea? Do I lose accuracy because I only have per pixel vertex data?
ANSWER
Answered 2020-Jun-01 at 21:28

Is it possible to run cone tracing shaders over a fullscreen quad and sample vertex data from the GBuffer or is that generally a stupid idea?
Yes, however that's not voxel cone tracing anymore. That's Screen-Space Global Illumination (SSGI) instead. You can think of the voxelized scene in VCT as a 3D GBuffer, which makes all the difference between "screen space" and "full scene".
Do I lose accuracy because I only have per pixel vertex data?
Absolutely. All screen space approximations suffer from the same set of artifacts. They do not account for surfaces that aren't directly visible on the screen (either out of frame or occluded by visible geometry). Most noticeably, when the camera moves and objects enter or exit the frame, the reflections on visible surfaces would also change unrealistically.
QUESTION
I am trying to write out the voxelization of a model to a Wavefront Object File.
My method is simple, and runs in reasonable time. The problem is - it produces OBJ files that are ludicrous in size. I tried to load a 1 GB file into 3D Viewer on a very respectable machine with an SSD but in some cases the delay was several seconds when trying to move the camera, in others it refused to do anything at all and effectively softlocked.
What I've done so far:
- I do not write out any faces which are internal to the model - that is, faces between two voxels which are both going to be written to the file. There's no point, as no-one can see them.
- Because OBJ does not have a widely-supported binary format (as far as I know), I found that I could save some space by trimming trailing zeros from vertex positions in the file.
The obvious space-save I don't know how to do:
- Not writing out duplicate vertices. In total, there are around 8x more vertices in the file than there should be. However, fixing this is extremely tricky, because Wavefront Object files do not use per-object vertex indices but a single global vertex list. By writing out all 8 vertices each time, I always know which 8 vertices make up the next voxel. If I do not write out all 8, how do I keep track of where in the global list I can find those 8 (if at all)?
The harder, but potentially useful large space-save:
- If I could work more abstractly, there may be a way to combine voxels into fewer objects, or to combine faces that lie along the same plane into larger faces. I.e., if two voxels both have their front face active, turn that into one larger rectangle twice as big.
Because it's required, here's some code that roughly shows what is happening. This isn't the code that's actually in use. I can't post that, and it relies on many user-defined types and has lots of code to handle edge cases or extra functionality, so it would be messy and lengthy to put up here anyway.
The only thing that's important to the question is my method - going voxel-by-voxel, writing out all 8 vertices, and then writing out whichever of the 6 sides is not neighboring an active voxel. You'll just kind of have to trust me that it works, though it does produce large files.
My question is what method or approach I can use to reduce the size further. How can I, for example, not write out any duplicate vertices?
Assumptions:
- Point is just an array of size 3, with getters like .x()
- Vector3D is a 3D wrapper around std::vector, with a .at(x,y,z) method
- Which voxels are active is arbitrary and does not follow a pattern, but is known before writeObj is called. Fetching if a voxel is active at any position is possible and fast.
ANSWER
Answered 2020-Mar-26 at 11:11

If you define a custom comparator like this:
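The answer's code is not reproduced on this page (the original is presumably C++, given the question, and revolves around a custom comparator). As a rough illustration only, here is a minimal Python sketch of the same deduplication idea, not the answerer's actual code: keep a dictionary from each corner's coordinates to its global, 1-based OBJ index and emit a vertex only the first time it is seen. The function name, the active_voxels set, and the single emitted face are assumptions made for the sketch.

# Minimal sketch (not the answer's code): deduplicate voxel corner vertices
# with a dict so each unique corner is written and numbered exactly once.
# OBJ vertex indices are global and 1-based; faces reference those indices.
# 'active_voxels' is assumed to be a set of integer (x, y, z) grid positions.
def write_voxel_obj(path, active_voxels, voxel_size=1.0):
    vertex_index = {}   # corner (x, y, z) -> 1-based OBJ vertex index
    vertices = []       # unique corners in the order they were first seen
    faces = []          # each face is a tuple of 4 vertex indices

    def index_of(corner):
        if corner not in vertex_index:
            vertices.append(corner)
            vertex_index[corner] = len(vertices)
        return vertex_index[corner]

    for (x, y, z) in active_voxels:
        # Only the -z ("front") face is emitted here; the other five faces
        # follow the same pattern with their own corner quads.
        if (x, y, z - 1) not in active_voxels:
            quad = ((x, y, z), (x, y + 1, z), (x + 1, y + 1, z), (x + 1, y, z))
            faces.append(tuple(index_of(c) for c in quad))

    with open(path, "w") as f:
        for (cx, cy, cz) in vertices:
            f.write("v {} {} {}\n".format(cx * voxel_size, cy * voxel_size, cz * voxel_size))
        for face in faces:
            f.write("f {} {} {} {}\n".format(*face))

Because the dictionary already records where each corner sits in the global vertex list, the bookkeeping problem from the question (knowing which global indices make up the next voxel) disappears.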
QUESTION
I'm considering an interesting problem in which it may be possible to increase performance beyond that of a typical program by allowing kernels to write their outputs to memory without performing any synchronization.
I'm computing the voxelization from a mesh, and it is not required for the voxels on the inside of the mesh to be filled. This makes the problem simpler.
I am hoping to apply the very simple algorithm where the kernel simply computes the voxels that intersect a triangle, and dispatch the kernel on each triangle of the mesh.
My current idea is to simply have the kernel write a value to the voxels that it computes as intersecting the triangle, without applying any synchronization. The count of triangles touching a particular voxel does not matter to me; I care only about guaranteeing that every voxel touching any triangle is identified.
As such the question is can I expect this simple approach to "just work" or does there exist a possible race condition in which a voxel already marked as occupied may end up getting cleared out?
If the problem is possible, then would making the store atomic (and incurring a performance hit) resolve the issue?
ANSWER
Answered 2020-Mar-21 at 21:52

From the OpenCL 1.2 specification:
3.3.1 Memory Consistency
OpenCL uses a relaxed consistency memory model; i.e. the state of memory visible to a work-item is not guaranteed to be consistent across the collection of work-items at all times. Within a work-item memory has load / store consistency. Local memory is consistent across work-items in a single work-group at a work-group barrier. Global memory is consistent across work-items in a single work-group at a work-group barrier, but there are no guarantees of memory consistency between different work-groups executing a kernel. Memory consistency for memory objects shared between enqueued commands is enforced at a synchronization point.
So if each work-item processes a single triangle and updates the voxel-value of any voxel intersecting that triangle, there is no guarantee for the order of loads/stores from different work-items (possibly in different work-groups) on the voxel-values.
As such the question is can I expect this simple approach to "just work" or does there exist a possible race condition in which a voxel already marked as occupied may end up getting cleared out?
Let's assume your voxel-values are 0-initialised, and an intersection is indicated by writing some non-zero value, and otherwise no write happens (i.e. never store a 0-value). Then all the writes on the voxel-value would change it to a non-zero value, and the order does not matter, as you are only interested in observing a non-zero value (not trying to count intersections or something).
TL;DR: It should work.
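For illustration only, here is a small pyopencl sketch of that pattern. It is not from the answer; the kernel name, the buffer layout, and the precomputed cell index per triangle are assumptions made to keep the example short (a real voxelizer kernel would compute the intersecting voxels itself). Every work-item stores the same non-zero value, so overlapping, unsynchronised writes still leave every touched voxel marked.

import numpy as np
import pyopencl as cl

# Each work-item marks one voxel cell; many items may hit the same cell,
# but they all store the same value (1), so no atomics are needed.
KERNEL_SRC = """
__kernel void mark(__global const int *cell_of_triangle, __global uchar *voxels)
{
    int t = get_global_id(0);
    voxels[cell_of_triangle[t]] = 1;   // unsynchronised, idempotent store
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
program = cl.Program(ctx, KERNEL_SRC).build()

# Toy data: 1000 "triangles", each already mapped to one voxel cell.
cell_of_triangle = np.random.randint(0, 64, size=1000).astype(np.int32)
voxels = np.zeros(64, dtype=np.uint8)

mf = cl.mem_flags
d_cells = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=cell_of_triangle)
d_voxels = cl.Buffer(ctx, mf.READ_WRITE | mf.COPY_HOST_PTR, hostbuf=voxels)

program.mark(queue, (cell_of_triangle.size,), None, d_cells, d_voxels)
cl.enqueue_copy(queue, voxels, d_voxels)   # blocking copy back to the host
# Every cell touched by at least one "triangle" now holds 1.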
QUESTION
I need to visualize a voxelization using gnuplot.
Each voxel is of the form:
x y z [active]
where [active] can be either a 1 or a 0.
The x,y,z is the relative position in a grid, so they're just integers.
An example might be:
ANSWER
Answered 2019-Nov-15 at 20:26

3D points colored by value in column 4 using a smooth color palette:
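The answer's gnuplot command is not reproduced on this page. As a stand-in only (a different tool from the one the answerer used), the same visualisation can be approximated in Python with matplotlib: scatter the x, y, z columns and colour the markers by the fourth column. The file name voxels.txt is a placeholder.

from mpl_toolkits.mplot3d import Axes3D  # noqa: F401  (registers the 3d projection)
import matplotlib.pyplot as plt
import numpy as np

data = np.loadtxt("voxels.txt")                      # columns: x y z active
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
sc = ax.scatter(data[:, 0], data[:, 1], data[:, 2],
                c=data[:, 3], cmap="viridis")        # colour by column 4
fig.colorbar(sc, label="active")
plt.show()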
QUESTION
I have coded a voxelization based raytracer which is working as expected but is very slow.
Currently the raytracer code is as follows:
ANSWER
Answered 2018-Jun-12 at 18:09

It seems you test most of the voxels in each level of the octree for intersection with the ray, and also sort them (by some distance) at each level. I propose another approach.
If the ray intersects the bounding box (level 0 of the octree), it does so at two faces of the box, or at a corner or an edge; those are the "corner" cases.
Finding the 3D ray-plane intersection can be done like here. Finding if the intersection is inside the face (quad) can be done by testing if the point is inside of one of the two triangles of the face, like here.
Get the farthest intersection I0 from the camera. Also let r be a unit vector of the ray in the direction from I0 toward the camera.

Find the deepest voxel for the I0 coordinates. This is the farthest voxel from the camera.

Now we want the exit coordinates I0e for the ray in that voxel, through another face. While you could again do the calculations for all 6 faces, if your voxels are X,Y,Z aligned and you define the ray in the same coordinate system as the octree, then the calculations simplify a lot.

Apply a little displacement (e.g. 1/1000 of the smallest voxel size) to I0e along the ray's unit vector r: I1 = I0e + r/1000. Find the voxel for these I1 coordinates. This is the next voxel in the sorted list of voxel-ray intersections.

Repeat finding I1e, then I2, then I2e, then I3, etc. until the bounding box is exited. The list of crossed voxels is sorted.
Working with the octree can be optimized depending on how you store its info: all possible nodes or just the used ones, nodes with the data or just "pointers" to another container holding the data. But that is a matter for another question.
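A minimal Python sketch of the stepping scheme described above, simplified to a uniform axis-aligned grid rather than an octree; the function name and parameters are assumptions, and the nudged point in the loop plays the role of the answer's I1 = I0e + r/1000.

def crossed_voxels(entry_point, direction, grid_min, voxel_size, grid_dims, eps=1e-3):
    # Walk a ray through an axis-aligned uniform grid: find the exit face of
    # the current voxel, nudge slightly past it, and look up the next voxel.
    voxels = []
    point = entry_point
    while True:
        idx = tuple(int((point[a] - grid_min[a]) // voxel_size) for a in range(3))
        if any(i < 0 or i >= grid_dims[a] for a, i in enumerate(idx)):
            break                              # the ray has left the bounding box
        voxels.append(idx)
        # Distance along the ray to each candidate exit face of this voxel.
        ts = []
        for a in range(3):
            if direction[a] > 0:
                face = grid_min[a] + (idx[a] + 1) * voxel_size
            elif direction[a] < 0:
                face = grid_min[a] + idx[a] * voxel_size
            else:
                continue                       # ray parallel to this axis
            ts.append((face - point[a]) / direction[a])
        if not ts:
            break                              # degenerate zero-length direction
        t_exit = min(ts)
        # Displace a little past the exit face (the answer suggests ~1/1000 of
        # the voxel size) so the next lookup lands in the neighbouring voxel.
        point = tuple(point[a] + direction[a] * (t_exit + eps * voxel_size)
                      for a in range(3))
    return voxels   # voxels in ray order, i.e. the sorted list of crossings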
QUESTION
NOTE: THIS QUESTION HAS BEEN DRASTICALLY EDITED FROM ITS ORIGINAL FORM
I am attempting to create a logarithmic raytracer by implementing an octree data structure combined with voxelization to achieve fast ray tracing.
Currently I am having issues with the ray collision detection.
The expected output should be the voxelized Stanford dragon with its normal map. Currently the issue is that some regions are transparent. The original question includes two screenshots: the full dragon, and the transparent regions.
From these images it should be clear that the geometry is correct, but the collision checks are wrong.
There are 2 fragment shaders involved in this process.

The voxelizer fragment shader: ...

ANSWER
Answered 2018-Jun-09 at 17:33

The error comes from the sorting function, as someone in the comments mentioned, although not for the same reasons.
What has happened is that I thought the sort function would modify the arrays passed to it, but it seems to be copying the data, so it does not return anything.
In other words:
QUESTION
I have a large set of data points in 3 column vectors. There are 10 million points with x,y,z coordinates.
I am voxelizing these points (assigning them to a discrete grid based upon occupancy). There are two ways to accomplish voxelization. The first way being a simple binning procedure where if the point falls within a certain bin that bin's intensity increases by 1. The other way is to assign a point to multiple bins and increase intensity based on distance from the bin centers. I wish to accomplish the second method of voxelization.
A simple 2D example: say you have the point x,y = 1.7,2.2 and an evenly spaced grid with distance .5 between nodes in x and y.
Using method 1: The point would get binned to x,y=1.5,2 with intensity=1
Using method 2: The point would get distributed to (x,y), (x-.5,y), (x+.5,y), (x,y-.5), (x,y+.5) with intensities = (distToPoint1/sumDistances), (distToPoint2/sumDistances), ..., (distToPoint5/sumDistances)
ANSWER
Answered 2017-Dec-30 at 14:44

It is possible to remove the for loop and use numpy operations to take care of it. The same code as yours, without the for loop and indexing, is ~60x faster:
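The answer's code is not included on this page. Below is a hedged numpy sketch of the same idea, not the answerer's actual code: compute all target nodes and weights with array operations and accumulate them in a single call to np.add.at, which, unlike fancy-indexing +=, handles repeated indices correctly. It assumes the grid origin is at 0 and that every target node falls inside the grid.

import numpy as np

def voxelize_weighted(points, spacing, grid_shape):
    # points: (N, 3) float array of x, y, z coordinates.
    grid = np.zeros(grid_shape)
    nearest = np.floor(points / spacing + 0.5).astype(int)      # nearest grid node
    # The nearest node plus its six axis neighbours (the question's 2D example
    # uses the nearest node plus four neighbours).
    offsets = np.array([[0, 0, 0], [1, 0, 0], [-1, 0, 0],
                        [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])
    targets = nearest[:, None, :] + offsets[None, :, :]          # shape (N, 7, 3)
    dists = np.linalg.norm(targets * spacing - points[:, None, :], axis=2)
    weights = dists / dists.sum(axis=1, keepdims=True)           # distToPoint / sumDistances
    np.add.at(grid,
              (targets[..., 0].ravel(),
               targets[..., 1].ravel(),
               targets[..., 2].ravel()),
              weights.ravel())
    return grid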
QUESTION
I'm currently looking into an algorithm described in this research paper; however, I've come across a portion where I'm unclear how it's been achieved.
A grid is defined by placing a camera above the scene and adjusting its view frustum to enclose the area to be voxelized. This camera has an associated viewport with (w, h) dimensions. The scene is then rendered, constructing the voxelization in the frame buffer. A pixel (i,j) represents a column in the grid and each voxel within this column is binary encoded using the k-th bit of the RGBA value of the pixel. Therefore, the corresponding image represents a w×h×32 grid with one bit of information per voxel. This bit indicates whether a primitive passes through a cell or not. The union of voxels corresponding to the k-th bit for all pixels defines a slice. Consequently, the image/texture encoding the grid is called a slicemap. When a primitive is rasterized, a set of fragments are obtained. A fragment shader is used in order to determine the position of the fragment in the column based on its depth. The result is then OR-ed with the current value of the frame buffer.
Presumably one would achieve this by setting the blend equation to use a binary OR, however that's not an available option and I can't see a way to achieve it through manipulation of glBlendFunc() + glBlendEquation().
Additionally from my understanding it's not possible to read the framebuffer within the fragment shader. You can bind a texture to both the shader and framebuffer, however accessing this within the shader is undefined behaviour due to a lack of synchronisation.
The paper doesn't state whether OpenGL or DirectX was used; however, to the best of my understanding it has the same glBlendEquation() limitations.
Am I missing something?
I realise I could simply achieve the same result in 32 passes.
ANSWER
Answered 2017-May-24 at 11:25

OpenGL has a separate glLogicOp() for performing logical operations on the frame buffer. This can be configured and enabled using
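The rest of the answer's snippet is cut off on this page. For reference, here is a hedged sketch (shown through the PyOpenGL binding, which mirrors the C API; the question itself is plain OpenGL): framebuffer logic ops are enabled with glEnable(GL_COLOR_LOGIC_OP) and set to OR with glLogicOp(GL_OR). Note that logic ops apply to integer and unsigned-normalised colour buffers, not floating-point ones, which suits the 32-bit RGBA slicemap described in the paper.

from OpenGL.GL import glEnable, glLogicOp, GL_COLOR_LOGIC_OP, GL_OR

# Enable colour logic ops and select OR: each rasterised fragment's output
# is OR-ed with the value already stored in the framebuffer (the slicemap).
glEnable(GL_COLOR_LOGIC_OP)
glLogicOp(GL_OR)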
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install voxelization
You can use voxelization like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.