ElasticFusion | Real-time dense visual SLAM system | Robotics library
kandi X-RAY | ElasticFusion Summary
Real-time dense visual SLAM system capable of capturing comprehensive, dense, globally consistent surfel-based maps of room-scale environments explored using an RGB-D camera.
Community Discussions
Trending Discussions on ElasticFusion
QUESTION
I'm trying to simulate a lens distortion effect for my SLAM project. A scanned color 3D point cloud is already given and loaded in OpenGL. What I'm trying to do is render a 2D scene at a given pose and do some visual odometry between the real image from a fisheye camera and the rendered image. Since the camera has severe lens distortion, the distortion has to be taken into account in the rendering stage too.
The problem is that I have no idea where to put the lens distortion. Shaders?
I've found some open code that puts the distortion in the geometry shader. But I suspect that code's distortion model differs from the lens distortion model used in the computer vision community, where lens distortion is usually applied on the projected image plane.
This one is quite similar to my work, but they didn't use a distortion model.
Anyone have a good idea?
I just found another implementation. Their code implements the distortion in both the fragment shader and the geometry shader, and the fragment shader version can be applied in my situation. Thus, I guess the following will work:
...ANSWER
Answered 2017-Jun-12 at 06:48
Lens distortion usually turns straight lines into curves. When rasterizing lines and triangles with OpenGL, however, the primitives' edges stay straight no matter how you transform the vertices.
If your models have fine enough tessellation, then incorporating the distortion into the vertex transformation is viable. It also works if you're rendering only points.
However, when your aim is general applicability, you have to deal with the straight-edged primitives somehow. One way is to use a geometry shader to further subdivide incoming models; or you can use a tessellation shader.
Another method is to render into a cubemap and then use a shader to create a lens-equivalent image from it. I'd actually recommend that approach for generating fisheye images.
The distortion itself is usually represented by a polynomial of order 3 to 5, mapping the undistorted angular distance from the optical center axis to the distorted angular distance.
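As an illustration of that polynomial angular model, here is a minimal CPU-side sketch in C++. The focal length and coefficients below are hypothetical placeholders; real values come from a fisheye calibration of your camera:

```cpp
#include <cmath>

// Sketch of an equidistant-style fisheye model: maps the undistorted
// angle theta between a viewing ray and the optical axis to a distorted
// image radius via an odd polynomial. All constants are assumed values.
struct FisheyeModel {
    double f  = 300.0;   // focal length in pixels (assumed)
    double k1 = -0.05;   // 3rd-order distortion coefficient (assumed)
    double k2 = 0.003;   // 5th-order distortion coefficient (assumed)

    // Distorted radial distance (pixels) for incidence angle theta (radians).
    double distortedRadius(double theta) const {
        const double t2 = theta * theta;
        return f * theta * (1.0 + k1 * t2 + k2 * t2 * t2);
    }

    // Project a camera-space point (z > 0) to distorted pixel offsets
    // (u, v) from the principal point.
    void project(double x, double y, double z, double& u, double& v) const {
        const double r = std::sqrt(x * x + y * y);
        const double theta = std::atan2(r, z);     // angle to optical axis
        const double rd = distortedRadius(theta);  // distorted radius
        const double scale = (r > 1e-12) ? rd / r : 0.0;
        u = scale * x;
        v = scale * y;
    }
};
```

Evaluating this per vertex matches the "vertex transformation" approach above, which is why it only looks right on finely tessellated geometry or point clouds.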
QUESTION
I downloaded the Freiburg desk dataset from the TUM RGB-D SLAM Dataset and Benchmark and converted it to '.klg', the custom log format used by the SLAM algorithm. I loaded this .klg file into ElasticFusion and ran the SLAM algorithm. The 3D reconstruction output looked good enough while it was running.
Now I want to build the 3D reconstruction from the already-computed trajectory. I retrieved the trajectory data of the previous run from the '.freiburg' file and converted it to the format ElasticFusion expects: I changed the timestamps from seconds to microseconds by multiplying by 1,000,000, and separated the fields with "," instead of " " space. This time I ran the algorithm with the "-p" flag and the path to the trajectory file. Below is my running command.
...ANSWER
Answered 2017-Aug-03 at 14:04
I took 3 scans: left-to-right, down-to-up, and back-to-front. I observed that although the trajectory file seemed correct, the reconstruction was going wrong: when I moved the camera along the x axis, in ElasticFusion it moved along the z axis, and similarly for the other axes. I worked out the transformation matrix manually and applied it to the translation and rotation. It started to work afterwards.
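For reference, a minimal sketch of such an axis remapping, assuming Eigen and 4x4 rigid-body poses. The x/z swap below is only an example permutation; the actual mapping has to be determined for your own capture setup:

```cpp
#include <Eigen/Dense>

// Re-express a trajectory pose in the target camera frame by a fixed
// change of basis. The swap below (new x = old z, new z = old x) is a
// hypothetical example, not the transform from the answer above.
Eigen::Matrix4f remapPose(const Eigen::Matrix4f& pose)
{
    Eigen::Matrix4f B = Eigen::Matrix4f::Identity();
    B(0, 0) = 0.0f; B(0, 2) = 1.0f;   // new x = old z
    B(2, 2) = 0.0f; B(2, 0) = 1.0f;   // new z = old x

    // Conjugating the pose remaps both its rotation and its translation.
    return B * pose * B.inverse();
}
```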
QUESTION
I am trying to run a SLAM algorithm (ElasticFusion) using my custom .klg file. I tried the following 2 ways:
The first way was to build the .klg file manually from separate depth and RGB image (.png) files and their timestamp information. I tried the conversion script on the 'freiburg1_desk' sequence of this dataset and then ran ElasticFusion, and I got a good result and point cloud. But when I recorded an environment on my own device following the same steps, I did not get the desired result or point cloud. The result I get with live logging is much better. I guess it is because of the code I am using for depth image visualization.
...ANSWER
Answered 2017-Jun-20 at 14:06
Solved. The second way worked after re-installing OpenNI. Probably in the previous runs the Logger was somehow unable to find OpenNI for streaming the depth and RGB frames.
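For the first approach (building the .klg manually), a minimal sketch of writing uncompressed frames is shown below. It assumes the Logger2-style .klg layout (an int32 frame count header, then per frame an int64 timestamp, an int32 depth byte count, an int32 RGB byte count, followed by the two raw buffers); verify this against the Logger2/RawLogReader sources before relying on it:

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Write an uncompressed .klg log under the assumed Logger2 layout.
// depthFrames holds raw 16-bit depth buffers, rgbFrames raw 8-bit RGB
// buffers; timestamps are in microseconds.
void writeKlg(const char* path,
              const std::vector<int64_t>& timestamps,
              const std::vector<std::vector<uint8_t>>& depthFrames,
              const std::vector<std::vector<uint8_t>>& rgbFrames)
{
    FILE* fp = std::fopen(path, "wb");
    if (!fp) return;

    // Header: total number of frames.
    const int32_t numFrames = static_cast<int32_t>(timestamps.size());
    std::fwrite(&numFrames, sizeof(numFrames), 1, fp);

    for (int32_t i = 0; i < numFrames; ++i)
    {
        const int64_t ts = timestamps[i];
        const int32_t depthSize = static_cast<int32_t>(depthFrames[i].size());
        const int32_t rgbSize   = static_cast<int32_t>(rgbFrames[i].size());

        // Per-frame header, then the raw payloads.
        std::fwrite(&ts,        sizeof(ts),        1, fp);
        std::fwrite(&depthSize, sizeof(depthSize), 1, fp);
        std::fwrite(&rgbSize,   sizeof(rgbSize),   1, fp);
        std::fwrite(depthFrames[i].data(), 1, depthSize, fp);
        std::fwrite(rgbFrames[i].data(),   1, rgbSize,   fp);
    }

    std::fclose(fp);
}
```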
Community Discussions and Code Snippets contain sources from the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install ElasticFusion
If you use Prime, follow the instructions here
If you use Bumblebee, remember to run as optirun ./ElasticFusion