OpenPCDet | OpenPCDet Toolbox for LiDAR-based 3D Object Detection | Computer Vision library

by open-mmlab · Python · Version: v0.5.2 · License: Apache-2.0

kandi X-RAY | OpenPCDet Summary

OpenPCDet is a Python library typically used in Artificial Intelligence, Computer Vision, Deep Learning, and PyTorch applications. It has no reported bugs or vulnerabilities, provides a build file, carries a permissive license, and has medium support. You can download it from GitHub.

OpenPCDet Toolbox for LiDAR-based 3D Object Detection.

Support

              OpenPCDet has a medium active ecosystem.
              It has 3642 star(s) with 1115 fork(s). There are 73 watchers for this library.
              It had no major release in the last 6 months.
There are 32 open issues and 1146 have been closed. On average, issues are closed in 58 days. There are 11 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
The latest version of OpenPCDet is v0.5.2.

Quality

              OpenPCDet has 0 bugs and 0 code smells.

Security

              OpenPCDet has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              OpenPCDet code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              OpenPCDet is licensed under the Apache-2.0 License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

              OpenPCDet releases are not available. You will need to build from source code and install.
              Build file is available. You can build the component from source.
              Installation instructions, examples and code snippets are available.
              OpenPCDet saves you 4508 person hours of effort in developing the same functionality from scratch.
              It has 13909 lines of code, 787 functions and 142 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed OpenPCDet and discovered the below as its top functions. This is intended to give you an instant insight into OpenPCDet implemented functionality, and help decide if they suit your requirements.
            • Forward computation
            • Generate predicted boxes
            • Generate trajectory
            • Crop the coordinates of the current frame
            • Perform the forward computation
            • Generate a trajectory
            • Crop all previous frames from src
            • Crop the image points from the current frame
            • Evaluate the CoCoE evaluation result
            • Create Lyft dataset
            • Assigns the given anchors to the graph
            • Forward convolutional layer
            • Decode a heatmap from a heatmap
            • Evaluate the results
            • Forward convolution
            • Train a model
            • Forward the forward computation
            • Process a single sequence file
            • Compute final evaluation result
            • Evaluate one epoch
            • Generate prediction data
            • Calculates the loss layer loss
            • Get kitti image info
            • Create a GT database for a single scene
            • Assigns the bounding boxes to all the anchors
            • Concatenate a batch of voxels

            OpenPCDet Key Features

            No Key Features are available at this moment for OpenPCDet.

            OpenPCDet Examples and Code Snippets

PointPainting · How to Use · Dataset Preparation
Python · 14 lines of code · License: Permissive (MIT)
            detector
            ├── data
            │   ├── kitti
            │   │   │── ImageSets
            │   │   │── training
            │   │   │   ├── calib
            │   │   │   ├── image_2
            │   │   │   ├── image_3
            │   │   │   ├── label_2
            │   │   │   ├── velodyne
            │   │   │   ├── planes
            │   │   │   ├── painted_lidar (ke  
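The tree above is the KITTI folder layout the PointPainting snippet expects. As a quick sanity check, a minimal sketch is shown below; the helper is hypothetical (not part of OpenPCDet or PointPainting), the folder names are taken from the tree, and the truncated painted_lidar entry is omitted.

import os

# Sub-folders expected under the KITTI root, per the tree above (assumption).
EXPECTED = [
    "ImageSets",
    "training/calib",
    "training/image_2",
    "training/image_3",
    "training/label_2",
    "training/velodyne",
    "training/planes",
]

def check_kitti_layout(root="detector/data/kitti"):
    """Print any expected sub-folder that is missing under `root`."""
    missing = [d for d in EXPECTED if not os.path.isdir(os.path.join(root, d))]
    for d in missing:
        print(f"missing: {os.path.join(root, d)}")
    return not missing

if __name__ == "__main__":
    check_kitti_layout()

Running it from the repository root prints any folder that still needs to be created or symlinked before data preparation.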
1. Dataloader · 1.2 Align the coordinates of your own dataset with OpenPCDet
Python · 11 lines of code · License: Permissive (MIT)
        # My dataset coordinates are:
        # - x pointing to the right
        # - y pointing to the front
        # - z pointing up
        # OpenPCDet normative coordinates are:
        # - x pointing forward
        # - y pointing to the left
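A minimal sketch of the rotation implied by the comments above (the snippet is truncated after the y-axis line); the function name to_openpcdet_frame is illustrative and not an OpenPCDet API. It maps a point cloud from the dataset frame (x right, y front, z up) into OpenPCDet's normative frame (x forward, y left, z up).

import numpy as np

def to_openpcdet_frame(points):
    """points: (N, 3+) array with xyz in the first three columns."""
    out = points.copy()
    out[:, 0] = points[:, 1]   # new x (forward) = old y (front)
    out[:, 1] = -points[:, 0]  # new y (left)    = -old x (right)
    # z (up) is unchanged; extra columns (e.g. intensity) are kept as-is.
    return out

# Example: a point one metre to the right of the sensor
print(to_openpcdet_frame(np.array([[1.0, 0.0, 0.0]])))  # -> [[ 0. -1.  0.]]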
LPCG · Getting Started · Training & Testing
Python · 4 lines of code · No License
            cp high_acc/infer_kitti.py  high_acc/OpenPCDet/tools/
            cd high_acc/OpenPCDet/tools
            CUDA_VISIBLE_DEVICES=0 python infer_kitti.py --cfg_file cfgs/kitti_models/pv_rcnn.yaml --ckpt ../pv_rcnn_8369.pth --data_path /pvc_user/pengliang/LPCG/data/kitti/kitti_  

            Community Discussions

            QUESTION

            Vscode: can't go to definition due to large workspace (python)
            Asked 2021-May-14 at 02:26

As my project grows bigger, I can't use Ctrl + left click (or F12) to go to definition in VS Code.

I have tested with a new workspace and a single Python file, and the go to definition feature works well (i.e., Python and Pylance are functional for a small project).

            ...

            ANSWER

            Answered 2021-May-14 at 02:26

I suffered from the same problem on a server, which has much larger capacity than my local machine. The search for go to definition took about 3 seconds before it found its destination.

It occurred after I generated a lot of images inside the workspace (even though I added the new files/folders to the .gitignore list).

I would avoid adding masses of new files to the workspace to keep the go to definition feature responsive.

            Source https://stackoverflow.com/questions/67415909

            QUESTION

            Align feature map with ego motion (problem of zooming ratio )
            Asked 2021-Apr-12 at 12:17

I want to align the feature map using ego motion, as mentioned in the paper An LSTM Approach to Temporal 3D Object Detection in LiDAR Point Clouds.

I use VoxelNet as the backbone, which shrinks the image by a factor of 8. The size of my voxel is 0.1 m x 0.1 m x 0.2 m (height).

So given an input bird's-eye-view image of size 1408 x 1024,

the extracted feature map size would be 176 x 128, shrunk by a factor of 8.

The ego translation of the car between the "images" (actually point clouds) is 1 meter in both the x and y directions. Am I right to adjust the feature map by 1.25 pixels?

            ...

            ANSWER

            Answered 2021-Apr-12 at 12:17

            It's caused by the function torch.nn.functional.affine_grid I used.

I didn't fully understand this function before I used it.

These vivid images are very helpful for showing what this function actually does (in comparison with affine transformations in NumPy).

            Source https://stackoverflow.com/questions/66983586
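For context, here is a hedged sketch (not taken from the original answer) of the normalized-coordinate convention behind torch.nn.functional.affine_grid, which is usually the source of the "zooming ratio" confusion. The sizes follow the question: 0.1 m voxels and 8x downsampling, so a 1 m ego shift is 1 / (0.1 * 8) = 1.25 feature-map pixels; all variable names are illustrative.

import torch
import torch.nn.functional as F

shift_px_x, shift_px_y = 1.25, 1.25
N, C, H, W = 1, 64, 128, 176          # feature map from a 1024 x 1408 BEV grid
feat = torch.randn(N, C, H, W)

# affine_grid works in normalized coordinates in [-1, 1], so a shift of one
# pixel corresponds to 2 / W (or 2 / H), not 1.
tx = 2.0 * shift_px_x / W
ty = 2.0 * shift_px_y / H

# theta describes the sampling (output -> input) transform; flip the sign of
# tx/ty to move the content in the opposite direction.
theta = torch.tensor([[[1.0, 0.0, tx],
                       [0.0, 1.0, ty]]])  # shape (N, 2, 3)

grid = F.affine_grid(theta, size=(N, C, H, W), align_corners=False)
aligned = F.grid_sample(feat, grid, align_corners=False)
print(aligned.shape)  # torch.Size([1, 64, 128, 176])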

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install OpenPCDet

            Please refer to INSTALL.md for the installation of OpenPCDet.
            Please refer to GETTING_STARTED.md to learn more usage about this project.

            Support

[x] Support both one-stage and two-stage 3D object detection frameworks
[x] Support distributed training & testing with multiple GPUs and multiple machines
[x] Support multiple heads on different scales to detect different classes
[x] Support stacked version set abstraction to encode various number of points in different scenes
[x] Support Adaptive Training Sample Selection (ATSS) for target assignment
[x] Support RoI-aware point cloud pooling & RoI-grid point cloud pooling
[x] Support GPU version 3D IoU calculation and rotated NMS
            CLONE
          • HTTPS

            https://github.com/open-mmlab/OpenPCDet.git

          • CLI

            gh repo clone open-mmlab/OpenPCDet

          • sshUrl

            git@github.com:open-mmlab/OpenPCDet.git
