deeppose | DeepPose implementation in Chainer | Machine Learning library

by mitmul | Python | Version: v0.0.1 | License: GPL-2.0

kandi X-RAY | deeppose Summary

deeppose is a Python library typically used in Artificial Intelligence, Machine Learning, and Deep Learning applications. deeppose has no bugs and no vulnerabilities, it has a Strong Copyleft license, and it has low support. However, no build file is available for deeppose. You can download it from GitHub.

NOTE: This is not an official implementation. The original paper is DeepPose: Human Pose Estimation via Deep Neural Networks.

Support

deeppose has a low-activity ecosystem.
              It has 383 star(s) with 125 fork(s). There are 32 watchers for this library.
              It had no major release in the last 12 months.
There are 27 open issues and 12 closed issues. On average, issues are closed in 177 days. There are 2 open pull requests and 0 closed pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of deeppose is v0.0.1

Quality

              deeppose has 0 bugs and 0 code smells.

Security

              deeppose has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              deeppose code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

License

              deeppose is licensed under the GPL-2.0 License. This license is Strong Copyleft.
              Strong Copyleft licenses enforce sharing, and you can use them when creating open source projects.

Reuse

              deeppose releases are available to install and integrate.
deeppose has no build file. You will need to create the build yourself to build the component from source.
              Installation instructions are not available. Examples and code snippets are available.
              deeppose saves you 602 person hours of effort in developing the same functionality from scratch.
              It has 1403 lines of code, 72 functions and 14 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

kandi has reviewed deeppose and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality deeppose implements, and to help you decide whether it suits your requirements.
            • Load images
            • Calculate the center of a mesh
• Return a NumPy array of joints
            • Calculate bounding box size
            • Runs test_joints
            • Load data
            • Load the model
            • Apply transform
            • Crops the input image
            • Resize image
            • Apply contrast normalization
            • Wraps image cropping
            • Tiled image
            • Create a Tiled image
• Split train and test sets
            • Write data to file
            • Save crop images and joint examples
            • Return a list of joint positions
            • Load a model from path
            • Create a logger
            • Get an optimizer
            • Draw the loss curve
            • Create a temporary result directory
            • Save joint data
            • Parse command line arguments

            deeppose Key Features

            No Key Features are available at this moment for deeppose.

            deeppose Examples and Code Snippets

            No Code Snippets are available at this moment for deeppose.

            Community Discussions

            QUESTION

            Facing "No gradients for any variable" Error while training a SIAMESE NETWORK
            Asked 2018-Nov-04 at 07:16

I'm currently building a model on TensorFlow (ver: 1.8, OS: Ubuntu MATE 16.04). The model's purpose is to detect/match keypoints of the human body. While training, the error "No gradients for any variable" occurred, and I'm having difficulty fixing it.

            Background of the model: Its basic ideas came from these two papers:

            1. Deep Learning of Binary Hash Codes for fast Image Retrieval
            2. Learning Compact Binary Descriptors with Unsupervised Deep Neural Networks

            They showed it's possible to match images according to Hash codes generated from a convolutional network. The similarity of two pictures is determined by the Hamming distance between their corresponding hash codes.
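As a rough illustration of that matching rule (a minimal NumPy sketch, not code from either paper), the Hamming distance between two binary hash codes can be computed like this:

    import numpy as np

    def hamming_distance(code_a, code_b):
        """Count the positions where two binary hash codes differ."""
        code_a = np.asarray(code_a, dtype=np.uint8)
        code_b = np.asarray(code_b, dtype=np.uint8)
        return int(np.count_nonzero(code_a != code_b))

    # Two hypothetical 32-bit codes produced by a hashing network.
    a = np.random.randint(0, 2, size=32)
    b = np.random.randint(0, 2, size=32)
    print(hamming_distance(a, b))  # smaller distance -> more similar images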

I think it's possible to develop an extremely lightweight model to perform real-time human pose estimation on a video with a "constant human subject" and a "fixed background".

            Model Structure

            01.Data source:

3 images from one video with the same human subject and a similar background. All human keypoints in each image are well labeled. 2 of the images will be used as the "hint sources" and the last image will be the target for keypoint detection/matching.

            02.Hints:

23x23-pixel ROIs will be cropped from the "hint source" images according to the locations of the human keypoints. The centers of these ROIs are the keypoints.
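Cropping such an ROI is a plain array slice; the sketch below (an illustration, not the poster's code) assumes an H x W x 3 image array and an (x, y) keypoint, and skips keypoints that fall too close to the border:

    import numpy as np

    def crop_hint_roi(image, keypoint, size=23):
        """Crop a size x size ROI centered on a keypoint given as (x, y)."""
        half = size // 2
        x, y = int(round(keypoint[0])), int(round(keypoint[1]))
        h, w = image.shape[:2]
        if not (half <= x < w - half and half <= y < h - half):
            return None  # keypoint too close to the image border
        return image[y - half:y + half + 1, x - half:x + half + 1]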

03.Convolutional network "for Hints":

A simple 3-layer structure. The first two layers are 3x3 convolutions with a [2,2] stride. The last layer is a 5x5 convolution on a 5x5 input with no padding (equivalent to a fully connected layer).

This will turn a 23x23-pixel hint ROI into one 32-bit hash code. One hint source image will generate a set of 16 hash codes.

04.Convolutional network "for target image": The network shares the same weights with the hint network, but in this case each convolution layer has padding. The 301x301-pixel image will be turned into a 76x76 "hash map".
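The two branches can be written as one set of shared kernels applied with different padding. The TF 1.x sketch below illustrates the described architecture, not the poster's actual code; the intermediate channel counts (16 and 32) and the ReLU activations are assumptions, while the kernel sizes, strides, and final 32-channel code come from the description above:

    import tensorflow as tf

    # Shared kernels: both branches convolve with the same variables.
    k1 = tf.Variable(tf.random_normal([3, 3, 3, 16], stddev=0.1), name='k1')
    k2 = tf.Variable(tf.random_normal([3, 3, 16, 32], stddev=0.1), name='k2')
    k3 = tf.Variable(tf.random_normal([5, 5, 32, 32], stddev=0.1), name='k3')

    def hint_branch(roi):
        """(batch, 23, 23, 3) -> (batch, 1, 1, 32) hash logits, no padding."""
        x = tf.nn.relu(tf.nn.conv2d(roi, k1, [1, 2, 2, 1], 'VALID'))   # -> 11x11
        x = tf.nn.relu(tf.nn.conv2d(x, k2, [1, 2, 2, 1], 'VALID'))     # -> 5x5
        return tf.nn.conv2d(x, k3, [1, 1, 1, 1], 'VALID')              # -> 1x1x32

    def target_branch(image):
        """(batch, 301, 301, 3) -> (batch, 76, 76, 32) hash map, with padding."""
        x = tf.nn.relu(tf.nn.conv2d(image, k1, [1, 2, 2, 1], 'SAME'))  # -> 151x151
        x = tf.nn.relu(tf.nn.conv2d(x, k2, [1, 2, 2, 1], 'SAME'))      # -> 76x76
        return tf.nn.conv2d(x, k3, [1, 1, 1, 1], 'SAME')               # -> 76x76x32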

            05.Hash matching:

I made a function called "locateMin_and_get_loss" to calculate the Hamming distance between a "hint hash" and the hash codes at each point of the hash map. This function creates a "distance map". The location of the point with the lowest distance value will be treated as the location of the keypoint.
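The poster's locateMin_and_get_loss function isn't shown; as an illustration of the matching step only, building the distance map and picking its minimum could look like this in NumPy:

    import numpy as np

    def locate_keypoint(hash_map, hint_code):
        """hash_map: (76, 76, 32) binary map; hint_code: (32,) binary code.

        Returns the (row, col) of the cell whose code has the smallest
        Hamming distance to the hint code, plus the full distance map."""
        distance_map = np.count_nonzero(hash_map != hint_code, axis=-1)
        row, col = np.unravel_index(np.argmin(distance_map), distance_map.shape)
        return (row, col), distance_map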

            06.Loss calculation:

            I made a function "get_total_loss_and_result" to calculate the total loss of 16 keypoints. The loss are normalized euclidean distance between ground truth label points and the points located by the model.

07.Proposed workflow:

Before initializing this model, the user will take two pictures of the target human subject from different angles. The pictures will be labeled by state-of-the-art models like OpenPose or DeepPose, and hint hashes will be generated from them with the convolutional network mentioned in 03.

Finally, the video stream will be started and processed by the model.

            08.Why "Two" sets of hints?

One human joint/keypoint observed from different angles will have a very different appearance. Instead of increasing the dimensionality of the neural network, I want to "cheat the game" by gathering two hints instead of one. I want to know whether this can increase the precision and generalization capacity of the model or not.

            The problems I faced:

            01.The "No gradients for any variable " error (My main question of this post):

            As mentioned above, I'm facing this error while training the model. I tried to learn from posts like this and this and this. But currently I have no clue even though I checked the computational graph.

            02.The "Batch" problem:

Due to its unique structure, it's hard to use a conventional placeholder to hold multi-batch input data. I fixed it by setting the batch number to 3 and manually combining the values of the loss functions.
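One way to picture that workaround: with the batch size fixed at 3, each sample gets its own placeholder and the per-sample losses are combined by hand. This is a TF 1.x sketch, not the poster's code; per_sample_loss is a hypothetical stand-in for the model's loss computation:

    import tensorflow as tf

    def per_sample_loss(target):
        """Hypothetical stand-in for the model's loss on one sample."""
        return tf.reduce_mean(tf.square(target))

    # One placeholder per sample instead of a single batched placeholder.
    targets = [tf.placeholder(tf.float32, [1, 301, 301, 3], name='target_%d' % i)
               for i in range(3)]
    losses = [per_sample_loss(t) for t in targets]
    total_loss = tf.add_n(losses) / 3.0  # losses combined manually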

            2018.10.28 Edit:

            The simplified version with only one hint set:

            ...

            ANSWER

            Answered 2018-Nov-04 at 07:16

            I used "eager execution" described in https://www.tensorflow.org/guide/eager to check the gradient.

In the end I found that "tf.round" and "tf.nn.relu6" will erase the gradient or set it to zero.
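The effect is easy to reproduce with eager execution; the small check below (an illustration, not the poster's code) shows that nothing behind tf.round receives a gradient:

    import tensorflow as tf
    tf.enable_eager_execution()  # TF 1.x; eager is already the default in TF 2.x

    x = tf.constant([0.3, 0.7, 1.2])
    with tf.GradientTape() as tape:
        tape.watch(x)
        y = tf.reduce_sum(tf.round(x))  # rounding is piecewise constant
    print(tape.gradient(y, x))  # None: tf.round cuts the gradient path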

I made some modifications to the code and now I can enter the training phase:

            Source https://stackoverflow.com/questions/52961507

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install deeppose

            You can download it from GitHub.
You can use deeppose like any standard Python library. You will need a development environment consisting of a Python distribution including header files, a compiler, pip, and git. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE
          • HTTPS

            https://github.com/mitmul/deeppose.git

          • CLI

            gh repo clone mitmul/deeppose

          • sshUrl

            git@github.com:mitmul/deeppose.git
