GestureAI-CoreML-iOS | Hand-gesture recognition on iOS app using CoreML | Machine Learning library

 by akimach | Swift | Version: Current | License: MIT

kandi X-RAY | GestureAI-CoreML-iOS Summary

GestureAI-CoreML-iOS is a Swift library typically used in Artificial Intelligence, Machine Learning, Deep Learning, TensorFlow, and Keras applications. GestureAI-CoreML-iOS has no reported bugs or vulnerabilities, has a permissive license, and has low support. You can download it from GitHub.

Hand-gesture recognition on iOS app using CoreML

            Support

              GestureAI-CoreML-iOS has a low active ecosystem.
              It has 145 stars, 17 forks, and 6 watchers.
              It had no major release in the last 6 months.
              GestureAI-CoreML-iOS has no issues reported. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of GestureAI-CoreML-iOS is current.

            Quality

              GestureAI-CoreML-iOS has no bugs reported.

            Security

              GestureAI-CoreML-iOS has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

            License

              GestureAI-CoreML-iOS is licensed under the MIT License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              GestureAI-CoreML-iOS releases are not available. You will need to build from source code and install.
              Installation instructions are available. Examples and code snippets are not available.


            GestureAI-CoreML-iOS Key Features

            No Key Features are available at this moment for GestureAI-CoreML-iOS.

            GestureAI-CoreML-iOS Examples and Code Snippets

            No Code Snippets are available at this moment for GestureAI-CoreML-iOS.

            Community Discussions

            Trending Discussions on GestureAI-CoreML-iOS

            QUESTION

            Creating a CoreML LRCN model
            Asked 2018-Jan-29 at 10:32

            Hello, and thank you in advance for any help or guidance provided!

            The question I have stems from an article posted on Apple's CoreML documentation site. The topic of this article was also covered during the WWDC 2017 lectures and I found it quite interesting. I posted a question recently that was related to part of this same project I'm working on and it was solved with ease; however, as I get further into this endeavor, I find myself not understanding how part of this model is being implemented.

            To start off, I have a model I'm building in Keras with a TensorFlow backend that uses convolutional layers in a TimeDistributed wrapper. Following the convolutional section, a single LSTM layer connects to a dense layer as the output. The goal is to create a many-to-many structure that classifies each item in a padded sequence of images. I'll post the code for the model below.
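
            A minimal sketch of that kind of architecture, for reference (the layer sizes, sequence length, and class count below are illustrative assumptions, not the actual code):

                from keras.models import Sequential
                from keras.layers import (TimeDistributed, Conv2D, MaxPooling2D,
                                          Flatten, LSTM, Dense)

                SEQ_LEN, H, W, C = 10, 64, 64, 1  # assumed: padded sequence of 10 grayscale frames
                NUM_CLASSES = 5                   # assumed number of gesture classes

                model = Sequential()
                # Convolutional feature extractor applied to every frame in the sequence
                model.add(TimeDistributed(Conv2D(16, (3, 3), activation='relu'),
                                          input_shape=(SEQ_LEN, H, W, C)))
                model.add(TimeDistributed(MaxPooling2D(pool_size=(2, 2))))
                model.add(TimeDistributed(Flatten()))
                # return_sequences=True yields one output per time step (many-to-many)
                model.add(LSTM(64, return_sequences=True))
                # Per-time-step classification over the gesture classes
                model.add(TimeDistributed(Dense(NUM_CLASSES, activation='softmax')))
                model.compile(optimizer='adam', loss='categorical_crossentropy')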

            My plan to train and deploy this network may raise other questions down the road, but I will make a separate post if they cause trouble. It relates to training with the TimeDistributed wrapper, then stripping it off the model and loading the weights for the wrapped layers at CoreML conversion time, as the TimeDistributed wrapper doesn't play well with CoreML.

            My question is this:

            In the aforementioned article (and in a CoreML example project I found on GitHub), the implementation is quite clever. Since CoreML (or at least the stock converter) doesn't support image sequences as inputs, the images are fed one at a time, and the LSTM states are passed out of the network as an output along with the prediction for the input image. For the next image in the sequence, the user passes the image along with the previous time step's LSTM state, so the model can "pick up where it left off" so to speak and handle the single inputs as a sequence. It sort of forms a loop for the LSTM state (this is covered in further detail in the Apple article). Now, for the actual question part...
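
            In code, that per-frame state loop might look like the following sketch, using coremltools' predict (macOS only) for illustration. All feature names here (image, lstm_1_h_in, classProbs, and so on) are assumptions; the converter derives the real names from the model's layer names, so check the converted model's spec:

                import numpy as np
                import coremltools

                # Hypothetical model file; the feature names below are assumptions
                mlmodel = coremltools.models.MLModel('GestureModel.mlmodel')

                UNITS = 64              # assumed LSTM width
                h = np.zeros((UNITS,))  # hidden state, zeroed at the start of a sequence
                c = np.zeros((UNITS,))  # cell state

                frames = [np.zeros((1, 64, 64)) for _ in range(10)]  # placeholder preprocessed frames

                for frame in frames:
                    # Feed the frame together with the previous step's states
                    out = mlmodel.predict({'image': frame,
                                           'lstm_1_h_in': h,
                                           'lstm_1_c_in': c})
                    # Loop the updated states back in for the next frame
                    h = out['lstm_1_h_out']
                    c = out['lstm_1_c_out']
                    print(out['classProbs'])  # per-frame class probabilities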

            How is this implemented in a library like Keras? So far I have been successful at outputting the LSTM state using the functional API and the "return_state" setting on the LSTM layer, and routing that to a secondary output. Pretty simple. Not so simple (at least for me) is how to pass that state back INTO the network for the next prediction. I've looked over the source code and documentation for the LSTM layer and I don't see anything that jumps out as an input for the state. The only thing I can think of is to possibly make the LSTM layer its own model and use "initial_state" to set it, but based on a post I found on the Keras GitHub, it seems the model would then need a custom call function, and I'm not sure how to work that into CoreML. Just FYI, I am planning to loop both the hidden and cell states in and out of the model, unless that isn't necessary and only the hidden states should be used, as is shown in Apple's model.
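
            For reference, a sketch of how return_state and initial_state fit together in the functional API, with assumed sizes throughout; the states become plain model inputs and outputs, so no custom call function is involved:

                from keras.layers import Input, LSTM, Dense
                from keras.models import Model

                UNITS, FEATS, NUM_CLASSES = 64, 128, 5  # assumed sizes

                x_in = Input(shape=(1, FEATS))  # a single time step per call
                h_in = Input(shape=(UNITS,))    # previous hidden state
                c_in = Input(shape=(UNITS,))    # previous cell state

                # return_state=True returns [output, h, c]; initial_state seeds the
                # layer with the states carried over from the previous call
                lstm_out, h_out, c_out = LSTM(UNITS, return_state=True)(
                    x_in, initial_state=[h_in, c_in])
                probs = Dense(NUM_CLASSES, activation='softmax')(lstm_out)

                # Both states are routed out as regular outputs, to be looped externally
                model = Model(inputs=[x_in, h_in, c_in], outputs=[probs, h_out, c_out])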

            Thanks once again. Any help provided is always appreciated!

            My current model looks like this:

            ...

            ANSWER

            Answered 2018-Jan-29 at 10:32

            It turns out the coremltools converter will automatically add the state inputs and outputs during conversion.

            See the coremltools Keras converter, _topology.py line 215, for reference.
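
            In other words, a plain conversion call is enough. A sketch using the Keras converter API that coremltools shipped at the time (the input/output names here are assumptions):

                import coremltools

                # `model` is the trained Keras model (after any TimeDistributed surgery)
                coreml_model = coremltools.converters.keras.convert(
                    model,
                    input_names=['image'],
                    output_names=['classProbs'])

                # The printed interface shows the automatically added state features,
                # e.g. *_h_in/*_c_in inputs and *_h_out/*_c_out outputs for the LSTM
                print(coreml_model.get_spec().description)
                coreml_model.save('GestureModel.mlmodel')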

            Source https://stackoverflow.com/questions/48492613

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install GestureAI-CoreML-iOS

            1. Clone this repository.
            2. Download GestureAI.mlmodel (the trained RNN model) from here.
            3. Open GestureAI.xcodeproj.
            4. Drag and drop GestureAI.mlmodel into Xcode.
            5. Add GestureAI.mlmodel to Compile Sources in Build Phases.
            6. Build and run.

            Support

            For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            CLONE

          • HTTPS: https://github.com/akimach/GestureAI-CoreML-iOS.git

          • CLI: gh repo clone akimach/GestureAI-CoreML-iOS

          • SSH: git@github.com:akimach/GestureAI-CoreML-iOS.git
