keras-video-classifier | Keras implementation of video classifier | Machine Learning library
kandi X-RAY | keras-video-classifier Summary
Keras implementation of video classifier
Top functions reviewed by kandi - BETA
- Fit the VGG16 model
- Compile the model
- Returns the path to the architecture
- Returns the path to the config file
- Fit the convolutional model
- Scans and generates videos for conv2d
- Create a CNN model
- Extract videos from Conv2D
- Load the UCF dataset
- Download the UCF dataset from the internet
- Plot the training history and save it
- Creates a plot showing the accuracy and validation loss
- Scans the input directory and extracts features
- Extract features from video
- Scans the input directory and extracts images
- Extract images from a video
- Load the model
- Scan the UCF dataset with the given labels
- Predict class label
- Predict the class of the given video
- Loads the model
- Scan for conv2d
keras-video-classifier Key Features
keras-video-classifier Examples and Code Snippets
Community Discussions
Trending Discussions on keras-video-classifier
QUESTION
TensorFlow.js version used
- tensorflow 1.12.0
- tensorflow-base 1.12.0
- tensorflow-gpu 1.12.0
- tensorflow-hub 0.2.0
- tensorflowjs 0.8.0
Browser version used
- Firefox 65.0 (64-bit) on Windows 10
- Microsoft Edge 42.17134.1.0 on Windows 10
Problem Description
I have created and trained a Keras-based bidirectional LSTM model in Python to classify videos. The model works well and classifies videos with over 90% accuracy. But when I converted this model to a TensorFlow.js model using the tensorflowjs_converter tool and used it in the browser, the model always produces the same output (top 3 results) for any video input: BasketballDunk, prob. 0.860; BalanceBeam, prob. 0.088; BodyWeightSquats, prob. 0.024.
I have checked all the inputs given to the bidirectional LSTM model, their shapes, etc., and can't find any issues. Yet the inference from the bidirectional LSTM model is always the same irrespective of the video input. I have ensured that every individual video frame sent to the LSTM model as part of a sequence is correct: I used a MobileNet model to recognize each frame, and it does so correctly, so I conclude that the frames sent to the LSTM are fine (a minimal sketch of this check follows). Please help me identify the issue and fix it. All the required details are below.
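A minimal sketch of such a per-frame sanity check, assuming the published @tensorflow-models/mobilenet wrapper; the checkFrames helper and its inputs are illustrative, not the asker's actual code:

```typescript
import * as tf from '@tensorflow/tfjs';
import * as mobilenet from '@tensorflow-models/mobilenet';

// Classify each frame independently and log the top ImageNet label, to
// confirm the frames themselves reach the model intact before blaming
// the downstream LSTM.
async function checkFrames(frames: tf.Tensor3D[]): Promise<void> {
  const net = await mobilenet.load();
  for (const [i, frame] of frames.entries()) {
    const preds = await net.classify(frame); // [{className, probability}, ...]
    console.log(`frame ${i}: ${preds[0].className} (p=${preds[0].probability.toFixed(3)})`);
  }
}
```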
(The entire model is based on the examples given in this GitHub repository by Xianshun Chen (chen0040): https://github.com/chen0040/keras-video-classifier)
Details of the model:
- uses a MobileNet model to extract features
- uses a bidirectional LSTM model to take in the extracted features and classify the video as one of 20 classes, as sketched below
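A minimal sketch of this architecture using the TensorFlow.js layers API; the sequence length, feature size, and LSTM units below are illustrative assumptions, not the actual training configuration:

```typescript
import * as tf from '@tensorflow/tfjs';

const SEQ_LEN = 30;       // frames sampled per video (assumed)
const FEATURE_DIM = 1024; // MobileNet feature vector size (assumed)
const NUM_CLASSES = 20;   // one of 20 UCF101 classes, per the description

// Bidirectional LSTM over per-frame MobileNet features -> softmax classes.
function buildClassifier(): tf.LayersModel {
  const model = tf.sequential();
  model.add(tf.layers.bidirectional({
    layer: tf.layers.lstm({units: 64}),
    inputShape: [SEQ_LEN, FEATURE_DIM],
  }));
  model.add(tf.layers.dense({units: NUM_CLASSES, activation: 'softmax'}));
  model.compile({
    optimizer: 'adam',
    loss: 'categoricalCrossentropy',
    metrics: ['accuracy'],
  });
  return model;
}
```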
Dataset used:
- UCF101 - Action Recognition Data Set (http://crcv.ucf.edu/data/UCF101.php)
TensorFlow.js converted model:
- the converted TensorFlow.js model, sample videos, and an HTML test file are all in this Drive location as a zip file: https://drive.google.com/open?id=1k_4xOPlTdbUJCBPFyT9zmdB3W5lYfuw0
- to test the model, just unzip, build using 'yarn', and run using 'yarn watch'
- index.html has the instructions to test
NOTE: I have tried a unidirectional LSTM model, and the same issue occurs with that converted model as well. The only difference is that it produces 'Billiards' as the top prediction, with probability over 0.95.
Code to reproduce the issue: code and test artifacts are in a zip file at this Drive location - https://drive.google.com/open?id=1k_4xOPlTdbUJCBPFyT9zmdB3W5lYfuw0
...ANSWER
Answered 2019-Feb-21 at 13:19
Found out the reasons for the tfjs-converted model not producing correct inference...at last :)
Reasons:
- The input to the LSTM model contained NaN values. Although I was passing the features extracted by the MobileNet model to the LSTM, .dataSync() was not called on the feature tensors. Because of this, when I added the extracted features into a tf.buffer they were stored as NaN. (Strangely, when I printed the values to the log just before adding them to the buffer, they printed correctly.) Once I used dataSync() on the extracted features, they were added to the tf.buffer correctly.
- Rather than using tf.buffer() to store the extracted features (from MobileNet) and converting them to tensors before passing them to the LSTM model, I used tf.stack() to collect the extracted features and then passed the stacked tensor to the LSTM model; see the sketch after this list. (I understand that tf.stack() does the equivalent of np.array().)
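A minimal sketch of the working pattern described above; the classifyVideo helper and the two model handles are assumptions for illustration, not the actual code:

```typescript
import * as tf from '@tensorflow/tfjs';

// Instead of copying values into a tf.buffer (where a missing dataSync()
// silently produced NaN), keep per-frame features as tensors and stack them.
function classifyVideo(frames: tf.Tensor3D[],
                       featureModel: tf.LayersModel, // MobileNet feature extractor
                       lstmModel: tf.LayersModel): tf.Tensor {
  return tf.tidy(() => {
    // One feature vector per frame; assumed extractor output shape [1, d].
    const features = frames.map(f =>
        (featureModel.predict(f.expandDims(0)) as tf.Tensor).squeeze());
    // tf.stack() plays the role of np.array(): a list of [d] tensors
    // becomes one [seqLen, d] tensor; expandDims adds the batch axis.
    const sequence = tf.stack(features).expandDims(0); // [1, seqLen, d]
    return lstmModel.predict(sequence) as tf.Tensor;   // [1, numClasses]
  });
}
```

Alternatively, per the first reason above, the tf.buffer() approach also works as long as dataSync() is called on each feature tensor before its values are written into the buffer.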
Hope these inputs help someone.
Regards, Jay
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported
Install keras-video-classifier
You can use keras-video-classifier like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.