mmaction2 | OpenMMLab's Next Generation Video Understanding Toolbox | Video Utils library
kandi X-RAY | mmaction2 Summary
MMAction2 is an open-source toolbox for video understanding based on PyTorch. It is a part of the OpenMMLab project. The master branch works with PyTorch 1.3+.
Top functions reviewed by kandi - BETA
- Train a model
- Build a dataloader
- Build an MMDistributedDataParallel object
- Build a dataset from a config
- Wrapper for inference_recognizer (see the usage sketch after this list)
- Parse command line arguments
- Add a parser to subparsers
- Add the parser for the time parser
- Wrapper for skeleton_stdetection
- Build a model
- Generate a list of training frames
- Run gendata
- Parse an HMDB51 split into a dict
- Calculate RGB STDetector
- Get output from a video
- Parse phonetic splits
- Visualize a video
- Parse requirements from a requirements file
- Build the RGB and Flow file list
- Add a single detected image
- Show action recognition results
- Evaluate against the ground truth
- Generate a list of POSIX processes
- Convert a list of lines to a dict
- Extract dense flow from a video
- Forward computation
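Several of the entry points above, such as the inference_recognizer wrapper and the model builder, belong to MMAction2's Python API. The following is a minimal usage sketch, assuming the 0.x-style mmaction.apis interface; the config and checkpoint paths are placeholders, and the exact signature and return format of inference_recognizer vary slightly between versions.
from mmaction.apis import init_recognizer, inference_recognizer

# Placeholder paths; substitute a real config and checkpoint from the model zoo
config_file = 'configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py'
checkpoint_file = 'checkpoints/tsn_r50.pth'

# Build the recognizer and load its weights (use device='cpu' if no GPU is available)
model = init_recognizer(config_file, checkpoint_file, device='cuda:0')

# Run recognition on a single video; the return format differs between versions
results = inference_recognizer(model, 'demo/demo.mp4')
print(results)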
mmaction2 Key Features
mmaction2 Examples and Code Snippets
import ffmpeg
import numpy as np

video_file = 'filename.webm'

# Probe the video for its frame dimensions
probe = ffmpeg.probe(video_file)
stream_dict = probe['streams'][0]
width, height = stream_dict['width'], stream_dict['height']

# Decode the video to raw RGB frames and load them into a numpy array
out, _ = (
    ffmpeg
    .input(video_file)
    .output('pipe:', format='rawvideo', pix_fmt='rgb24')
    .run(capture_stdout=True)
)
video = np.frombuffer(out, np.uint8).reshape([-1, height, width, 3])
conda create -n mmaction python=3.7 -y
conda activate mmaction
conda install pytorch=1.7.1 cudatoolkit=11.0 torchvision=0.8.2 -c pytorch
pip install mmcv-full==1.2.1 -f https://download.openmmlab.com/mmcv/dist/cu110/torch1.7.1/index.html
git clone https://github.com/open-mmlab/mmaction2.git
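After installation, a quick sanity check in Python can confirm that the environment matches the commands above. This is a minimal sketch; the expected version numbers are taken from the install commands, not from any other source.
# Quick post-install sanity check
import torch
import mmcv

print(torch.__version__)          # expected: 1.7.1
print(torch.cuda.is_available())  # True if the cudatoolkit 11.0 build can see a GPU
print(mmcv.__version__)           # expected: 1.2.1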
@misc{feichtenhofer2020x3d,
title={X3D: Expanding Architectures for Efficient Video Recognition},
author={Christoph Feichtenhofer},
year={2020},
eprint={2004.04730},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
Community Discussions
Trending Discussions on mmaction2
QUESTION
The following error and solution apply to deploying a stack through YAML in Portainer, but they can surely be applied to Docker in general.
Environment:
...ANSWER
Answered 2021-Apr-13 at 05:55
It seems that, by default, the size of the shared memory is limited to 64 MB. The solution to this error, as shown in this issue, is therefore to increase the size of the shared memory.
Hence, the first idea that comes to mind would be simply defining something like shm_size: 9gb in the YAML file of the stack. However, this might not work, as shown e.g. in this issue.
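Before changing the stack definition, it can help to confirm how much shared memory the container actually has. The following is a minimal Python sketch, assuming a Linux container where /dev/shm is the shared-memory mount; it is not the workaround from the answer, only a way to verify the 64 MB default.
# Check the shared-memory size available inside the container.
# PyTorch DataLoader workers place tensors in /dev/shm, so a 64 MiB default is easily exhausted.
import shutil

total, used, free = shutil.disk_usage('/dev/shm')
print(f'/dev/shm: total {total / 2**20:.0f} MiB, free {free / 2**20:.0f} MiB')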
Therefore, in the end, I had to use the following workaround (also described here, but poorly documented):
QUESTION
I am referring to this repo to adapt the mmaction2 Grad-CAM demo from short-video offline inference to long-video online inference. The script is shown below:
Note: to make this script easy to reproduce, I commented out some code that needs many dependencies.
...ANSWER
Answered 2021-Jan-04 at 09:50
If you use run_in_executor, the target function should not be async. You need to remove the async keyword before def inference().
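A minimal sketch of the suggested fix follows; the inference function and its frames argument are placeholders standing in for the blocking Grad-CAM call from the question's script.
import asyncio

def inference(frames):
    # Placeholder for the blocking Grad-CAM / recognizer call; note it is a plain
    # function, not `async def`, so it can be handed to run_in_executor.
    return len(frames)

async def main():
    loop = asyncio.get_running_loop()
    # The blocking call runs in the default thread pool, keeping the event loop responsive.
    result = await loop.run_in_executor(None, inference, ['frame_0', 'frame_1'])
    print(result)

asyncio.run(main())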
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install mmaction2
Please see getting_started.md for the basic usage of MMAction2. There are also tutorials, listed below; a short config-loading sketch follows the list. A Colab tutorial is also provided: you may preview the notebook here or run it directly on Colab.
learn about configs
finetuning models
adding new dataset
designing data pipeline
adding new modules
exporting model to onnx
customizing runtime settings
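The configuration tutorial revolves around mmcv-style config files. The following is a minimal sketch of loading and overriding one in code; the config path is a placeholder, and the overridden field is only an illustrative example.
from mmcv import Config

# Placeholder path; any config file under configs/ in the repository works
cfg = Config.fromfile('configs/recognition/tsn/tsn_r50_video_inference_1x1x3_100e_kinetics400_rgb.py')

# Configs behave like nested attribute dictionaries and can be inspected or overridden in code
print(cfg.model.type)         # e.g. 'Recognizer2D' for TSN-style models
cfg.gpu_ids = range(1)        # example of overriding a runtime setting before training
print(cfg.pretty_text[:300])  # dump the start of the resolved config text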