gesture-recognition | Recognizing "Hand Gestures" using OpenCV and Python | Machine Learning library
kandi X-RAY | gesture-recognition Summary
Recognizing "Hand Gestures" using OpenCV and Python.
Top functions reviewed by kandi - BETA
- Count the number of contours in the image.
- Compute the region of the foreground image.
- Compute the weighted average of an image (see the sketch below).
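These helpers correspond to the classic running-average background-subtraction pipeline for hand segmentation. A minimal sketch of how such functions are commonly written (names and parameters here are illustrative, not necessarily the repository's own):

```python
import cv2
import numpy as np

def run_avg(gray, background, accum_weight=0.5):
    """Fold the current grayscale frame into the running background model.

    `background` must be float32, e.g. gray.copy().astype("float32").
    """
    cv2.accumulateWeighted(gray, background, accum_weight)

def segment(gray, background, threshold=25):
    """Diff the frame against the background and return the hand contour."""
    diff = cv2.absdiff(background.astype("uint8"), gray)
    _, thresholded = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)
    # [-2:] keeps this working across OpenCV 3.x and 4.x return signatures
    contours, _ = cv2.findContours(thresholded.copy(), cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)[-2:]
    if not contours:
        return None
    # Assume the hand is the largest foreground region
    return thresholded, max(contours, key=cv2.contourArea)
```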
gesture-recognition Key Features
gesture-recognition Examples and Code Snippets
Community Discussions
Trending Discussions on gesture-recognition
QUESTION
I am trying to write my own neural network to detect certain hand gestures, following the code found at https://www.kaggle.com/benenharrington/hand-gesture-recognition-database-with-cnn/execution.
...ANSWER
Answered 2021-May-02 at 05:38

The problem here is the output labels. You didn't specify what data you used, but the error comes from the number of labels in the output layer. It's a simple fix: change the output size from 10 to 4.
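For instance, in a Keras model like the one in that notebook, the final Dense layer must have one unit per gesture class. A minimal sketch, assuming four classes and an illustrative input shape (not the notebook's exact architecture):

```python
from tensorflow.keras import layers, models

NUM_CLASSES = 4  # must match the number of gesture labels in your data (was 10)

model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation="relu", input_shape=(64, 64, 1)),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # output size = label count
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```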
QUESTION
I know this sounds stupid and I'm probably very late to the party, but here's the thing: I want to program a gesture recognition application (along the lines of this hand detection or this actual finger detection) for the Xbox 360 Kinect. The SDK (version 1.8) is found, installed and works, and the preliminary research is done - I only forgot to check which language to write the code in. The link from the SDK to the documentation would be the first place to look, but it is a dead end, unfortunately.

From the provided examples it seems to be either C++ or C#, although some old posts also claim Java. My question is: is there documentation not tied to the SDK, and which pitfalls are there with regard to developing for this specific case under C++/C#/Java? A post from 2011 barely covers the beginning.

Addendum: On looking further I was pointed to the Samples site from the developer toolkit - it can be reached, yet all the listed and linked examples are dead ends too.

Addendum: For reference, I used this instruction - which ultimately proved futile.

Found a version of NiTE here.
...ANSWER
Answered 2021-Jan-19 at 22:29

I've provided this answer in the past.

Personally, I've used the Xbox 360 sensor with OpenNI the most (because it's cross-platform). The NITE middleware alongside OpenNI also provides some basic hand detection and even gesture detection (swipes, the circle gesture, "button" push, etc.).

While OpenNI is open source, NITE isn't, so you'd be limited to what they provide.

The links you've shared use OpenCV. You can install OpenNI and compile OpenCV from source with OpenNI support. Alternatively, you can manually wrap the OpenNI frame data into an OpenCV cv::Mat and carry on with the OpenCV operations from there.

Here's a basic example that uses OpenNI to get the depth data and passes it to OpenCV:
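The example itself is elided here; below is a minimal Python sketch in the same spirit, assuming the OpenNI2 bindings from the primesense package (pip install primesense) and an OpenNI2 runtime are installed:

```python
import cv2
import numpy as np
from primesense import openni2  # pip install primesense

openni2.initialize()  # finds the OpenNI2 redistributable on the library path
dev = openni2.Device.open_any()
depth_stream = dev.create_depth_stream()
depth_stream.start()

try:
    while True:
        frame = depth_stream.read_frame()
        # Wrap the raw 16-bit depth buffer in a NumPy array OpenCV can use
        depth = np.frombuffer(frame.get_buffer_as_uint16(), dtype=np.uint16)
        depth = depth.reshape((frame.height, frame.width))
        # Rescale to 8 bits purely for display purposes
        vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        cv2.imshow("depth", vis)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
finally:
    depth_stream.stop()
    openni2.unload()
```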
QUESTION
I am working on a dataset that is a collection of 5 hand-made letters. I've uploaded the dataset to Kaggle; if anyone wants to give it a look, please do.
https://www.kaggle.com/shayanriyaz/gesture-recognition
Currently, I've trained and tested several models but I keep getting 100% accuracy.
Here's my code.
...ANSWER
Answered 2020-Feb-25 at 20:57

There is nothing wrong with your model; this is just a trivial problem for the models to solve. These letters look nothing alike when you consider all of the features you have. If you had chosen all of the letters, or ones that all look alike, you might see some error.

Rerun the model using only index_pitch and index_roll. You will still get around 95% AUC. At least by doing that you can guess that the only loss comes from B, D, and K, which, judging from an image of what those signs look like, are the only three that could remotely be confused if you only looked at the index finger. This turns out to be the case.

It's simply a problem that, given your data set, is actually solvable.
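A quick way to verify this, sketched with scikit-learn; the file name and label column are assumptions about the dataset's schema, while index_pitch and index_roll are the two features named above:

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

df = pd.read_csv("gestures.csv")        # hypothetical export of the dataset
X = df[["index_pitch", "index_roll"]]   # index-finger features only
y = df["letter"]                        # hypothetical label column

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("5-fold accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

If the score stays that high on two features alone, the classes are close to trivially separable, which matches the 100% accuracy you're seeing.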
QUESTION
I'm working with OpenCV on hand detection, but I'm struggling when trying to find contours of the thresholded image. findContours will always treat white areas as contours. So basically it works in most cases, but sometimes my thresholded image looks like this:
_, threshed = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY|cv2.THRESH_OTSU)
So to make it work, I just need to change the threshold type to cv2.THRESH_BINARY_INV:
_, threshed = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY_INV|cv2.THRESH_OTSU)
And it works well.
My question is: how can I determine when the threshold needs to be inverted? Do I always need to find contours on both thresholded images and compare the results (and in that case, how?), or is there a way to always know that the contours have not been missed entirely?

EDIT: Is there a way to be 100% sure the contour looks like a hand?

EDIT 2: I forgot to mention that I'm trying to detect fingertips and defects using this method, so I need the defects, which I can't find with the first thresholded image because it is inverted. See the blue points on the first contour image.
Thanks.
...ANSWER
Answered 2019-Jul-10 at 12:37

You can write a utility method to detect the most dominant color along the border and then decide from that whether to invert the image. The flow may look like:

- Use the Otsu binarization method.
- Pass the thresholded image to the utility method get_most_dominant_border_color and get the dominant color.
- If the border color is WHITE, invert the image using cv2.bitwise_not; otherwise keep it as it is.

The get_most_dominant_border_color could be defined as:
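One plausible definition, assuming a single-channel binary image (the answer's original code is not reproduced here):

```python
import numpy as np

WHITE, BLACK = 255, 0

def get_most_dominant_border_color(binary_img, border=2):
    """Return WHITE or BLACK, whichever dominates the image border."""
    # Gather the pixels in a thin band along all four edges
    top = binary_img[:border, :].ravel()
    bottom = binary_img[-border:, :].ravel()
    left = binary_img[:, :border].ravel()
    right = binary_img[:, -border:].ravel()
    band = np.concatenate([top, bottom, left, right])
    return WHITE if np.count_nonzero(band) > band.size // 2 else BLACK
```

With that in place, the check becomes a one-liner: if the function returns WHITE, apply cv2.bitwise_not to the thresholded image before looking for contours.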
QUESTION
I'm trying to get the following script to work. I found it on this website, but I keep getting the error:

contours, hierarchy = cv2.findContours(thresh1,cv2.RETR_TREE,cv2.CHAIN_APPROX_SIMPLE) ValueError: too many values to unpack (expected 2)

I have little experience in Python and OpenCV, so any help here would be much appreciated! I'm running Mac OS X 10.13.4, OpenCV 3.4.1, and Python 3.6.5.

Here is the script I'm trying to get to work:
...ANSWER
Answered 2018-Dec-07 at 03:02

The return signature of cv2.findContours changed in OpenCV 3.x, and changed back in OpenCV 4.x. Try this:
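The answer's snippet itself is elided above; a sketch of the usual version-robust fix, relying on the contours and hierarchy always being the last two return values:

```python
import cv2

img = cv2.imread("hand.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input image
_, thresh1 = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)

# OpenCV 3.x returns (image, contours, hierarchy); 2.x and 4.x return
# (contours, hierarchy). Taking the last two values works on all versions.
contours, hierarchy = cv2.findContours(
    thresh1, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)[-2:]
```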
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install gesture-recognition
You can use gesture-recognition like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.
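For example, a typical environment setup might look like this (the clone URL is a placeholder, and opencv-python/numpy are assumed dependencies):

```sh
# Create and activate an isolated environment
python3 -m venv .venv
source .venv/bin/activate

# Keep the packaging toolchain current
python -m pip install --upgrade pip setuptools wheel

# Grab the library and its assumed dependencies (URL is a placeholder)
git clone https://github.com/<user>/gesture-recognition.git
cd gesture-recognition
pip install opencv-python numpy
```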