Kinect | Kinect Application Framework including some demo apps
kandi X-RAY | Kinect Summary
#Atos Origin Kinect
This Kinect application framework and its demo apps were made by Marco Franssen and Jan Saris. During competence development at their company, Atos Origin, they started in February 2011 by creating some apps and a framework on top of Microsoft Kinect. They began with an implementation in C#/WPF using the PrimeSense drivers and the OpenNI SDK.
#Goal
Our goal is to create a framework for developers on top of Kinect. This framework should provide easy access to gestures and similar features. All the demo apps we created to test our framework are included. We would love it if you helped extend and improve the framework. Don't forget to add your own demo apps so you can show off all the cool stuff you've made to the other developers participating.
Community Discussions
Trending Discussions on Kinect
QUESTION
I am working on a project that aims to interpret some parts of the finger alphabet. I am using a Kinect v1 to this end and these two projects: Lightbuzz.Vitruvius provides the main functionality, and Lightbuzz.Vitruvius.Fingertracking the ability to detect fingertips (at least so far in theory).
Plugging these two together was more tedious than challenging, but the real challenge comes from an EventHandler. You see, there is a HandController class where the fingers are detected and displayed. In this class is an EventHandler that kicks off the detection.
Having appropriately changed everything, Rider now reports this error (I am using .NET Framework 4.0):
ANSWER
Answered 2022-Mar-29 at 13:02
As the error suggests, your HandCollection class cannot be used as a parameter in the event-handler delegate. You might have your reasons for using .NET 4.0, so try making the class inherit from System.EventArgs and see if that works. Otherwise I would suggest switching frameworks.
QUESTION
I'm building an application that uses the webcam to control video games (kinda like a Kinect). It uses the webcam (cv2.VideoCapture(0)), AI pose estimation (MediaPipe), and custom logic to pipe inputs into the Dolphin emulator.
The issue is the latency. I've used my phone's high-speed camera to record myself snapping and found a latency of around 32 frames (~133 ms) between my hand and the frame on screen. This is before any additional code, just a loop with a video read and cv2.imshow (about 15 ms).
Is there any way to decrease this latency?
I'm already grabbing the frame in a separate Thread, setting CAP_PROP_BUFFERSIZE to 0, and lowering the CAP_PROP_FRAME_HEIGHT and CAP_PROP_FRAME_WIDTH, but I still get ~133ms of latency. Is there anything else I can be doing?
Here's my code below:
...ANSWER
Answered 2022-Jan-06 at 13:53
The experience you describe is a vivid example of how accumulated latencies can ruin any chance of keeping a control loop tight enough to actually control something in a meaningfully stable way. In a man-to-machine-interface system we wish to keep the loop:
User's-motion | CAM-capture | IMG-processing | GUI-show | User's-visual-cortex-scene-capture | User's decision+action | loop
A real-world profiling of an OpenCV pipeline shows how much time is spent in the respective acquisition, storage, transformation, post-processing and GUI phases.
What latency-causing steps do we work with? Forgive, for a moment, a raw sketch of where we accumulate each of the particular latency-related costs:
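The answer's original breakdown is not reproduced here; as a rough stand-in, here is a minimal sketch (camera index, resolution and stage names are illustrative, not from the original answer) that times each phase of the capture-process-show loop to see where the milliseconds accumulate:

```python
import time
import cv2

# Open the default webcam, mirroring the question's setup.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_BUFFERSIZE, 1)       # keep the driver-side frame queue short
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    t0 = time.perf_counter()
    ok, frame = cap.read()                # acquisition cost
    if not ok:
        break
    t1 = time.perf_counter()
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # stand-in for pose estimation
    t2 = time.perf_counter()
    cv2.imshow("frame", gray)             # GUI cost
    t3 = time.perf_counter()
    print(f"read {1e3*(t1-t0):5.1f} ms | process {1e3*(t2-t1):5.1f} ms | show {1e3*(t3-t2):5.1f} ms")
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```

Note that the biggest contributor is usually invisible to such timing: the exposure-to-delivery latency inside the camera and driver, which cap.read() only observes as an already-finished frame.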
QUESTION
I have a sequence of PNGs and corresponding depth files (aligned to the corresponding images) from an external camera.
RGB: 1.png 2.png 3.png etc 150.png
Depth: 1.txt 2.txt 3.txt etc 150.txt
I also have the intrinsics and corresponding camera information in another file called camera.txt.
My goal is to convert these images and depth files to an MKV file in order to utilize pykinect's body tracker (https://github.com/ibaiGorordo/pyKinectAzure).
So far, I've been able to convert the images and depth files into an Open3D RGBD object. See: http://www.open3d.org/docs/release/python_api/open3d.geometry.RGBDImage.html
I would think we need to run it through the Azure Kinect recorder (https://github.com/isl-org/Open3D/blob/0ec3e5b24551eaffa3c7708aae8630fde9b00e6c/examples/python/reconstruction_system/sensors/azure_kinect_recorder.py#L34), but this seems to open up the camera for additional input.
How can I save these RGBD images to an MKV file format to read into the pykinect reader?
...ANSWER
Answered 2021-Dec-16 at 07:58
Have you tried:
- converting the RGBD image into a NumPy array (http://www.open3d.org/docs/latest/tutorial/Basic/rgbd_image.html), and
- then converting the NumPy array to an MKV file, as in: NumPy array of a video changes from the original after writing into the same video
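A minimal sketch of that two-step path, assuming per-frame Open3D images on disk and an FFV1-capable OpenCV/FFmpeg build (file names, codec and frame rate are illustrative):

```python
import cv2
import numpy as np
import open3d as o3d

# Build an Open3D RGBD object from one color/depth pair (keep real colors).
color = o3d.io.read_image("1.png")
depth = o3d.io.read_image("1_depth.png")  # hypothetical: depth converted from 1.txt
rgbd = o3d.geometry.RGBDImage.create_from_color_and_depth(
    color, depth, convert_rgb_to_intensity=False)

# Open3D images convert to NumPy arrays directly.
rgb = np.asarray(rgbd.color)
h, w = rgb.shape[:2]

# Write the color frames into an MKV container via OpenCV's FFmpeg backend.
writer = cv2.VideoWriter("out.mkv", cv2.VideoWriter_fourcc(*"FFV1"), 30, (w, h))
for i in range(1, 151):
    frame = np.asarray(o3d.io.read_image(f"{i}.png"))
    writer.write(cv2.cvtColor(frame, cv2.COLOR_RGB2BGR))  # OpenCV expects BGR
writer.release()
```

Be aware this produces an ordinary video MKV; whether pykinect's playback accepts it depends on the track layout it expects from real Azure Kinect recordings.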
QUESTION
In the current commit, ManipulationStation uses DepthImageToPointCloud to project a point cloud from its color-image and depth-image inputs. However, the documentation states that
Note that if a color image is provided, it must be in the same frame as the depth image.
In my understanding, both the color and depth images come from an RgbdSensor created from the info in MakeD415CameraModel, and C and D are two different frames.
I think the resulting point cloud has wrong coloring. I tested it on a similar setup, though not MakeD415CameraModel exactly. I currently solved this issue by forcing C and D to be the same frame in MakeD415CameraModel.
My question: does Drake have a method that maps between depth and color images in different frames, similar to the Kinect library? Since this is a simulation after all, maybe this is overkill?
P.S. I am trying to simulate the image from Azure Kinect; hence the question.
...ANSWER
Answered 2021-Oct-30 at 00:18
The issue https://github.com/RobotLocomotion/drake/issues/12125 discusses this problem, and has either a work-around (in case your simulation can fudge it and have the depth and camera frames identical, even though that's slightly different from the real sensor) or a pointer to a different kind of solution (the cv2.registerDepth algorithm). Eventually it would be nice to have a registerDepth-like feature built into Drake, but as of today it's not there yet.
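A rough NumPy sketch of what such depth-to-color registration does (all names here are illustrative; cv2.registerDepth in the opencv-contrib rgbd module packages the same idea):

```python
import numpy as np

def register_depth(depth, K_d, K_c, X_CD, out_shape):
    """Reproject a depth image from depth camera D into color camera C.

    depth:     (H, W) array of depths in meters, from camera D
    K_d, K_c:  3x3 intrinsics of the depth and color cameras
    X_CD:      4x4 pose of frame D expressed in frame C (extrinsics)
    out_shape: (H, W) of the color image
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth.ravel()
    valid = z > 0
    # Back-project each valid depth pixel to a 3D point in D's frame.
    pix = np.vstack([u.ravel()[valid] * z[valid], v.ravel()[valid] * z[valid], z[valid]])
    pts_d = np.linalg.inv(K_d) @ pix
    # Transform the points into C's frame and project into the color image.
    pts_c = X_CD[:3, :3] @ pts_d + X_CD[:3, 3:4]
    uvw = K_c @ pts_c
    uc = np.round(uvw[0] / uvw[2]).astype(int)
    vc = np.round(uvw[1] / uvw[2]).astype(int)
    registered = np.zeros(out_shape, dtype=depth.dtype)
    ok = (uvw[2] > 0) & (uc >= 0) & (uc < out_shape[1]) & (vc >= 0) & (vc < out_shape[0])
    registered[vc[ok], uc[ok]] = pts_c[2, ok]  # nearest-pixel splat, no z-buffering
    return registered
```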
QUESTION
Hello everyone. For a university project I am working with the Kinect sensor to get a point cloud. To work with the Kinect I have installed the J4K library for Processing, but when I run an example sketch I get the following message. How can I solve this? Thank you all.
...ANSWER
Answered 2021-Jun-03 at 00:17
Which version of the Kinect are you using?
Based on the error, the assumption is you're planning to use a Kinect v2 (for Xbox One, with the Windows USB adaptor). If that's the case, you need to first install the Kinect for Windows SDK 2.0. Make sure the Kinect drivers are properly installed and that you can run the precompiled Kinect for Windows SDK 2.0 example applications.
What the error message isn't telling you is that ufdw_j4k2_64bit.dll is not loaded because it depends on Kinect20.dll (which it expects in C:\WINDOWS\System32\, where the SDK installer would place it).
If you're still having issues, you can try installing Thomas Lengeling's KinectPV2 library, which you can easily do via Sketch > Import Library > Add Library > (search) "Kinect v2 for Processing". It may not have the same features as the ufdw library, but the instructions are clear and you can definitely get a point cloud.
QUESTION
New to pykinect and Kinect in general, I'm trying to simply get a count of the bodies currently being tracked. No skeletal or joint data required; I just want a running count of the bodies currently in frame. I am using a Kinect v2 and pykinect2.
Being more specific, I'm trying to track how many bodies are in frame and the time elapsed since that value changed (0 people to 1 person, 1 person to 2, etc.). Due to the way the built-in pykinect examples loop, however, this has proven difficult. The latest attempt (now updated with the solved code):
...ANSWER
Answered 2021-May-13 at 07:28
I found a useful snippet that does what you need within one of the examples provided in the PyKinect2 GitHub repo. You need to get the body frame, and then count the number of tracked bodies:
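A minimal sketch of that approach, following the API usage in the PyKinect2 examples (Windows with the Kinect for Windows SDK 2.0 assumed; the change-detection print is illustrative):

```python
from pykinect2 import PyKinectV2, PyKinectRuntime

# Open the sensor with only the body frame source enabled.
kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Body)

tracked = 0
while True:
    if kinect.has_new_body_frame():
        bodies = kinect.get_last_body_frame()
        if bodies is not None:
            # The Kinect v2 exposes max_body_count (6) candidate slots;
            # count only the ones actually flagged as tracked.
            count = sum(1 for i in range(kinect.max_body_count)
                        if bodies.bodies[i].is_tracked)
            if count != tracked:
                print(f"tracked bodies: {tracked} -> {count}")
                tracked = count
```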
QUESTION
I am trying to calculate the distance from a person to the Kinect sensor v2 in UWP C#.
In WPF, I was getting this by
...ANSWER
Answered 2021-Mar-01 at 22:22
I used face detection to get the face coordinates from the colour image, then mapped those coordinates onto the IR and depth frames. This way, I found the x and y coordinates of the face in the depth frame.
QUESTION
I am processing frames received from a Kinect v2 (Color and IR) in UWP. The program runs on a remote machine (Xbox One S). The main goal is to get the frames and write them to disk at 30 fps, for both Color and IR, to process them further later.
I am using the following code to check the frame rate:
...ANSWER
Answered 2021-Feb-27 at 22:26
Xbox One has a maximum available memory of 1 GB for apps and 5 GB for games (https://docs.microsoft.com/en-us/windows/uwp/xbox-apps/system-resource-allocation). This restriction causes the frame rate to drop, while on a PC, where memory has no such limit, the fps stays at 30. However, the fps did improve when running in release mode or when published to the MS Store.
QUESTION
I know this sounds stupid and I'm probably very late to the party, but here's the thing: I want to program a gesture-recognition application (in the likes of this hand detection or this actual finger detection) for the Xbox 360 Kinect. The SDK (version 1.8) is found, installed and works, and the preliminary research is done. I only forgot to look into which language to write the code in. Following the link from the SDK to the documentation would be the first thing to do, but it is a dead end, unfortunately.
From the provided examples it seems to be either C++ or C#, although some old posts also claim Java. My question is: is there documentation not tied to the SDK, and which pitfalls are there with regard to developing this specific case under C++/C#/Java? A post from 2011 barely covers the beginning.
Addendum: On looking further I was prompted to the Samples page of the developer toolkit, which can be reached, yet all the listed and linked examples are dead ends too.
Addendum: For reference, I used these instructions, ultimately proving futile.
Found a version of NiTE here
...ANSWER
Answered 2021-Jan-19 at 22:29
I've provided this answer in the past.
Personally, I've used the Xbox 360 sensor with OpenNI the most (because it's cross-platform). Also, the NITE middleware alongside OpenNI provides some basic hand detection and even gesture detection (swipes, circle gesture, "button" push, etc.).
While OpenNI is open source, NITE isn't, so you'd be limited to what they provide.
The links you've shared use OpenCV. You can install OpenNI and compile OpenCV from source with OpenNI support. Alternatively, you can manually wrap the OpenNI frame data into an OpenCV cv::Mat and carry on with the OpenCV operations from there.
Here's a basic example that uses OpenNI to get the depth data and passes that to OpenCV:
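The original snippet isn't reproduced here; a rough Python equivalent using the OpenNI 2 bindings (pip install openni; the /16 display scaling assumes millimeter depth values up to roughly 4 m) would look like:

```python
import numpy as np
import cv2
from openni import openni2

# Initialize OpenNI 2 and open the first available device (e.g. an Xbox 360 Kinect).
openni2.initialize()
dev = openni2.Device.open_any()
depth_stream = dev.create_depth_stream()
depth_stream.start()

while True:
    frame = depth_stream.read_frame()
    buf = frame.get_buffer_as_uint16()
    # Wrap the raw OpenNI buffer as a NumPy array that OpenCV can use.
    depth = np.frombuffer(buf, dtype=np.uint16).reshape(frame.height, frame.width)
    cv2.imshow("depth", (depth / 16).astype(np.uint8))  # scale 16-bit depth for display
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

depth_stream.stop()
openni2.unload()
```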
QUESTION
I have a dataframe that consists of video game titles on various platforms. It contains, among other values, the name, the critic's average score and the user's average score. Many of the rows are missing the user score, critic score and/or ESRB rating.
What I'd like to do is replace the missing rating, critic and user scores with those for the same game on a different platform (assuming they exist); I'm not quite sure how to approach this. (Note: I don't want to drop the duplicate names, because they aren't truly duplicate rows.)
Here is a sample chunk of the dataframe (I've removed some unrelated columns to make it manageable):
...ANSWER
Answered 2021-Jan-14 at 02:26
I'm pretty sure pandas.DataFrame.groupby is what you need:
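A minimal sketch of that idea with a hypothetical dataframe (column names and sample titles are illustrative): within each group of rows sharing a name, forward- and back-filling propagates known scores into the gaps left on other platforms.

```python
import numpy as np
import pandas as pd

# Hypothetical sample: the same title on two platforms, each row missing one score.
df = pd.DataFrame({
    "name": ["Portal 2", "Portal 2", "Halo 3"],
    "platform": ["X360", "PC", "X360"],
    "critic_score": [95.0, np.nan, 94.0],
    "user_score": [np.nan, 9.0, 8.5],
})

# Group rows by game name and fill missing scores from sibling rows.
score_cols = ["critic_score", "user_score"]
df[score_cols] = df.groupby("name")[score_cols].transform(lambda s: s.ffill().bfill())

print(df)  # Portal 2 now carries 95.0 / 9.0 on both platforms
```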
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.