Kinect | Kinect Application Framework including some demo apps

by atosorigin | C# | Version: Current | License: No License

kandi X-RAY | Kinect Summary

Kinect is a C# library. Kinect has no bugs, it has no vulnerabilities and it has low support. You can download it from GitHub.

Atos Origin Kinect

This Kinect application framework and the demo apps were made by Marco Franssen and Jan Saris. As part of competence development at their company, Atos Origin, they started in February 2011 by creating some apps and a framework with Microsoft Kinect. They began with an implementation in C#/WPF using the PrimeSense drivers and the OpenNI SDK.

Goal

Our goal is to create a framework for developers on top of Kinect. This framework should support easy access to gestures and the like. All the demo apps we created to test our framework are included. We would love it if you helped extend and improve the framework. Don't forget to add your own demo apps so you can show off all the cool stuff you made to the other developers participating.
Support
    Quality
      Security
        License
          Reuse

            kandi-support Support

              Kinect has a low active ecosystem.
              It has 18 star(s) with 11 fork(s). There are 31 watchers for this library.
              It had no major release in the last 6 months.
              There is 1 open issue and 0 closed issues. There are no pull requests.
              It has a neutral sentiment in the developer community.
              The latest version of Kinect is current.

            kandi-Quality Quality

              Kinect has 0 bugs and 0 code smells.

            kandi-Security Security

              Kinect has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              Kinect code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            kandi-License License

              Kinect does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

            kandi-Reuse Reuse

              Kinect releases are not available. You will need to build from source code and install.
              Installation instructions are available. Examples and code snippets are not available.
              Kinect saves you 1,117,466 person hours of effort in developing the same functionality from scratch.
              It has 507,399 lines of code, 0 functions and 2,766 files.
              It has low code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi's functional review helps you automatically verify the functionalities of the libraries and avoid rework.
            Currently covering the most popular Java, JavaScript and Python libraries.

            Kinect Key Features

            No Key Features are available at this moment for Kinect.

            Kinect Examples and Code Snippets

            No Code Snippets are available at this moment for Kinect.

            Community Discussions

            QUESTION

            Issue with EventHandler of a custom class
            Asked 2022-Mar-29 at 13:02

            I am working on a project that aims to interpret some parts of the finger alphabet. I am using a Kinect v1 to this end and these two projects: Lightbuzz.Vitruvius provides the main functionality and Lightbuzz.Vitruvius.Fingertracking the ability to detect fingertips (at least that is the theory).
            Plugging these two together was more tedious than challenging, but the real challenge comes from an EventHandler. You see, there is a HandController class where the fingers are detected and displayed. In this class is an EventHandler that kicks off the detection.
            Having appropriately changed everything, Rider now reports this error (I am using the .NET Framework 4.0):

            ...

            ANSWER

            Answered 2022-Mar-29 at 13:02

            As the error suggests, your HandCollection class cannot be used as a parameter in the event handler delegate. You might have your reasons for using .NET 4.0, so try making the class inherit from System.EventArgs and see if this works. Otherwise I would suggest switching frameworks.

            Source https://stackoverflow.com/questions/71662048

            QUESTION

            Lower latency from webcam cv2.VideoCapture
            Asked 2022-Jan-06 at 13:53

            I'm building an application that uses the webcam to control video games (kinda like a Kinect). It uses the webcam (cv2.VideoCapture(0)), AI pose estimation (mediapipe), and custom logic to pipe inputs into the Dolphin emulator.

            The issue is the latency. I've used my phone's high-speed camera to record myself snapping and found a latency of around 32 frames (~133 ms) between my hand and the frame onscreen. This is before any additional code, just a loop with a video read and cv2.imshow (about 15 ms).

            Is there any way to decrease this latency?

            I'm already grabbing the frame in a separate Thread, setting CAP_PROP_BUFFERSIZE to 0, and lowering the CAP_PROP_FRAME_HEIGHT and CAP_PROP_FRAME_WIDTH, but I still get ~133ms of latency. Is there anything else I can be doing?

            Here's my code below:

            ...

            ANSWER

            Answered 2022-Jan-06 at 13:53
            Welcome to the war on latency (shaving it off).

            The experience you have described above is a clear example of how accumulated latencies can devastate any chance of keeping a control loop tight enough to indeed control something meaningfully stable, as in the MAN-to-MACHINE-INTERFACE system we wish to keep:

            User's-motion | CAM-capture | IMG-processing | GUI-show | User's-visual-cortex-scene-capture | User's decision+action | loop

            (The original answer showed a real-world OpenCV profiling screenshot here, to "sense" how much time the pipeline spends in its actual acquisition, storage, transformation, post-processing and GUI phases.)

            What latency-causing steps do we work with?

            Forgive, for a moment, a raw sketch of where we accumulate each of the particular latency-related costs:
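The asker's code and the answer's sketch were both elided from this excerpt. As a hedged illustration of one common mitigation (the `LatestFrameGrabber` name and structure are mine, not from the thread), a grabber thread can keep only the newest frame so the processing loop never consumes stale buffered frames. It works with any object exposing a `cv2.VideoCapture`-style `read()` method:

```python
import threading

class LatestFrameGrabber:
    """Read frames on a background thread, keeping only the newest one
    so the consumer never processes stale buffered frames."""

    def __init__(self, capture):
        self.capture = capture  # any object with read() -> (ok, frame)
        self._lock = threading.Lock()
        self._latest = None
        self._running = True
        self._thread = threading.Thread(target=self._loop, daemon=True)
        self._thread.start()

    def _loop(self):
        while self._running:
            ok, frame = self.capture.read()
            if not ok:
                break
            with self._lock:
                self._latest = frame  # overwrite: older frames are dropped

    def read(self):
        """Return the most recent frame (or None if none arrived yet)."""
        with self._lock:
            return self._latest

    def stop(self):
        self._running = False
        self._thread.join(timeout=1.0)

def _webcam_demo():
    # Requires a webcam and opencv-python; call manually to try it out.
    import cv2
    grabber = LatestFrameGrabber(cv2.VideoCapture(0))
    try:
        while True:
            frame = grabber.read()
            if frame is not None:
                cv2.imshow("latest frame", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        grabber.stop()
        cv2.destroyAllWindows()
```

This removes buffering latency on the software side only; it cannot, of course, remove the camera's own exposure and transfer latency discussed in the answer.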

            Source https://stackoverflow.com/questions/70597020

            QUESTION

            Converting a sequences of RGB images and depth files to mkv file format
            Asked 2021-Dec-16 at 07:58

            I have a sequences of pngs and corresponding depth files (aligned to the corresponding images) from an external camera.

            RGB: 1.png 2.png 3.png etc 150.png

            Depth: 1.txt 2.txt 3.txt etc 150.txt

            I also have the intrinsics and corresponding camera information in another file called camera.txt.

            My goal is to convert these images and depth files to an mkv file in order to utilize the pykinect's body tracker (https://github.com/ibaiGorordo/pyKinectAzure)

            So far, I've been able to convert the images and and depth files into an open3D RGBD object. See: http://www.open3d.org/docs/release/python_api/open3d.geometry.RGBDImage.html

            I would think we need to run it through the azure kinect reader (https://github.com/isl-org/Open3D/blob/0ec3e5b24551eaffa3c7708aae8630fde9b00e6c/examples/python/reconstruction_system/sensors/azure_kinect_recorder.py#L34), but this seems to open up the camera for additional input.

            How can I save these RGBD images to an mkv file format to read into the pykinect reader?

            ...

            ANSWER

            Answered 2021-Dec-16 at 07:58

            QUESTION

            RGBD image to Pointcloud in ManipulationStation example
            Asked 2021-Oct-30 at 00:18

            In the current commit, ManipulationStation uses DepthImageToPointCloud to project the point cloud from the color image and depth image inputs. However, the documentation states that

            Note that if a color image is provided, it must be in the same frame as the depth image.

            In my understanding, both the color and depth image inputs come from an RgbdSensor created from the info in MakeD415CameraModel, and C and D are two different frames.

            I think the resulting point cloud has wrong coloring. I tested it on a similar setup, though not MakeD415CameraModel exactly. I currently work around this issue by forcing C and D to be the same frame in MakeD415CameraModel.

            My question: does Drake have a method that maps between a depth image and a color image from different frames, similar to the Kinect library? Since this is a simulation after all, maybe this is overkill?

            P.S. I am trying to simulate the image from Azure Kinect; hence the question.

            ...

            ANSWER

            Answered 2021-Oct-30 at 00:18

            The issue https://github.com/RobotLocomotion/drake/issues/12125 discusses this problem, and has either a work-around (in case your simulation could fudge and have the depth and camera frames identical, even though that's slightly different from the real sensor) or a pointer to a different kind of solution (the cv2.registerDepth algorithm). Eventually it would be nice to have a registerDepth-like feature built into Drake, but as of today it's not there yet.

            Source https://stackoverflow.com/questions/69768160

            QUESTION

            ufdw_j4k2_64bit.dll not loaded
            Asked 2021-Jun-03 at 00:17

            Hello everyone. For a university project I am working with the Kinect sensor to get a point cloud. To work with the Kinect I have installed the J4K library for Processing, but when I run an example code I get the following message. How can I solve it? Thank you all.

            ...

            ANSWER

            Answered 2021-Jun-03 at 00:17

            Which version of the Kinect are you using?

            1. Kinect v1 (for xbox 360 or Windows up to 1.8)
            2. Kinect V2 (for xbox one)
            3. Azure Kinect

            Based on the error the assumption is you're planning to use Kinect v2 (for Xbox One with Windows USB adaptor). If that's the case you need to first install Kinect for Windows SDK 2.0. Make sure the Kinect drivers are properly installed and you can run the precompiled Kinect for Windows SDK 2.0 example applications.

            What the error message isn't telling you is that ufdw_j4k2_64bit.dll is not loaded because it depends on Kinect20.dll (which it expects in C:\WINDOWS\System32\ where the SDK installer would place it).

            If you're still having issues you can try installing Thomas Lengeling's KinectPV2 library (which you can easily do via Sketch > Import Library > Add Library > (search) Kinect v2 for Processing). It may not have the same features as the ufdw library, but the instructions are clear and you can definitely get a point cloud.

            Source https://stackoverflow.com/questions/67764093

            QUESTION

            PyKinectv2 Body tracking count
            Asked 2021-May-19 at 00:19

            New to pykinect and kinect in general -- trying to simply get a count of bodies currently being tracked. No skeletal or joint data required. Just want to get a running count of bodies currently in frame. I am using a kinect-v2 and pykinect2.

            Being more specific, I'm trying to track how many bodies are in frame and the time elapsed since that value changed (0 people to 1 person, 1 person to 2, etc.). Due to the bundled examples for pykinect and the way that they loop, this has proven difficult, however. The latest attempt (now updated with the solved code):

            ...

            ANSWER

            Answered 2021-May-13 at 07:28

            I found a useful snippet that does what you need within one of the examples provided in the PyKinect2 GitHub repo.

            You need to get the body frame, and then count the number of tracked bodies:
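The referenced snippet is not reproduced in this excerpt. As a hedged sketch (the wiring below is assumed from the common PyKinect2 examples, not copied from the answer), the count reduces to filtering the body array on its `is_tracked` flag:

```python
def count_tracked_bodies(bodies):
    """Count the entries in a Kinect body array whose is_tracked flag is set."""
    return sum(1 for body in bodies if body.is_tracked)

def _kinect_demo():
    # Requires a Kinect v2 sensor and the pykinect2 package; prints the
    # count whenever the number of tracked bodies changes.
    import time
    from pykinect2 import PyKinectV2, PyKinectRuntime
    kinect = PyKinectRuntime.PyKinectRuntime(PyKinectV2.FrameSourceTypes_Body)
    previous = -1
    while True:
        if kinect.has_new_body_frame():
            body_frame = kinect.get_last_body_frame()
            current = count_tracked_bodies(body_frame.bodies)
            if current != previous:
                print("tracked bodies:", current, "at", time.time())
                previous = current
```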

            Source https://stackoverflow.com/questions/67512150

            QUESTION

            Kinect V2 UWP How to calculate distance from user
            Asked 2021-Mar-01 at 22:22

            I am trying to calculate the distance from a person to the Kinect v2 sensor in UWP C#.

            In WPF, I was getting this by

            ...

            ANSWER

            Answered 2021-Mar-01 at 22:22

            I used face detection to get face coordinates from the colour image, then mapped those coordinates onto the IR and depth frames. This way, I found the x and y coordinates of the face in the depth frame.

            Source https://stackoverflow.com/questions/66297100

            QUESTION

            UWP Kinect V2 keep frame rate constant (30fps)
            Asked 2021-Feb-27 at 22:26

            I am processing frames received from a Kinect v2 (Color and IR) in UWP. The program runs on a remote machine (Xbox One S). The main goal is to get the frames and write them to disk at 30 fps for Color and IR, to later process them further.

            I am using the following code to check the frame rate:

            ...

            ANSWER

            Answered 2021-Feb-27 at 22:26

            The Xbox One has a maximum of 1 GB of memory available for apps and 5 GB for games. https://docs.microsoft.com/en-us/windows/uwp/xbox-apps/system-resource-allocation

            On a PC the fps is 30 (as the memory has no such restrictions).

            This memory restriction causes the frame rate to drop on the console. However, the fps did improve when running in release mode or when published to the MS Store.

            Source https://stackoverflow.com/questions/66046498

            QUESTION

            Languages to develop applications for Xbox 360 kinect
            Asked 2021-Feb-03 at 13:26

            I know this sounds stupid and I'm probably very late to the party, but here's the thing: I want to program a gesture recognition application (in the likes of this Hand detection or this actual finger detection) for the Xbox 360 Kinect. The SDK (version 1.8) is found, installed and works, and preliminary research is done; I only forgot to look at which language to write the code in. The link from the SDK to the documentation would be the first place to look, but it is a dead end, unfortunately.
            From the provided examples it seems to be either C++ or C#, although some old posts also claim Java. My question is: is there a documentation not tied to the SDK, and which pitfalls are there with regard to developing in this specific case under C++/C#/Java? A post from 2011 barely covers the beginning.

            Addendum: On looking further, I was pointed to the Samples site of the developer toolkit, which can be reached, yet all the listed and linked examples are dead ends too.

            Addendum: For reference I used this instruction - ultimately proving futile.

            Found a version of NiTE here

            ...

            ANSWER

            Answered 2021-Jan-19 at 22:29

            I've provided this answer in the past.

            Personally I've used the Xbox 360 sensor with OpenNI the most (because it's cross platform). Also the NITE middleware alongside OpenNI provides some basic hand detection and even gesture detection (swipes, circle gesture, "button" push, etc.).

            While OpenNI is open source, NITE isn't, so you'd be limited to what they provide.

            The links you've shared use OpenCV. You can install OpenNI and compile OpenCV from source with OpenNI support. Alternatively, you can manually wrap the OpenNI frame data into an OpenCV cv::Mat and carry on with the OpenCV operations from there.

            Here's a basic example that uses OpenNI to get the depth data and passes that to OpenCV:
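The answer's actual OpenNI example is not reproduced in this excerpt. As a rough, hedged equivalent (using the `primesense` OpenNI2 Python bindings rather than the answer's own code; the scaling choices and function names below are mine), the idea is to read a depth frame from OpenNI and hand it to OpenCV as a NumPy array:

```python
import numpy as np

def depth_buffer_to_image(buf, width, height, max_depth_mm=4000):
    """Convert a raw 16-bit depth buffer into an 8-bit grayscale image
    suitable for cv2.imshow (nearer = brighter; 0 = no reading)."""
    depth = np.frombuffer(buf, dtype=np.uint16).reshape(height, width)
    scaled = np.clip(depth.astype(np.float32) / max_depth_mm, 0.0, 1.0)
    return ((1.0 - scaled) * 255).astype(np.uint8)

def _openni_demo():
    # Requires the OpenNI2 runtime, the `primesense` Python bindings,
    # and an attached depth sensor (e.g. an Xbox 360 Kinect).
    import cv2
    from primesense import openni2
    openni2.initialize()
    dev = openni2.Device.open_any()
    stream = dev.create_depth_stream()
    stream.start()
    try:
        while True:
            frame = stream.read_frame()
            img = depth_buffer_to_image(frame.get_buffer_as_uint16(),
                                        frame.width, frame.height)
            cv2.imshow("depth", img)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    finally:
        stream.stop()
        openni2.unload()
```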

            Source https://stackoverflow.com/questions/65778896

            QUESTION

            python - find duplicates in a column, replace values in another column for that duplicate
            Asked 2021-Jan-14 at 02:33

            I have a dataframe that consists of video game titles on various platforms. It contains, among other values, the name, the critic's average score and the user's average score. Many of them are missing scores for the user, critic and/or ESRB rating.

            What I'd like to do is replace the missing rating, critic and user scores with those for the same game on a different platform (assuming they exist). I'm not quite sure how to approach this. (Note: I don't want to drop the duplicate names, because they aren't truly duplicate rows.)

            Here is a sample chunk of the dataframe (I've removed some unrelated columns to make it manageable):

            ...

            ANSWER

            Answered 2021-Jan-14 at 02:26
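The accepted answer's code was not captured in this excerpt. As a hedged sketch of one common approach (column names here are illustrative, not taken from the original question), the missing values can be filled per game name with a grouped transform:

```python
import pandas as pd

def fill_from_duplicates(df, key="name",
                         cols=("critic_score", "user_score", "rating")):
    """Fill NaNs in `cols` from the first non-null value found among
    rows sharing the same `key` (no duplicate rows are dropped)."""
    out = df.copy()
    for col in cols:
        out[col] = out.groupby(key)[col].transform(
            lambda s: s.fillna(s.dropna().iloc[0]) if s.notna().any() else s
        )
    return out

# Tiny illustrative dataframe: "Halo" appears on two platforms with
# complementary missing values; "Okami" has no duplicate to copy from.
games = pd.DataFrame({
    "name": ["Halo", "Halo", "Okami"],
    "platform": ["X360", "PC", "PS2"],
    "critic_score": [94.0, None, 90.0],
    "user_score": [None, 8.5, None],
    "rating": ["M", None, "T"],
})
filled = fill_from_duplicates(games)
```

Rows with no scored duplicate anywhere simply keep their NaNs, which matches the "assuming they exist" caveat in the question.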

            Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install Kinect

            Install Git Extensions. This will provide you a good GUI for Git (easy for Git starters) and includes Git.

            SSH key for GitHub repo access: in order to connect to your GitHub repository you need an SSH key, so if you don't have one, follow the instructions.
            Generate an SSH key.

            Support

            For any new features, suggestions and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.
            Find more information at:

            CLONE
          • HTTPS

            https://github.com/atosorigin/Kinect.git

          • CLI

            gh repo clone atosorigin/Kinect

          • sshUrl

            git@github.com:atosorigin/Kinect.git
