DeepSort | AI powered image tagger backed by DeepDetect | Machine Learning library

by CorentinB · Go · Version: v1.3.2 · License: GPL-3.0

kandi X-RAY | DeepSort Summary

DeepSort is a Go library typically used in Artificial Intelligence, Machine Learning, and Deep Learning applications. It has no reported bugs or vulnerabilities, carries a strong copyleft license, and has low support. You can download it from GitHub.

AI powered image tagger backed by DeepDetect.

Support

DeepSort has a low active ecosystem.
It has 212 star(s) with 15 fork(s). There are 10 watchers for this library.
It had no major release in the last 12 months.
There is 1 open issue and 3 have been closed. On average, issues are closed in 8 days. There are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of DeepSort is v1.3.2.

Quality

              DeepSort has no bugs reported.

Security

              DeepSort has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              DeepSort is licensed under the GPL-3.0 License. This license is Strong Copyleft.
              Strong Copyleft licenses enforce sharing, and you can use them when creating open source projects.

Reuse

              DeepSort releases are available to install and integrate.
              Installation instructions, examples and code snippets are available.


            DeepSort Key Features

            No Key Features are available at this moment for DeepSort.

            DeepSort Examples and Code Snippets

            No Code Snippets are available at this moment for DeepSort.

            Community Discussions

            QUESTION

            How do I get the velocity of a tracked object without calibration?
            Asked 2021-Apr-12 at 14:03

            I am using YoloV4 and Deepsort to detect and track people in a frame.

My goal is to get the speed of a moving person in meaningful units without calibration, as I would like to be able to move the camera with this model to different rooms without recalibrating each time. I understand this is a very difficult problem. I currently get speed in pixels per second, but that is inaccurate, because objects closer to the camera appear to move faster.

My question is: can I use the bounding box of the person detection as a measurement of the person's size in pixels, assume an average human size (say 68 inches tall by 15 inches wide), and thereby derive the "calibration" metrics needed to determine, in inches/s, how far the object moved from Point A to Point B, using the person's apparent size in Region A and Region B?

In short, is there a way to derive velocity from the size of an object, to determine how fast it is moving in a frame?

            Any suggestions would be helpful! Thanks!

            This is how I am calculating speed now.

            ...

            ANSWER

            Answered 2021-Mar-05 at 20:57

            I think this is the answer I have been looking for.

I calculate the height and width of the bounding box and get its pixels per inch by dividing by the average human height and width. Then I sum the linspace() between the pixels per inch in Region A and the pixels per inch in Region B to get the distance. It's not very accurate, though, so maybe I can improve on that somehow.

The inaccuracies mainly come from the bounding box. Top to bottom the bounding box is pretty good, but left to right (width) it is not, as it takes the arms and legs into account. I am going to see if I can use just a human head detection as a measurement.
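The scheme above can be sketched as follows. This is an illustration of the answer's idea, not the asker's actual code: the function names, the box coordinates, and the 68-inch average height are assumptions taken from the question.

```python
AVG_HEIGHT_IN = 68.0  # assumed average person height, as in the question

def pixels_per_inch(box_height_px):
    """Estimate the pixel-to-inch scale from a person's bounding-box height."""
    return box_height_px / AVG_HEIGHT_IN

def speed_inches_per_sec(xa, ya, ha, xb, yb, hb, dt):
    """Approximate speed between two detections of the same track.

    The scale is averaged over the two regions, since the person may be
    at a different depth in the frame at each point.
    """
    ppi = (pixels_per_inch(ha) + pixels_per_inch(hb)) / 2.0
    dist_px = ((xb - xa) ** 2 + (yb - ya) ** 2) ** 0.5
    return (dist_px / ppi) / dt

# A 340-px-tall person moving 170 px in 1 s: 340/68 = 5 px per inch,
# so the move covers 170/5 = 34 inches, i.e. 34 in/s.
print(speed_inches_per_sec(100, 100, 340, 270, 100, 340, 1.0))  # -> 34.0
```

Using box height alone (rather than width) matches the answer's observation that the vertical extent of the box is the more reliable measurement.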

            Source https://stackoverflow.com/questions/66356157

            QUESTION

            Object detection model for detecting rectangular shape (text cursor) in a video?
            Asked 2021-Apr-04 at 17:57

I'm currently doing some research to detect and locate a text-cursor (the blinking rectangle that indicates the character position when you type on your computer) in a screen-recorded video. To do that, I've trained a YOLOv4 model on a custom object dataset (I took a reference from here) and I am planning to also implement DeepSORT to track the moving cursor.

            Here's the example of training data I used to train YOLOv4:

            Here's what I want to achieve:

Do you think using YOLOv4 + DeepSORT is overkill for this task? I'm asking because, as of now, the model successfully detects the text-cursor in only 70%-80% of the video frames that contain it. If it is overkill after all, do you know any other method that could be used for this task?

Anyway, I'm planning to detect the text-cursor not only in the Visual Studio Code window, but also in browsers (e.g., Google Chrome) and text processors (e.g., Microsoft Word). Something like this:

I'm considering the sliding-window method as an alternative, but from what I've read, it might consume a lot of resources and perform more slowly. I'm also considering template matching from OpenCV (like this), but I don't think it will perform better or faster than YOLOv4.

The constraints are performance speed (i.e., how many frames can be processed in a given amount of time) and detection accuracy (i.e., I want to avoid the letter 'l' or the digit '1' being detected as the text-cursor, since those characters look similar in some fonts). But higher accuracy with a lower FPS is acceptable, I think.

            I'm currently using Python, Tensorflow, and OpenCV for this. Thank you very much!

            ...

            ANSWER

            Answered 2021-Apr-04 at 17:57

            This would work if the cursor is the only moving object on the screen. Here is the before and after:

            Before:

            After:

            The code:
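The snippet itself did not survive extraction from the page, so here is a hedged reconstruction of the stated idea using plain NumPy: if the cursor is the only thing that changes between two frames, the region of changed pixels locates it. The function name and threshold are illustrative, not the answerer's actual code.

```python
import numpy as np

def moving_region(prev_frame, curr_frame, thresh=30):
    """Return (x, y, w, h) of the changed region between two grayscale
    frames, or None if nothing moved."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    ys, xs = np.where(diff > thresh)  # coordinates of changed pixels
    if xs.size == 0:
        return None
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

# Toy frames: a 2-px-wide, 6-px-tall "cursor" appears at (10, 5).
prev = np.zeros((20, 20), dtype=np.uint8)
curr = prev.copy()
curr[5:11, 10:12] = 255
print(moving_region(prev, curr))  # -> (10, 5, 2, 6)
```

This only works under the answer's stated condition, a cursor that is the sole moving element; any scrolling or animation would break it.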

            Source https://stackoverflow.com/questions/66929833

            QUESTION

Problem regarding TensorFlow on Jetson Nano (Ubuntu)
            Asked 2021-Feb-10 at 18:22

Okay... so I'm trying to run this repo on my Jetson Nano 2GB.

I followed the official documentation to install TensorFlow on the Jetson Nano, and https://www.tensorflow.org/install, the official documentation, to run TensorFlow on Windows.

After setting up the dependencies on Windows, the code started to run smoothly with no problems.

But after setting up the dependencies on the Jetson Nano, the code showed this error:

            ...

            ANSWER

            Answered 2021-Feb-10 at 18:22

Just add this line at the start of the object_tracker.py file, before importing tensorflow:
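The actual line did not survive extraction from the page. A commonly cited workaround for memory errors on the 2 GB Jetson Nano (an assumption here, not the original answer's snippet) is to enable on-demand GPU memory growth before TensorFlow is imported:

```python
# Assumed fix, not the original snippet: let TensorFlow allocate GPU
# memory on demand instead of grabbing it all at startup. The variable
# must be set before tensorflow is imported for it to take effect.
import os
os.environ['TF_FORCE_GPU_ALLOW_GROWTH'] = 'true'

# import tensorflow as tf  # only after the variable is set
```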

            Source https://stackoverflow.com/questions/66120992

            QUESTION

How to have full control over a process (start/terminate) which runs in parallel with a Flask application?
            Asked 2020-May-22 at 22:35

            This is my application architecture:

In my code there is a pedestrian.py file which uses a while loop to read frames from an rtsp link and, after running the pedestrian detection process (available in this link), caches the frame in Redis.

(Note that on each iteration the output frame replaces the previous one, so there is only one frame in Redis at any moment.)

Then, in the Flask application, I read the processed frame from Redis and send it to the clients.

            This is the code for my pedestrian detection:

            ...

            ANSWER

            Answered 2020-May-22 at 22:35

I found a way to automate starting/stopping the pedestrian detection; more details are available in my repo:

from os.path import join
from os import getenv, environ
from dotenv import load_dotenv
import argparse
from threading import Thread

            Source https://stackoverflow.com/questions/61747492

            QUESTION

            Does the Object Detector in DeepSORT object tracking framework run over each and every frame of a video?
            Asked 2019-Sep-27 at 14:58

I am trying to track objects using the DeepSORT algorithm described in this paper. What I have understood is that there are two deep-learning models at work here. One is the object detector (e.g., YOLO) and the other is a feature extractor. The object detector detects the presence of an object in a frame, while the feature extractor helps to identify whether the currently detected object has already been detected previously; if so, the detection is assigned to the corresponding track.

However, one thing I fail to understand is when the object detector runs. Yes, it should run on the first frame, but after that, does it run only every nth frame? Or does it run on each frame, but only on the approximate location predicted by the tracker?

            Thanks.

            ...

            ANSWER

            Answered 2019-Sep-27 at 14:58

The object detector (e.g., YOLO) runs on every frame of the video and is independent of the tracker, i.e. DeepSORT. DeepSORT takes the detections from the object detector in each frame and tries to associate them with the detections in the previous frame.

It is during this association that DeepSORT's feature extractor is used, in addition to the Hungarian algorithm, to produce the best association and tracking results.

            You can find a detailed explanation on:

            1. https://towardsdatascience.com/people-tracking-using-deep-learning-5c90d43774be
            2. https://towardsdatascience.com/computer-vision-for-tracking-8220759eee85
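To make the association step concrete, here is a small illustration (not DeepSORT's actual code) of minimum-cost assignment between tracks and detections. Real implementations use the Hungarian algorithm for efficiency; this sketch brute-forces the same optimum for a tiny cost matrix.

```python
# Illustration of the track-to-detection association step. DeepSORT builds
# such costs from appearance features and Kalman-predicted positions; here
# the costs are made up, and we brute-force the minimum-cost assignment
# that the Hungarian algorithm would find in polynomial time.
from itertools import permutations

def associate(cost):
    """Return, for each track i, the detection index assigned to it,
    minimizing the total assignment cost (square cost matrix assumed)."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[t][p[t]] for t in range(n)))
    return list(best)

# Low cost = good match: track 0 matches detection 1, track 1 matches
# detection 0, and track 2 keeps detection 2.
cost = [
    [0.9, 0.1, 0.8],
    [0.2, 0.7, 0.9],
    [0.6, 0.8, 0.1],
]
print(associate(cost))  # -> [1, 0, 2]
```

In practice libraries call an O(n^3) solver such as `scipy.optimize.linear_sum_assignment` instead of enumerating permutations, but the result is the same assignment.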

            Source https://stackoverflow.com/questions/57961943

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network

            Vulnerabilities

            No vulnerabilities reported

            Install DeepSort

You need DeepDetect installed; the easiest way is using Docker. Right now, the only supported DeepDetect installation that works with DeepSort is the deepdetect_cpu container, because it contains the correct paths for the pre-installed resnet-50 and googlenet models. Then, download the latest DeepSort release from https://github.com/CorentinB/DeepSort/releases.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on Stack Overflow.
            CLONE
          • HTTPS

            https://github.com/CorentinB/DeepSort.git

          • CLI

            gh repo clone CorentinB/DeepSort

• SSH

            git@github.com:CorentinB/DeepSort.git
