Education: Starter Kit - Proctoring Assistant
by kandikits Updated: Sep 29, 2022
A proctor is the person who supervises the exam room to ensure that everyone follows the rules and no one cheats. The demand for online proctoring tools is growing with the popularity of computer-administered exams. Online proctoring tools monitor testing much like in-person proctors, often using webcams to ensure test takers aren't cheating. They can flag cases including, but not limited to, a candidate not appearing on screen, a mobile phone being detected, unusual eye movement, or an additional person in the room.

In this challenge, we invite you to build a solution for proctoring assistance. You can configure and customize it to identify objects and add rules based on the detected objects to create a proctoring report with a pass or fail indication. Below is a sample solution kit to jumpstart your proctoring application. To use this kit to build your own solution, scroll down to the sections Kit Deployment Instructions and Instruction to Run.

Complexity: Medium

The key characteristics of the sample kit are:
- Detects objects in images
- Detects and tracks objects in streaming videos
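To illustrate the rule idea, here is a minimal sketch (all names, labels, and rules here are hypothetical, not part of the kit) that turns per-frame detections into a pass/fail proctoring report:

```python
# Hypothetical sketch: rule-based pass/fail from per-frame detections.
# Each detection is (frame_number, class_label); labels follow COCO-style
# names such as "person" and "cell phone" (an assumption, not kit output).

def proctoring_report(detections):
    """Apply simple proctoring rules and return a report with a pass/fail flag."""
    frames = {}
    for frame, label in detections:
        frames.setdefault(frame, []).append(label)

    flags = []
    for frame, labels in sorted(frames.items()):
        if "cell phone" in labels:
            flags.append((frame, "mobile phone detected"))
        if labels.count("person") > 1:
            flags.append((frame, "additional person in the room"))
        if "person" not in labels:
            flags.append((frame, "candidate not on screen"))
    return {"result": "fail" if flags else "pass", "flags": flags}

report = proctoring_report([
    (1, "person"),
    (2, "person"), (2, "cell phone"),   # phone appears in frame 2
    (3, "person"), (3, "person"),       # second person in frame 3
])
print(report["result"])  # fail
```

In a real solution the detections would come from the object detector described below; the rule set is where you add your own proctoring logic.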
Development Environment

VSCode and Jupyter Notebook can be used for development and debugging. Jupyter Notebook is a web-based interactive environment often used for experiments, whereas VSCode gives developers a typical IDE experience.
jupyter - Jupyter metapackage for installation, docs and chat
Python · 14381 stars · Version: Current · License: Permissive (BSD-3-Clause)
Machine Learning Libraries
The following libraries can be used to create machine learning models for vision, data extraction, image processing, and more, making them handy for users.
numpy - The fundamental package for scientific computing with Python.
Python · 23587 stars · Version: v1.24.3 · License: Permissive (BSD-3-Clause)
pandas - Flexible and powerful data analysis/manipulation library for Python, providing labeled data structures similar to R data.frame objects, statistical functions, and much more.
Python · 38499 stars · Version: v2.0.2 · License: Permissive (BSD-3-Clause)
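As an example of where NumPy fits in a detection pipeline, the geometry behind post-processing is easy to vectorize. A small sketch (ours, not from the kit; the [left, top, width, height] box format is an assumption) computing the intersection-over-union of two bounding boxes:

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as [left, top, width, height]."""
    a, b = np.asarray(box_a, float), np.asarray(box_b, float)
    # Convert to corner coordinates: right and bottom edges.
    ax2, ay2 = a[0] + a[2], a[1] + a[3]
    bx2, by2 = b[0] + b[2], b[1] + b[3]
    # Overlap rectangle, clipped at zero when the boxes do not intersect.
    iw = max(0.0, min(ax2, bx2) - max(a[0], b[0]))
    ih = max(0.0, min(ay2, by2) - max(a[1], b[1]))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

print(iou([0, 0, 10, 10], [5, 5, 10, 10]))  # 25 / 175 ≈ 0.1429
```

Detectors and trackers use exactly this kind of overlap measure to decide whether two boxes refer to the same object.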
Object Detection and Tracking
The following libraries provide pre-trained models that can be used to identify objects and track them in live streaming videos.
easydict - Access dict values as attributes (works recursively)
Python · 169 stars · Version: 1.7 · License: Weak Copyleft (LGPL-3.0)
yolov5 - YOLOv5 🚀 in PyTorch > ONNX > CoreML > TFLite
Python · 38884 stars · Version: v7.0 · License: Strong Copyleft (AGPL-3.0)
pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
Python · 67288 stars · Version: v2.0.1 · License: Others (Non-SPDX)
Kit Solution Source
Yolov5_DeepSort_Pytorch - Real-time multi-object tracker using YOLO v5 and deep sort
Python · 2442 stars · Version: v5.0 · License: Strong Copyleft (GPL-3.0)
Kit Deployment Instructions

For Windows OS:

Download, extract, and double-click the kit_installer file to install the kit.
Note: Ensure you extract the zip file before running it. The installation may take 2 to 10 minutes depending on bandwidth.
1. When prompted during installation, press Y to launch the app automatically, then run the notebook by executing the Proctoring video file & Saving Tracking Results cell in the notebook (click the Run button below the menu bar).
2. To run the app manually, press N when prompted and locate the zip file Yolov5_DeepSort_Pytorch.zip.
3. Extract the zip file and navigate to the directory Yolov5_DeepSort_Pytorch.
4. Open a command prompt in the extracted directory Yolov5_DeepSort_Pytorch and run the command jupyter notebook.

For other operating systems:
1. Click here to install Python.
2. Click here to download the repository.
3. Extract the zip file and navigate to the directory Yolov5_DeepSort_Pytorch.
4. Open a terminal in the extracted directory Yolov5_DeepSort_Pytorch.
5. Install dependencies by executing the command pip install -r requirements.txt.
6. Run the command jupyter notebook.
Instruction to Run
Follow the instructions below to run the solution.
1. Locate and open the YOLOV5_DEEPSORT_NOTEBOOK.ipynb notebook from the Jupyter Notebook browser window.
2. Execute the Proctoring video file & Saving Tracking Results cell in the notebook by clicking the Run button below the menu bar.
3. Press the "q" key to stop tracking objects.

To configure this to analyse a video of your own:
1. Go to the Yolov5_DeepSort_Pytorch/yolov5/data directory from the kit_installer.bat location.
2. Place your video file there with the name Demo.mp4. If a file with that name already exists, rename the existing file first. Video files in formats such as mov, avi, mp4, mpg, mpeg, m4v, wmv, and mkv can be used for tracking. Make sure you specify the right path for the video format.
3. Execute the Proctoring video file & Saving Tracking Results cell in the notebook by clicking the Run button below the menu bar.
4. Press the "q" key to stop tracking objects.
5. The output file will be saved in the directory Yolov5_DeepSort_Pytorch/runs/track/expN. You can find the generated output file by sorting by Date Modified in that directory.

For your reference: a sample output text file and video file are available under the Yolov5_DeepSort_Pytorch/runs/track/expN directory.
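Before dropping a file into the data directory, a quick sanity check against the supported formats listed above can save a failed run. A minimal sketch (the helper name is ours, not the kit's):

```python
from pathlib import Path

# Formats listed in the instructions above.
SUPPORTED = {".mov", ".avi", ".mp4", ".mpg", ".mpeg", ".m4v", ".wmv", ".mkv"}

def is_supported_video(path):
    """Return True if the file extension is one the tracker accepts."""
    return Path(path).suffix.lower() in SUPPORTED

print(is_supported_video("Demo.mp4"))   # True
print(is_supported_video("Demo.webm"))  # False
```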
More about the text file:
Each row of the text file has ten values, representing the following:
- Column 1: video frame number
- Column 2: unique ID of an object
- Columns 3, 4: left and top coordinates of the bounding box, respectively
- Columns 5, 6: height and width of the bounding box, respectively
- Column 7: values other than 0 mark the object as active
- Columns 8, 9, 10: x, y, z coordinates; since only 2D videos are used for tracking, these columns are not considered and remain -1

You can additionally build rules for proctoring and other enhancements for an additional score. For any support, you can direct message us at #help-with-kandi-kits
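Given the column layout above, the tracking output is straightforward to post-process in Python. A hedged sketch (the function and field names are ours, not the kit's) that parses one row of the output:

```python
def parse_track_row(line):
    """Parse one row of the tracker's text output into a dict,
    following the ten-column layout described above."""
    v = line.split()
    if len(v) != 10:
        raise ValueError("expected 10 values per row, got %d" % len(v))
    return {
        "frame": int(v[0]),          # column 1: video frame
        "track_id": int(v[1]),       # column 2: unique object id
        "left": float(v[2]),         # columns 3,4: left and top of the box
        "top": float(v[3]),
        "height": float(v[4]),       # columns 5,6: height and width of the box
        "width": float(v[5]),
        "active": float(v[6]) != 0,  # column 7: non-zero means active
        # columns 8-10 (x, y, z) are always -1 for 2D video, so they are dropped
    }

row = parse_track_row("1 3 100 50 80 40 1 -1 -1 -1")
print(row["frame"], row["track_id"], row["active"])  # 1 3 True
```

Parsing the file this way is a natural starting point for the proctoring rules mentioned above, e.g. counting distinct track IDs per frame.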
Troubleshooting:
1. While running the batch file, if you encounter a Windows protection alert, select More info --> Run anyway.
2. During kit installation, if you encounter a Windows security alert, click Allow.