A proctor is a person who supervises an exam room to ensure that everyone follows the rules and no one cheats. With the growing popularity of computer-administered exams, the demand for online proctoring tools is increasing. Online proctoring tools monitor tests much like in-person proctors do, often using webcams to verify that test takers are not cheating. They can flag cases including, but not limited to, a candidate not appearing on the screen, a mobile phone being detected, unusual eye movement, or an additional person in the room.

In this challenge, we invite you to build a proctoring-assistance solution. You can configure and customize it to identify objects, and add rules based on the detected objects to create a proctoring report with a pass or fail indication. Below is a sample solution kit to jumpstart your proctoring application. To use this kit to build your own solution, scroll down to the sections Kit Deployment Instructions and Instruction to Run.

Complexity: Medium

The key characteristics of the sample kit are:
- Detects objects in images
- Detects and tracks objects in streaming videos
VSCode and Jupyter Notebook can be used for development and debugging. Jupyter Notebook is a web-based interactive environment often used for experimentation, while VSCode provides developers with a full IDE experience.
Machine Learning Libraries
The following libraries can be used to create machine learning models focused on computer vision, data extraction, image processing, and more, making them handy for users.
Object Detection and Tracking
The following libraries provide sets of pre-trained models that can be used to identify objects and track them in live streaming videos.
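To make the detection step concrete, here is a minimal sketch of post-filtering a model's detections down to proctoring-relevant classes. The detection tuple layout and the `PROCTORING_CLASSES` set are illustrative assumptions, not part of any particular library's API:

```python
# Classes (COCO-style names) that matter for a proctoring rule set.
# This set is an assumption for illustration; adjust to your model's labels.
PROCTORING_CLASSES = {"person", "cell phone", "book", "laptop"}

def filter_detections(detections, min_confidence=0.5):
    """Keep only confident detections of proctoring-relevant classes.

    `detections` is assumed to be an iterable of
    (label, confidence, (x1, y1, x2, y2)) tuples, e.g. parsed from a
    YOLOv5-style model's output.
    """
    return [
        (label, conf, box)
        for label, conf, box in detections
        if label in PROCTORING_CLASSES and conf >= min_confidence
    ]

# Example: detections from a single hypothetical frame
frame_detections = [
    ("person", 0.91, (10, 20, 110, 220)),
    ("person", 0.62, (200, 30, 300, 230)),
    ("cell phone", 0.55, (150, 150, 180, 200)),
    ("chair", 0.80, (0, 100, 50, 200)),    # not proctoring-relevant
    ("cell phone", 0.30, (5, 5, 15, 20)),  # below confidence threshold
]

flagged = filter_detections(frame_detections)
print(len(flagged))  # 3
```

A second person or a detected cell phone in the filtered list is exactly the kind of event a proctoring rule would flag.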
Instruction to Run
Follow the instructions below to run the solution.

1. Locate and open the YOLOV5_DEEPSORT_NOTEBOOK.ipynb notebook from the Jupyter Notebook browser window.
2. Execute the "Proctoring video file & Saving Tracking Results" cell in the notebook by clicking the Run button below the menu bar.
3. Press the "q" key on the keyboard to stop tracking objects.

To configure the kit to analyse your own video:

1. Go to the Yolov5_DeepSort_Pytorch/yolov5/data directory from the kit_installer.bat location.
2. Place your video file there with the name Demo.mp4; if a file with that name already exists, rename the existing file first. Video files in formats such as mov, avi, mp4, mpg, mpeg, m4v, wmv, and mkv can be used for tracking. Make sure you specify the right path for the video format.
3. Execute the "Proctoring video file & Saving Tracking Results" cell in the notebook by clicking the Run button below the menu bar.
4. Press the "q" key on the keyboard to stop tracking objects.
5. The output file will be saved in the directory Yolov5_DeepSort_Pytorch/runs/track/expN. You can find the generated output file by sorting the specified directory by Date Modified.

For your reference: a sample output text file and video file are available under the Yolov5_DeepSort_Pytorch/runs/track/expN directory.
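Before dropping a video into the data directory, it can help to check that it matches what the notebook expects. The helper below is a hypothetical convenience, not part of the kit; it simply validates the file extension against the supported formats listed above and the Demo.mp4 naming convention:

```python
# Hypothetical pre-flight check for the input video (not part of the kit).
from pathlib import Path

# Formats the kit lists as supported for tracking.
SUPPORTED_FORMATS = {".mov", ".avi", ".mp4", ".mpg", ".mpeg",
                     ".m4v", ".wmv", ".mkv"}

def check_input_video(path):
    """Return a list of problems with the chosen input video; empty if OK."""
    p = Path(path)
    problems = []
    if p.suffix.lower() not in SUPPORTED_FORMATS:
        problems.append(f"unsupported format: {p.suffix}")
    if p.name != "Demo.mp4":
        problems.append("file should be named Demo.mp4 for this kit")
    return problems

print(check_input_video("Yolov5_DeepSort_Pytorch/yolov5/data/Demo.mp4"))  # []
print(check_input_video("data/session.webm"))  # two problems reported
```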
More about text file:
Each row of the text file has ten values, represented as follows:
- Column 1 - video frame number
- Column 2 - unique id of the tracked object
- Columns 3, 4 - left and top coordinates of the bounding box, respectively
- Columns 5, 6 - height and width of the bounding box, respectively
- Column 7 - values other than 0 mark the entry as active
- Columns 8, 9, 10 - x, y, z coordinates; since only 2D videos are used for tracking, these columns are not considered and remain -1

You can additionally build rules for proctoring and other enhancements for an additional score. For any support, you can direct message us at #help-with-kandi-kits
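As a starting point for such rules, here is a sketch that parses the ten-column tracking output described above and applies one simple pass/fail check. It assumes the tracker was run on the "person" class only, so each unique object id corresponds to one person; the rule itself is an illustrative example, not part of the kit:

```python
# Sketch of a proctoring rule over the kit's tracking output
# (assumes person-only tracking; the rule is illustrative).

def parse_tracking_file(lines):
    """Parse rows of the ten-column tracking output into dicts.

    Only the frame number (column 1) and object id (column 2) are
    needed for this rule; malformed rows are skipped.
    """
    rows = []
    for line in lines:
        values = line.split()
        if len(values) < 7:
            continue
        rows.append({
            "frame": int(values[0]),
            "object_id": int(values[1]),
        })
    return rows

def proctoring_report(rows, max_people=1):
    """FAIL if more unique object ids than allowed were ever tracked."""
    ids = {row["object_id"] for row in rows}
    status = "PASS" if len(ids) <= max_people else "FAIL"
    return {"unique_objects": len(ids), "status": status}

# Example rows in the format described above
sample = [
    "1 1 100 50 80 160 1 -1 -1 -1",
    "2 1 102 51 80 160 1 -1 -1 -1",
    "2 2 300 60 75 150 1 -1 -1 -1",  # a second person enters the frame
]
report = proctoring_report(parse_tracking_file(sample))
print(report)  # {'unique_objects': 2, 'status': 'FAIL'}
```

Further rules (for example, failing when an id disappears for too many consecutive frames) can be layered on the same parsed rows.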
1. While running the batch file, if you encounter a Windows protection alert, select More info --> Run anyway.
2. During kit installation, if you encounter a Windows security alert, click Allow.
Open Weaver – Develop Applications Faster with Open Source