mtcnn | MTCNN Caffe training and testing code

by cvtuge | Python | Version: Current | License: No License

kandi X-RAY | mtcnn Summary

mtcnn is a Python library. mtcnn has no bugs, it has no vulnerabilities, and it has low support. However, mtcnn's build file is not available. You can download it from GitHub.


Support

mtcnn has a low-activity ecosystem.
It has 7 stars and 11 forks. There are no watchers for this library.
It has had no major release in the last 6 months.
There are 2 open issues and 0 closed issues. On average, issues are closed in 545 days. There are no pull requests.
It has a neutral sentiment in the developer community.
The latest version of mtcnn is current.

Quality

              mtcnn has no bugs reported.

Security

              mtcnn has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.

License

              mtcnn does not have a standard license declared.
              Check the repository for any license declaration and review the terms closely.
              Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

mtcnn releases are not available. You will need to build from source code and install it.
mtcnn has no build file. You will need to create the build yourself to build the component from source.

            Top functions reviewed by kandi - BETA

kandi has reviewed mtcnn and discovered the functions below as its top functions. This is intended to give you an instant insight into the functionality mtcnn implements and to help you decide if it suits your requirements.
• Detects faces
• Pads bounding boxes
• Computes NMS over boxes
• Generates bounding boxes from a feature map
• Calibrates bounding boxes
• Converts a bounding box to a square (rerec)
• Converts a dataset to HDF5 format
• Writes data to an HDF5 file
• Reads data from a file
• Computes the intersection of two boxes
• Reads samples from a file
• Draws boxes on the image
• Writes data to a sample list
• Writes the label_list txt file

            mtcnn Key Features

            No Key Features are available at this moment for mtcnn.

            mtcnn Examples and Code Snippets

            No Code Snippets are available at this moment for mtcnn.

            Community Discussions

            QUESTION

            count Yaw and Roll in Python Head pose estimation MTCNN
            Asked 2021-May-01 at 09:04

I have encountered an implementation of the MTCNN network which is able to detect head movement on 3 axes, called Yaw, Roll and Pitch.

            Here are crucial elements:

            ...

            ANSWER

            Answered 2021-May-01 at 09:04

Yaw, Roll and Pitch are Euler angles. It is important to note that the rotations are not relative to the global axes but to the object's own axes; that's why an airplane is a good thing to think about. Also, there are several different conventions for Euler angles; to understand them better, look at the wiki.

Looking at the GitHub link you provided, I have found the following:

1. The points contain the coordinates of different facial features within the frame, where:

X = points[0:5], Y = points[5:10]

2. They do not measure these angles in degrees:

Roll: -x to x (0 is frontal, positive is clockwise, negative is counter-clockwise)

Yaw: -x to x (0 is frontal, positive is looking right, negative is looking left)

Pitch: 0 to 4 (0 is looking upward, 1 is looking straight, >1 is looking downward)

3. The functions for Yaw, Roll and Pitch do not return the angle:

What is returned from Roll is the y coordinate of the left eye minus the y coordinate of the right eye.

Yaw essentially calculates which eye the nose is closer to along the x axis; as you turn your head, the nose appears closer to one eye from an observer's point of view.

4. find_pose might have what you are looking for, but I need to do further research into what is meant by xfrontal and yfrontal; you may need to pose the question directly to the person on GitHub.

            Update: after posing the question directly to the developer, they have responded with the following:

Roll, Yaw and Pitch are in pixels and provide an estimate of where the face is looking. In the case of MTCNN, Roll is (-50 to 50) and Yaw is (-100 to 100). Pitch is 0 to 4 because you can divide the distance between the eyes and the lips into 4 units, where one unit is between the lips and the nose tip, and 3 units between the nose tip and the eyes.

Xfrontal and Yfrontal provide pose (Yaw and Pitch only) in terms of angles (in degrees) along the X and Y axes, respectively. These values are obtained after compensating for the roll (aligning both eyes horizontally).

            https://github.com/fisakhan/Face_Pose/issues/2
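
A hedged sketch of points 2 and 3 and the developer's update, assuming points is MTCNN's flat 10-element landmark array (five x coordinates, then five y coordinates, ordered left eye, right eye, nose, left mouth corner, right mouth corner); the pitch formula is an illustrative reading of the 3:1 eyes-to-nose-to-lips proportion, not code from the repository:

```python
import numpy as np

def estimate_pose(points):
    """Rough pixel-based pose estimate from MTCNN's 5-point landmarks."""
    x, y = np.asarray(points[0:5]), np.asarray(points[5:10])
    eye_y = (y[0] + y[1]) / 2.0    # average eye height
    mouth_y = (y[3] + y[4]) / 2.0  # average mouth-corner height

    # Roll: y of the left eye minus y of the right eye (pixels).
    roll = y[0] - y[1]
    # Yaw: horizontal offset of the nose from the eye midpoint (pixels).
    yaw = x[2] - (x[0] + x[1]) / 2.0
    # Pitch: lips-to-nose distance measured in "units", where one unit is
    # a third of the nose-to-eyes distance; ~1 when looking straight.
    pitch = 3.0 * (mouth_y - y[2]) / max(y[2] - eye_y, 1e-6)
    return roll, yaw, pitch
```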

            Source https://stackoverflow.com/questions/67283184

            QUESTION

            How to save the image with the red bounding boxes on it detected by mtcnn?
            Asked 2021-Apr-28 at 01:21

I have this code in which MTCNN detects faces in an image, draws a red rectangle around each face, and displays the result on the screen.

            Code taken from: https://machinelearningmastery.com/how-to-perform-face-detection-with-classical-and-deep-learning-methods-in-python-with-keras/

But I want to save the image with the red boxes around each face, so that I can do some preprocessing on it. Any help is appreciated.

            ...

            ANSWER

            Answered 2021-Apr-28 at 01:21

            You can use matplotlib.pyplot.savefig. For example:
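
The answer's snippet did not survive extraction; a minimal sketch of the idea, assuming the mtcnn package and a stand-in filename 'test.jpg', might look like this (the key point is to call savefig before show):

```python
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle
from mtcnn.mtcnn import MTCNN

image = plt.imread('test.jpg')        # stand-in filename
faces = MTCNN().detect_faces(image)   # list of dicts with a 'box' key

plt.imshow(image)
ax = plt.gca()
for face in faces:
    x, y, width, height = face['box']
    ax.add_patch(Rectangle((x, y), width, height, fill=False, color='red'))
plt.axis('off')
# Save before plt.show(): show() flushes the current figure.
plt.savefig('faces.jpg', bbox_inches='tight', pad_inches=0)
plt.show()
```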

            Source https://stackoverflow.com/questions/67284840

            QUESTION

            Activate Virtual environment using c#
            Asked 2020-Dec-08 at 20:55

Is it possible to activate a virtual environment without using the Anaconda prompt? For instance, I want to activate my virtual environment from C# in order to execute my Python code.

            This is my current code

            ...

            ANSWER

            Answered 2020-Dec-08 at 20:55

I found a way to do this, but I still have to open a command prompt.

First, I created a console project in Visual Studio. Then, I turned the console app into a Windows application by going to the project's properties and changing the output type to Windows Application. After that, change your code from

            Source https://stackoverflow.com/questions/65142281

            QUESTION

            Setting a protobuf item's value replaces a previously set variable as well
            Asked 2020-Dec-05 at 06:13

I have created a simple Protobuf-based config file and had been using it without any issues until now. The issue is that I added two new items to my settings (Config.proto), and now whatever value I set for the last variable is reflected in the previous one.

The following snapshot demonstrates this better. As you can see below, the values of fv_shape_predictor_path and fv_eyenet_path depend solely on the order of being set; the one that is set last changes the other's value.

I made sure the .cpp files related to Config.proto were built afresh. I also tested this under Linux, and there it works just fine. It seems it's only happening on Windows! It also doesn't affect any other items in the same settings; it's just these two new ones.

I have no idea what is causing this or how to go about it. For reference, this is what the protobuf looks like:

            ...

            ANSWER

            Answered 2020-Dec-05 at 06:13

            This issue seems to only exist in the Windows version of Protobuf 3.11.4 (didn't test with any newer version though).

Basically, what happened was that I used to first create a Config object and initialize it with some default values. When I added these two entries to my Config.proto, I forgot to also add initialization entries like the other entries have, thinking I'd be fine with the default (which I assumed would be "").

This doesn't pose any issues under Linux/G++; the program builds and runs just fine and works as intended. However, under Windows this results in the behavior you just witnessed, i.e. setting either of those newly added entries would also set the other entry's value. So basically, I either had to create a whole new Config object or had to explicitly set their values when using load_default_config.

            To be more concrete this is the snippet I used for setting some default values in my Protobuf configs.

            These reside in a separate header called Utility.h:

            Source https://stackoverflow.com/questions/64968272

            QUESTION

            Python OpenCV LoadDatasetList, what goes into last two parameters?
            Asked 2020-Oct-04 at 14:33

I am currently trying to train a dataset using OpenCV 4.2.2. I scoured the web, but there are only examples for 2 parameters. OpenCV 4.2.2's loadDatasetList requires 4 parameters, and there have been shortcomings which I did my best to overcome with the following. I tried with an array at first, but loadDatasetList complained that the array was not iterable; I then proceeded to the code below with no luck. Any help is appreciated; thank you for your time, and I hope everyone is staying safe and well.

The prior error, passing in an array without iter():

PS E:\MTCNN> python kazemi-train.py
No valid input file was given, please check the given filename.
Traceback (most recent call last):
  File "kazemi-train.py", line 35, in <module>
    status, images_train, landmarks_train = cv2.face.loadDatasetList(args.training_images, args.training_annotations, imageFiles, annotationFiles)
TypeError: cannot unpack non-iterable bool object

            The current error is:

PS E:\MTCNN> python kazemi-train.py
Traceback (most recent call last):
  File "kazemi-train.py", line 35, in <module>
    status, images_train, landmarks_train = cv2.face.loadDatasetList(args.training_images, args.training_annotations, iter(imageFiles), iter(annotationFiles))
SystemError: returned NULL without setting an error

            ...

            ANSWER

            Answered 2020-Oct-04 at 07:37

The figure you provided refers to the C++ API of loadDatasetList(), whose parameters often cannot be mapped directly to those of the Python API. One reason is that a Python function can return multiple values while a C++ function cannot. In the C++ API, the 3rd and 4th parameters are provided to store the output of the function: they receive the paths of the images read from the text file at imageList and the paths of the annotations read from another text file at annotationList, respectively.

Going back to your question, I cannot find any reference for that function in Python, and I believe the API changed in OpenCV 4. After multiple trials, I am sure cv2.face.loadDatasetList returns only one Boolean value rather than a tuple. That's why you encountered the first error, TypeError: cannot unpack non-iterable bool object, even though you filled in four parameters.

            There is no doubt that cv2.face.loadDatasetList should produce two lists of file paths. Therefore, the code for the first part should look something like this:
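
The answer's code did not survive extraction; a minimal sketch under the same assumptions (two plain-text files listing image and annotation paths, one path per line; the filenames are stand-ins) could be:

```python
# Build the two path lists manually instead of expecting
# cv2.face.loadDatasetList to return them (here it returns only a bool).
def read_list(list_path):
    with open(list_path) as f:
        return [line.strip() for line in f if line.strip()]

image_files = read_list('images.txt')            # stand-in filename
annotation_files = read_list('annotations.txt')  # stand-in filename
```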

            Source https://stackoverflow.com/questions/64165147

            QUESTION

            How to store images with bounding box in a folder using python google colab?
            Asked 2020-Sep-12 at 06:55

I am trying to save the images after detecting faces in them. I tried using matplotlib.pyplot.savefig(img, bbox_inches='tight', pad_inches=0); this stored images, but all of the stored images are blank white.

What I want it to store is the image with the detected faces and their bounding boxes drawn on it.

            ...

            ANSWER

            Answered 2020-Sep-12 at 06:55
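
The answer body did not survive extraction. One common fix for blank savefig output (a sketch, not the original answer; the filenames are stand-ins) is to skip matplotlib and write the annotated image directly with OpenCV:

```python
import cv2
from mtcnn.mtcnn import MTCNN

image = cv2.imread('input.jpg')  # stand-in filename
# MTCNN expects RGB, while OpenCV loads BGR, so convert for detection.
faces = MTCNN().detect_faces(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))

for face in faces:
    x, y, w, h = face['box']
    # Red rectangle; OpenCV colors are BGR, so red is (0, 0, 255).
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 0, 255), 2)

cv2.imwrite('output.jpg', image)  # write the annotated image to disk
```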

            QUESTION

            How to properly convert a cv::Mat into a torch::Tensor with perfect match of values?
            Asked 2020-Aug-30 at 14:20

            I am trying to run inference on a jit traced model in C++ and currently the output I get in Python is different than the output I get in C++.

Initially I thought this was caused by the JIT model itself, but now I don't think so, as I spotted some small deviations in the input tensor in the C++ code. I believe I did everything as instructed by the documentation, so that might as well point to an issue in torch::from_blob. I'm not sure!

            Therefore in order to make sure which is the case, here are the snippets both in Python and C++ plus the sample input to test it.

            Here is the sample image:

            For Pytorch run the following snippet of code:

            ...

            ANSWER

            Answered 2020-Aug-30 at 14:20

It has become clear that this is indeed an input issue, and more specifically it is because the image is first read by PIL.Image.open in Python and later changed into a NumPy array. If the image is read with OpenCV, then everything, input-wise, is the same in both Python and C++.

            More explanation

However, in my specific case, using the OpenCV image results in a minor change in the final result. The only way this difference is minimized is when I make the OpenCV image grayscale and feed it to the network, in which case both the PIL input and the OpenCV input have nearly identical outputs.

Here are the two examples, where the PIL image is BGR and the OpenCV image is in grayscale mode; you need to save them to disk to see that they are nearly identical (left is cv_image, right is pil_image):

However, if I simply don't convert the OpenCV image into grayscale mode (and back to BGR to get 3 channels), this is how it looks (left is cv_image, right is pil_image):

            Update

This turned out to be, again, input related. The reason we had slight differences was that the model was trained on RGB images, and thus channel order mattered. When using the PIL image, there were some conversions happening back and forth for different methods, which caused the whole mess you read about above.

To cut a long story short, there was no issue with the conversion from cv::Mat into torch::Tensor or vice versa; the issue was in the way the images were created and fed to the network differently in Python and C++. When both the Python and C++ backends used OpenCV for dealing with images, their outputs matched 100%.
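
A hedged illustration of that channel-order point on the Python side (assuming an RGB-trained model; the filename is a stand-in): read with OpenCV and convert BGR to RGB explicitly before building the tensor, mirroring whatever cv::cvtColor call the C++ side makes:

```python
import cv2
import torch

# OpenCV loads images as BGR; an RGB-trained model needs channels swapped.
bgr = cv2.imread('input.jpg')               # stand-in filename
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)

# HWC uint8 -> NCHW float tensor in [0, 1], the usual PyTorch layout.
tensor = torch.from_numpy(rgb).permute(2, 0, 1).float().div(255).unsqueeze(0)
```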

            Source https://stackoverflow.com/questions/63554819

            QUESTION

            MTCNN does not use GPU on first detection but does on following detections
            Asked 2020-Jul-26 at 18:22

I have tensorflow-gpu 2.0 installed. I am doing face detection using MTCNN. On the first call to detect faces it takes 3.86 seconds; on the next call it takes only 0.049 seconds. I suspect it is not using the GPU on the first call but does on the second call. I know MTCNN imports TensorFlow, but I do not understand why the GPU is not used on the first call. Code is below.

            ...

            ANSWER

            Answered 2020-Jul-26 at 18:22

It does use the GPU on the first call. The main overhead is the allocation of the model's parameters and the creation of the computation graph in memory. You can use a small "dummy" image first (it doesn't have to be full-size) to allow the ops to be formed and variables to be placed on the GPU, then continue using the actual images.
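
A minimal warm-up sketch along those lines (assuming the mtcnn package; the dummy size is arbitrary):

```python
import numpy as np
from mtcnn.mtcnn import MTCNN

detector = MTCNN()

# Warm up: one small dummy image so graph construction and GPU variable
# placement happen before any timing-sensitive call.
dummy = np.zeros((160, 160, 3), dtype=np.uint8)
detector.detect_faces(dummy)

# Subsequent calls on real images now run at full speed, e.g.:
# faces = detector.detect_faces(real_image)
```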

            Source https://stackoverflow.com/questions/63103706

            QUESTION

            Unexpected error when loading the model: problem in predictor - ModuleNotFoundError: No module named 'torchvision'
            Asked 2020-May-28 at 18:00

I've been trying to deploy my model to the AI Platform for Prediction through the console on my VM instance, but I've gotten the error: "(gcloud.beta.ai-platform.versions.create) Create Version failed. Bad model detected with error: Failed to load model: Unexpected error when loading the model: problem in predictor - ModuleNotFoundError: No module named 'torchvision' (Error code: 0)"

I need to include both torch and torchvision. I followed the steps in this question, Cannot deploy trained model to Google Cloud AI Platform with custom prediction routine: Model requires more memory than allowed, but I couldn't fetch the files pointed to by user gogasca. I tried downloading this .whl file from the PyTorch website and uploading it to my Cloud Storage, but got the same error that there is no module torchvision, even though this version is supposed to include both torch and torchvision. I also tried using the Cloud AI compatible packages here, but they don't include torchvision.

I tried pointing to two separate .whl files for torch and torchvision in the --package-uris arguments; those point to files in my Cloud Storage, but then I got the error that the memory capacity was exceeded. This is strange, because collectively their size is around 130 MB. An example of my command that resulted in the absence of torchvision looked like this:

            ...

            ANSWER

            Answered 2020-May-23 at 15:19

The solution was to place the following packages in the setup.py file for the custom prediction code:
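
The answer's package list did not survive extraction; a hedged sketch of what such a setup.py might look like (the version pins and module names are illustrative, not from the answer):

```python
from setuptools import setup

setup(
    name='custom_prediction',  # hypothetical package name
    version='0.1',
    # Declare both torch and torchvision so AI Platform installs them
    # alongside the prediction code (version pins are illustrative).
    install_requires=['torch>=1.4.0', 'torchvision>=0.5.0'],
    scripts=['predictor.py'],  # hypothetical predictor module
)
```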

            Source https://stackoverflow.com/questions/61933879

            QUESTION

            ValueError: sampler option is mutually exclusive with shuffle pytorch
            Asked 2020-May-13 at 13:03

I'm working on a face recognition project using PyTorch and MTCNN, and after training on my training dataset, I now want to make predictions on the test dataset.

This is my training code:

            ...

            ANSWER

            Answered 2020-Apr-04 at 19:49

I'm not sure what format your test data is in, but to select a sample randomly from your dataset, you can use random.choice from the module random.
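
A minimal sketch of that suggestion (test_dataset is assumed to be any indexable PyTorch-style dataset):

```python
import random

# Pick one (image, label) pair at random from an indexable dataset,
# sidestepping the DataLoader sampler/shuffle conflict entirely.
index = random.choice(range(len(test_dataset)))
image, label = test_dataset[index]
```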

            Source https://stackoverflow.com/questions/61033726

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

            Vulnerabilities

            No vulnerabilities reported

            Install mtcnn

            You can download it from GitHub.
You can use mtcnn like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.

            Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask questions on the community page Stack Overflow.
CLONE

• HTTPS: https://github.com/cvtuge/mtcnn.git

• CLI: gh repo clone cvtuge/mtcnn

• SSH: git@github.com:cvtuge/mtcnn.git
