
mtcnn | MTCNN face detection implementation | Computer Vision library

by ipazc | Jupyter Notebook | Version: Current | License: MIT

kandi X-RAY | mtcnn Summary

mtcnn is a Jupyter Notebook library typically used in Artificial Intelligence, Computer Vision, Deep Learning, and Tensorflow applications. mtcnn has no reported vulnerabilities, it has a permissive license, and it has medium support. However, mtcnn has 1 bug. You can download it from GitHub.
MTCNN face detection implementation for TensorFlow, as a PIP package.

Support

  • mtcnn has a medium active ecosystem.
  • It has 1602 star(s) with 427 fork(s). There are 42 watchers for this library.
  • It had no major release in the last 12 months.
  • There are 59 open issues and 37 closed issues. On average, issues are closed in 95 days. There are 3 open pull requests and 0 closed pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of mtcnn is current.

Quality

  • mtcnn has 1 bug (0 blocker, 0 critical, 1 major, 0 minor) and 7 code smells.

Security

  • mtcnn has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • mtcnn code analysis shows 0 unresolved vulnerabilities.
  • There is 1 security hotspot that needs review.

License

  • mtcnn is licensed under the MIT License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • mtcnn releases are not available. You will need to build from source code and install.
  • mtcnn saves you 248 person hours of effort in developing the same functionality from scratch.
  • It has 604 lines of code, 46 functions and 12 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed mtcnn and identified the functions below as its top functions. This list is intended to give you an instant insight into the functionality mtcnn implements and to help you decide whether it suits your requirements.

  • Stage 3.
  • Stage 1.
  • Create a new convolution for a given input layer.
  • Build a PReLU layer.
  • Build the rnet.
  • Create a max pool layer.
  • Set the weights.
  • Get a layer.
  • Read the README.
  • Update the shape with the given pad result.

mtcnn Key Features

MTCNN face detection implementation for TensorFlow, as a PIP package.
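
As a minimal usage sketch of the package's documented API (the file name photo.jpg is a placeholder):

from mtcnn import MTCNN
import cv2

# MTCNN expects an RGB image; OpenCV loads BGR, so convert
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)

detector = MTCNN()
# Each result is a dict with 'box' (x, y, width, height), 'confidence',
# and 'keypoints' for the eyes, nose, and mouth corners
for face in detector.detect_faces(image):
    print(face["box"], face["confidence"])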

Sorting a tensor list in ascending order

namestodistance = [('Alice', .1), ('Bob', .3), ('Carrie', .2)]
names_top = sorted(namestodistance, key=lambda x: x[1])
print(names_top[:2])

# If the distances are tensors, convert each one to a Python float with .item() first
namestodistance = list(map(lambda x: (x[0], x[1].item()), namestodistance))
names_top = sorted(namestodistance, key=lambda x: x[1])
print(names_top[:2])

Mtcnn face-extractor for head extraction

import cv2
from mtcnn import MTCNN

detector = MTCNN()
img = cv2.imread('images/' + path_res)  # path_res is the image file name from the surrounding code

faces = []
for result in detector.detect_faces(img):
    x, y, w, h = result['box']

    # Expand the box by half its size in each direction to capture the
    # whole head, clamping to the image borders
    b = max(0, y - (h // 2))
    d = min(img.shape[0], (y + h) + (h // 2))
    a = max(0, x - (w // 2))
    c = min(img.shape[1], (x + w) + (w // 2))

    face = img[b:d, a:c, :]
    faces.append(face)

How to save the image with the red bounding boxes on it detected by mtcnn?

from matplotlib import pyplot

# save the plot (before show(), which clears the current figure)
pyplot.savefig('image_with_box.jpg')
# show the plot
pyplot.show()
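
For context, a fuller sketch that draws a red rectangle for each face detected by MTCNN and then saves the figure; the file names are placeholders:

from matplotlib import pyplot
from matplotlib.patches import Rectangle
from mtcnn import MTCNN

pixels = pyplot.imread('test.jpg')
pyplot.imshow(pixels)
ax = pyplot.gca()
for face in MTCNN().detect_faces(pixels):
    x, y, w, h = face['box']
    # Unfilled red rectangle over the detected face
    ax.add_patch(Rectangle((x, y), w, h, fill=False, color='red'))
pyplot.savefig('image_with_box.jpg')
pyplot.show()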

Activate Virtual environment using c#

using System;
using System.Diagnostics;
using System.Windows.Forms;

public partial class ProjectTitle : Form
{
    [STAThread]
    static void Main(string[] args)
    {
        Application.EnableVisualStyles();
        Application.SetCompatibleTextRenderingDefault(false);
        Application.Run(new ProjectTitle());
    }

    public ProjectTitle()
    {
        InitializeComponent();

        // Drive cmd.exe through redirected standard input: activate the
        // virtualenv, change to the script's folder, then run the script
        Process process = new Process();
        process.StartInfo.FileName = "cmd.exe";
        process.StartInfo.CreateNoWindow = true;
        process.StartInfo.RedirectStandardInput = true;
        process.StartInfo.RedirectStandardOutput = true;
        process.StartInfo.UseShellExecute = false;
        process.Start();
        process.StandardInput.WriteLine("activate virtualenvName");
        process.StandardInput.WriteLine("cd C:\\PathWhereYourPythonIsLocated");
        process.StandardInput.WriteLine("python hello.py");
        process.StandardInput.Flush();
        process.StandardInput.Close();
        Console.WriteLine(process.StandardOutput.ReadToEnd());
        Console.ReadKey();
    }
}

Setting a protobuf item's value replaces a previously set variable as well

   inline FVConfig::Config load_default_config()
    {
        FVConfig::Config baseCfg;
        auto config = baseCfg.mutable_configuration();
        load_fv_default_settings(config->mutable_fv());
        load_mtcnn_default_settings(config->mutable_mtcnn());
        return baseCfg;
    }

    inline void load_fv_default_settings(FVConfig::Config_FaceVerificationSettings* fv)
    {
        fv->mutable_settings()->set_fv_config_file_path(fv::config_file_path);
        fv->mutable_settings()->set_fv_device(fv::device);
        fv->mutable_settings()->set_fv_rebuild_cache(fv::rebuild_cache);
        ...

        // these two lines were missing previously and to my surprise, this was indeed
        // the cause of the weird behavior.  
        fv->mutable_settings()->set_fv_shape_predictor_path(fv::shape_predictor_path);
        fv->mutable_settings()->set_fv_eyenet_path(fv::eyenet_path);

        auto new_model_list = fv->mutable_model_weights()->mutable_new_models_weights()->Add();
        new_model_list->set_model_name("r18");
        new_model_list->set_description("default");
        new_model_list->set_model_weight_path(fv::model_weights_r18);
    }

    inline void load_mtcnn_default_settings(FVConfig::Config_MTCNNDetectorSettings* mt)
    {
        mt->mutable_settings()->set_mt_device(mtcnn::device);
        mt->mutable_settings()->set_mt_hop(mtcnn::hop);
        ....
    }

Python OpenCV LoadDatasetList, what goes into last two parameters?

images_train = []
landmarks_train = []
# Original attempt: the Python binding returns only a status flag and does
# not fill the two lists passed as the last parameters
status = cv2.face.loadDatasetList(args.training_images, args.training_annotations, images_train, landmarks_train)

# A pure-Python replacement that reads the image and annotation list files itself
def my_loadDatasetList(text_file_images, text_file_annotations):
    image_paths, annotation_paths = [], []
    with open(text_file_images, "r") as a_file:
        for line in a_file:
            line = line.strip()
            if line != "":
                image_paths.append(line)
    with open(text_file_annotations, "r") as a_file:
        for line in a_file:
            line = line.strip()
            if line != "":
                annotation_paths.append(line)
    status = len(image_paths) == len(annotation_paths)
    return status, image_paths, annotation_paths

# Instead of
#   status, images_train, landmarks_train = cv2.face.loadDatasetList(args.training_images, args.training_annotations, iter(imageFiles), iter(annotationFiles))
# use
status, images_train, landmarks_train = my_loadDatasetList(args.training_images, args.training_annotations)

How to store images with bounding box in a folder using python google colab?

# The image can be read with OpenCV (BGR channel order) ...
pixels = cv2.imread(filename)
# ... or with matplotlib (RGB channel order)
pixels = pyplot.imread(filename)

# Save the current figure without surrounding whitespace,
# e.g. into a Google Drive folder mounted in Colab
pyplot.savefig('/content/drive/My Drive/TESTBBOX/' + 'image' + str(i) + '.jpg', bbox_inches='tight', pad_inches=0)

Unexpected error when loading the model: problem in predictor - ModuleNotFoundError: No module named 'torchvision'

# In setup.py, list torchvision explicitly and pin a CPU-only torch wheel
# so the predictor environment can import torchvision
REQUIRED_PACKAGES = ['torchvision==0.5.0', 'torch @ https://download.pytorch.org/whl/cpu/torch-1.4.0%2Bcpu-cp37-cp37m-linux_x86_64.whl', 'opencv-python', 'facenet-pytorch']

ValueError: sampler option is mutually exclusive with shuffle pytorch

train_indices = [0, 2, 3, 4, 5, 6, 9, 10, 12, 13, 15]
val_indices = [1, 7, 8, 11, 14]

# With a random sampler, the same validation indices come back in a
# different order each epoch, but the split itself does not change
val_indices = [1, 8, 11, 7, 14]  # Epoch 1
val_indices = [11, 7, 8, 14, 1]  # Epoch 2
val_indices = [7, 1, 14, 8, 11]  # Epoch 3

# Evaluating a single sample
module.eval()
with torch.no_grad():
    output = module(dataset[5380])

# Evaluating the whole validation set
module.eval()

total_batches = 0
batch_accuracy = 0
for images, labels in val_loader:
    total_batches += 1
    with torch.no_grad():
        output = module(images)
        # In case it outputs logits without activation;
        # if it outputs an activation you may have to use argmax or > 0.5 for the binary case.
        # Cast the boolean comparison to float before averaging; .item() gets a Python float
        batch_accuracy += (labels == (output > 0.0)).float().mean().item()

print("Overall accuracy: {}".format(batch_accuracy / total_batches))

TensorFlow 2-tf.keras: How to train a tf.keras multi-task network like MTCNN using tf.data API & TFRecords

import tensorflow as tf

batch_size = 8

pos = tf.data.Dataset.range(0, 100)
neg = tf.data.Dataset.range(100, 200)
part_face = tf.data.Dataset.range(200, 300)
landmark = tf.data.Dataset.range(300, 400)

# Interleave the four sources; the weights approximate the 1:3:1:2
# (pos : neg : part-face : landmark) sampling ratio
dataset = tf.data.experimental.sample_from_datasets(
    [pos, neg, part_face, landmark], [1/7, 3/7, 1/7, 2/7])
dataset = dataset.batch(batch_size)

# Optionally shuffle data further. Samples will already be interspersed between datasets.
# dataset = dataset.map(lambda batch: tf.random.shuffle(batch))

for elem in dataset:
  print(elem.numpy())

# Prints
[200 300 100 201 301 302 101 303]
[  0 304 202 102 203 103 305 104]
[306 307 105 204 308 205 206   1]
[207 309 106 107 310 108 311 312]
[208 209 210   2 109 211 110 212]
...

Community Discussions

Trending Discussions on mtcnn
  • No module named [mtcnn] - m1 Mac - python
  • Sorting a tensor list in ascending order
  • Mtcnn face-extractor for head extraction
  • PyTorch resnet bad tensor dimensions
  • count Yaw and Roll in Python Head pose estimation MTCNN
  • How to save the image with the red bounding boxes on it detected by mtcnn?
  • Activate Virtual environment using c#
  • Setting a protobuf item's value replaces a previously set variable as well
  • Python OpenCV LoadDatasetList, what goes into last two parameters?
  • How to store images with bounding box in a folder using python google colab?

QUESTION

No module named [mtcnn] - m1 Mac - python

Asked 2022-Jan-08 at 02:19

I am trying to import the module mtcnn in VSCode. I have run the following commands in the terminal:

pip3 install MTCNN

and

python3.8 -m pip install mtcnn

both of which download MTCNN; the terminal shows it is already installed.

But when I try to run my Python file in VSCode, I run into this error: ModuleNotFoundError: No module named 'mtcnn'.

I am using Python version 3.8.5 in VSCode. There is no red error underline on the import line in VSCode, so I'm confused about why it's not working.

ANSWER

Answered 2022-Jan-07 at 06:11
  1. Open an integrated terminal in VS Code and run python --version to check that it matches the Python interpreter you selected, which is shown in the status bar.
  2. Run pip show mtcnn. If you get detailed information such as the name, version, and location, reloading the window should solve the problem. If you get WARNING: Package(s) not found: mtcnn, there is no such module in the currently selected environment, so install it there.

Also, to avoid cluttering the global Python environment, you can create a virtual environment. See Create a Virtual Environment.
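
For example, on a Unix-like shell:

python3 -m venv .venv          # create an isolated environment
source .venv/bin/activate      # activate it (.venv\Scripts\activate on Windows)
pip install mtcnn              # install mtcnn into that environment only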

Source https://stackoverflow.com/questions/70608201

Community Discussions and Code Snippets include sources from the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install mtcnn

You can download it from GitHub.
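
Since the summary above describes mtcnn as a pip package, the usual alternative to building from source is installing it from PyPI:

pip install mtcnn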

Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check for existing answers and ask new questions on Stack Overflow.
