
deepface | Lightweight Deep Face Recognition | Computer Vision library

by serengil | Python Version: Current | License: MIT

kandi X-RAY | deepface Summary

deepface is a Python library typically used in Artificial Intelligence, Computer Vision and Deep Learning applications, often alongside TensorFlow and Keras. deepface has no bugs, it has no vulnerabilities, it has a build file available, it has a Permissive License and it has medium support. You can install it with 'pip install deepface' or download it from GitHub or PyPI.
Deepface is a lightweight face recognition and facial attribute analysis (age, gender, emotion and race) framework for Python. It is a hybrid face recognition framework wrapping state-of-the-art models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace and Dlib. Those models have reached and surpassed human-level accuracy. The library is mainly based on TensorFlow and Keras.
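Because every backend sits behind the same interface, switching recognition models is a one-argument change. A minimal sketch, assuming two local images img1.jpg and img2.jpg (placeholder paths):

from deepface import DeepFace

# verify() returns a dict with at least "verified" and "distance" keys;
# the model_name argument selects one of the wrapped backends.
for model_name in ["VGG-Face", "Facenet", "OpenFace", "ArcFace"]:
    result = DeepFace.verify("img1.jpg", "img2.jpg", model_name=model_name)
    print(model_name, result["verified"], result["distance"])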

Support

  • deepface has a medium active ecosystem.
  • It has 2290 stars and 493 forks. There are 62 watchers for this library.
  • It had no major release in the last 12 months.
  • There is 1 open issue and 309 have been closed. On average, issues are closed in 13 days. There is 1 open pull request and 0 closed requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of deepface is current.

Quality

  • deepface has 0 bugs and 0 code smells.

Security

  • deepface has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • deepface code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • deepface is licensed under the MIT License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • deepface has no GitHub releases; you can build it from source, or install the deployable package available in PyPI.
  • Build file is available. You can build the component from source.
  • Installation instructions, examples and code snippets are available.
  • deepface saves you 1168 person hours of effort in developing the same functionality from scratch.
  • It has 2932 lines of code, 74 functions and 37 files.
  • It has high code complexity, which directly impacts the maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed deepface and discovered the below as its top functions. This is intended to give you an instant insight into the functionality deepface implements, and to help you decide if it suits your requirements. A sketch of the corresponding public API calls follows the list.

  • Inception ResNet v2.
  • Calculate face analysis.
  • Finds a VGG-based model using a given model.
  • Analyze model.
  • Verify two images.
  • Preprocess the input image.
  • Verify a VGG-compatible wrapper.
  • Detect the face of a given image.
  • Representation function.
  • Represent an image.
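Most of these map onto deepface's public entry points. A minimal sketch of the corresponding calls, assuming a local image face.jpg; note that represent's signature and return type vary across deepface versions:

from deepface import DeepFace

# "Calculate face analysis" / "Analyze model": facial attribute analysis.
analysis = DeepFace.analyze(img_path="face.jpg",
                            actions=["age", "gender", "emotion", "race"])

# "Detect the face of a given image": crop and align a single face.
face = DeepFace.detectFace("face.jpg")

# "Represent an image": compute the embedding vector.
embedding = DeepFace.represent(img_path="face.jpg", model_name="Facenet")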

deepface Key Features

A Lightweight Face Recognition and Facial Attribute Analysis (Age, Gender, Emotion and Race) Library for Python

Installation

pip install deepface
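A quick smoke test after installing, assuming two local images (deepface downloads the pretrained weights on first use):

from deepface import DeepFace

# "verified" is True when the two faces are judged to be the same person.
print(DeepFace.verify("img1.jpg", "img2.jpg")["verified"])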

Citation

@inproceedings{serengil2020lightface,
  title={LightFace: A Hybrid Deep Face Recognition Framework},
  author={Serengil, Sefik Ilkin and Ozpinar, Alper},
  booktitle={2020 Innovations in Intelligent Systems and Applications Conference (ASYU)},
  pages={23-27},
  year={2020},
  doi={10.1109/ASYU50717.2020.9259802},
  organization={IEEE}
}

OpenCV: change position of putText()

import cv2

cap = cv2.VideoCapture(0)

while cap.isOpened(): 
    ret, frame = cap.read()
    if not ret:  # stop if no frame could be read from the camera
        break

    result = {'dominant_emotion': 'hello', "gender": 'world', "dominant_race": 'of python'}

    font = cv2.FONT_HERSHEY_DUPLEX

    cv2.putText(frame,
               result['dominant_emotion'],
               (50, 50),
               font, 1,
               (220, 220, 220),
               2,
               cv2.LINE_4)

    cv2.putText(frame,
               result['gender'],
               (50, 80),
               font, 1,
               (220, 220, 220),
               2,
               cv2.LINE_4)

    cv2.putText(frame,
               result['dominant_race'],
               (50, 110),
               font, 1,
               (220, 220, 220),
               2,
               cv2.LINE_4)

    cv2.imshow('Facial rec.', frame)
    
    if cv2.waitKey(2) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()
Using cv2.getTextSize() to measure each label makes the vertical offsets automatic instead of hard-coded:
import cv2

cap = cv2.VideoCapture(0)

while cap.isOpened(): 
    ret, frame = cap.read()
    if not ret:  # stop if no frame could be read from the camera
        break

    result = {'dominant_emotion': 'hello', "gender": 'world', "dominant_race": 'of python'}

    font = cv2.FONT_HERSHEY_DUPLEX
    font_scale = 1
    font_thickness = 2
    
    x = 50
    y = 50
    
    for text in result.values():
        cv2.putText(frame,
                   text,
                   (x, y),
                   font, font_scale,
                   (220, 220, 220),
                   font_thickness,
                   cv2.LINE_4)
        
        (width, height), baseline = cv2.getTextSize(text, font, font_scale, font_thickness)
        y += (height + 10)  # +10 margin

    cv2.imshow('Facial rec.', frame)
    
    if cv2.waitKey(2) & 0xFF == ord('q'):
        break

cap.release()
cv2.destroyAllWindows()

Can DeepFace verify() accept an image array or PIL Image object?

results = DeepFace.verify(np.array(PILIMAGE), ...)
-----------------------
picture= "extracted_face_picture/single_face_picture.jpg"
picture= Image.open(picture)
.
.
df.verify(picture, np.array(frame), "Facenet")
df.verify(np.array(picture),np.array(frame), "Facenet")
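A self-contained sketch of the same idea, assuming a second image other_face.jpg (a hypothetical path) and that your deepface version accepts numpy arrays; deepface works in OpenCV's BGR channel order, so convert PIL's RGB first:

import numpy as np
import cv2
from PIL import Image
from deepface import DeepFace

pil_img = Image.open("extracted_face_picture/single_face_picture.jpg")
frame = cv2.imread("other_face.jpg")  # hypothetical second image

# PIL yields RGB; deepface expects BGR like cv2.imread, so swap channels.
pil_array = cv2.cvtColor(np.array(pil_img), cv2.COLOR_RGB2BGR)

result = DeepFace.verify(pil_array, frame, model_name="Facenet")
print(result["verified"])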

Tensor Tensor("flatten/Reshape:0", shape=(?, 2622), dtype=float32) is not an element of this graph

import tensorflow as tf  # TF 1.x API; on TF 2.x use tf.compat.v1 (see below)
from tensorflow.python.keras.backend import set_session

sess = tf.Session()

# This is a global session and graph
graph = tf.get_default_graph()
set_session(sess)


#now where you are calling the model
global sess
global graph
with graph.as_default():
    set_session(sess)
    input_descriptor = [model.predict(face), img]
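On TensorFlow 2.x the same pattern is reachable through the compat layer; a minimal sketch, assuming graph mode is acceptable for your model:

import tensorflow as tf

# Graph/session mode must be enabled explicitly on TF 2.x.
tf.compat.v1.disable_eager_execution()

sess = tf.compat.v1.Session()
graph = tf.compat.v1.get_default_graph()
tf.compat.v1.keras.backend.set_session(sess)

# ...and wrap predictions in that graph/session, as above:
# with graph.as_default():
#     tf.compat.v1.keras.backend.set_session(sess)
#     input_descriptor = [model.predict(face), img]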

Cannot set headers after they are sent to client

if (!fs.existsSync(dir)) {
    fs.mkdirSync(dir);
    success = false;
    message = 'Cannot detect the person. Please add name in the textbox provided below to save the person.';
    res.status(200).json({message: message, success: success});  // first response is sent here
    res.end();
}
var facialScript = new PythonShell('face_detect.py', options);
facialScript.on('message', (response) => {
    console.log(response);
    res.status(200).send(response);  // second response triggers "Cannot set headers after they are sent"
    //res.end();
});
-----------------------
The handler above can respond twice: once inside the if block and again in every 'message' callback. Express only allows one response per request, so accumulate the script output and send it once the script ends:
let result = "";
facialScript.on('message', (response) => {
    result = result + response;  // assuming the response is a string; adapt as needed
});

facialScript.on("end", () => {
    res.send(result);  // respond exactly once, after the script has finished
});

My OpenCV Live Webcam Demo Doesn't Show Accurate Emotions

Predictions = torch.argmax(Pred)         # returns a 0-d tensor
Predictions = torch.argmax(Pred).item()  # .item() extracts the plain Python number

from numba import cuda, numpy_support and ImportError: cannot import name 'numpy_support' from 'numba'

conda create -n rapids-0.17 -c rapidsai -c nvidia -c conda-forge \
    -c defaults rapids-blazing=0.17 python=3.7 cudatoolkit=10.2

Memory leakage issue in Python list

for i in range(len(idendities) - 1):
    for j in range(i + 1, len(idendities)):
        for cross_sample in itertools.product(samples_list[i], samples_list[j]):
            # do something ...
import csv
# Stream each pair straight to disk instead of keeping them all in a list,
# so memory use stays flat (opening the file once outside the loops would be faster).
for i in range(0, len(idendities) - 1):
    for j in range(i + 1, len(idendities)):
        for cross_sample in itertools.product(samples_list[i], samples_list[j]):

            with open('results.csv', 'a+') as csvfile:
                writer = csv.writer(csvfile)
                writer.writerow([cross_sample[0], cross_sample[1]])
-----------------------
# Lazy cartesian product: each combination is computed on demand from its
# index, so nothing is materialized in memory.
class ProductList:
    def __init__(self,*data):
        self.data = data
        self.size = 1
        for d in self.data: self.size *= len(d)

    def __len__(self): return self.size
    
    def __getitem__(self,index):
        if isinstance(index,slice):
            return [*map(self.__getitem__,range(len(self))[index])]
        result = tuple()
        for d in reversed(self.data):
            index,i = divmod(index,len(d))
            result = (d[i],) + result
        return result

    def __iter__(self):
        for i in range(len(self)): yield self[i]

    def __contains__(self,value):
        return len(value) == len(self.data) \
               and all(v in d for v,d in zip(value,self.data))
    
    def index(self,value):
        index = 0
        for v,d in zip(value,self.data):
            index = index*len(d)+d.index(v)
        return index
p = ProductList(range(1234),range(1234,5678),range(5678,9101))

print(*p[:10],sep="\n")

(0, 1234, 5678)
(0, 1234, 5679)
(0, 1234, 5680)
(0, 1234, 5681)
(0, 1234, 5682)
(0, 1234, 5683)
(0, 1234, 5684)
(0, 1234, 5685)
(0, 1234, 5686)
(0, 1234, 5687)


len(p) # 18771376008

p[27]  # (2, 6, 12)

for c in p[103350956:103350960]: print(c)

(6, 4763, 5995)
(6, 4763, 5996)
(6, 4763, 5997)
(6, 4763, 5998)


p.index((6, 4763, 5995)) # 103350956
p[103350956]             # (6, 4763, 5995)

(6, 4763, 5995) in p     # True
(5995, 4763, 6) in p     # False
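Because ProductList computes a pair from its index, you can also sample random combinations without materializing anything; a small usage sketch (the sampling step is an addition, not part of the original answer):

import random

p = ProductList(range(1234), range(1234, 5678))
# random.sample over a range touches only the k chosen indices.
for idx in random.sample(range(len(p)), k=5):
    print(p[idx])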
-----------------------
for i in range(0, len(idendities) - 1):
    for j in range(i + 1, len(idendities)):
        cross_product = itertools.product(samples_list[i], samples_list[j])
        cross_product = list(cross_product)

        for cross_sample in cross_product:
            negative = []
            negative.append(cross_sample[0])
            negative.append(cross_sample[1])
            negatives.append(negative)
            print(len(negatives))

negatives = pd.DataFrame(negatives, columns=["file_x", "file_y"])
negatives["decision"] = "No"
samples_list = list(identities.values())
negatives = pd.DataFrame()

if Path("positives_negatives.csv").exists():
    df = pd.read_csv("positives_negatives.csv")
else:
    for combo in tqdm(itertools.combinations(identities.values(), 2), desc="Negatives"):
        for cross_sample in itertools.product(combo[0], combo[1]):
            negatives = negatives.append(pd.Series({"file_x": cross_sample[0], "file_y": cross_sample[1]}).T,
                                         ignore_index=True)
    negatives["decision"] = "No"
    negatives = negatives.sample(positives.shape[0])
    df = pd.concat([positives, negatives]).reset_index(drop=True)
    df.to_csv("positives_negatives.csv", index=False)

ValueError: unknown format is not supported: ROC Curve

import matplotlib.pyplot as plt
from sklearn.metrics import confusion_matrix, accuracy_score, roc_auc_score, roc_curve

y_pred_proba = predictions[::, 1]
y_test = y_test.astype(int)  # roc_curve expects numeric labels, not strings

fpr, tpr, _ = roc_curve(y_test, y_pred_proba)
auc = roc_auc_score(y_test, y_pred_proba)

plt.figure(figsize=(7, 3))

DeepFace for extracting vector information of an image

!wget "http://*.jpg" -O "1.jpg"
!wget "https://*.jpg" -O "2.jpg"
import cv2
from google.colab.patches import cv2_imshow
im1 = cv2.imread("1.jpg")
#cv2.imshow("img", im1)
cv2_imshow(im1)
from deepface import DeepFace
import cv2
from google.colab.patches import cv2_imshow

#backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
backends = ['mtcnn']
for backend in backends:
  #face detection and alignment
  detected_face = DeepFace.detectFace("1.jpg", detector_backend = backend)
  
  print(detected_face)
  print(detected_face.shape)

  im = cv2.cvtColor(detected_face * 255, cv2.COLOR_BGR2RGB)
  #cv2.imshow("image", im)
  cv2_imshow(im)
[[[0.12156863 0.05882353 0.02352941]
  [0.2901961  0.18039216 0.1254902 ]
  [0.3137255  0.20392157 0.14901961]
  ...
  [0.06666667 0.01176471 0.01176471]
  [0.05882353 0.01176471 0.00784314]
  [0.03921569 0.00784314 0.00392157]]

 [[0.26666668 0.2        0.16470589]
  [0.19215687 0.08235294 0.02745098]
  [0.33333334 0.22352941 0.16862746]
  ...
  [0.03921569 0.00392157 0.00392157]
  [0.04313726 0.00784314 0.00784314]
  [0.04313726 0.         0.00392157]]

 [[0.11764706 0.05098039 0.01568628]
  [0.21176471 0.10588235 0.05882353]
  [0.44313726 0.3372549  0.27058825]
  ...
  [0.02352941 0.00392157 0.        ]
  [0.02352941 0.00392157 0.        ]
  [0.02745098 0.         0.        ]]

 ...

 [[0.24313726 0.1882353  0.13725491]
  [0.24313726 0.18431373 0.13725491]
  [0.22745098 0.16470589 0.11372549]
  ...
  [0.654902   0.69803923 0.78431374]
  [0.62352943 0.67058825 0.7529412 ]
  [0.38431373 0.4117647  0.45882353]]

 [[0.23529412 0.18039216 0.12941177]
  [0.22352941 0.16862746 0.11764706]
  [0.22745098 0.16470589 0.11764706]
  ...
  [0.6392157  0.69803923 0.78039217]
  [0.6156863  0.6745098  0.75686276]
  [0.36862746 0.40392157 0.4627451 ]]

 [[0.21568628 0.16862746 0.10980392]
  [0.2        0.15294118 0.09803922]
  [0.20784314 0.14901961 0.10196079]
  ...
  [0.6313726  0.6901961  0.77254903]
  [0.6039216  0.6627451  0.74509805]
  [0.36078432 0.39607844 0.4509804 ]]]
(224, 224, 3)
"""
Modified verify function for face embedding generation
backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
"""

from keras.preprocessing import image
import warnings
warnings.filterwarnings("ignore")
import time
import os
from os import path
from pathlib import Path
import gdown
import numpy as np
import pandas as pd
from tqdm import tqdm
import json
import cv2
from keras import backend as K
import keras
import tensorflow as tf
import pickle

from deepface import DeepFace
from deepface.basemodels import VGGFace, OpenFace, Facenet, FbDeepFace, DeepID
from deepface.extendedmodels import Age, Gender, Race, Emotion
from deepface.commons import functions, realtime, distance as dst


def FaceEmbeddingAndDistance(img1_path, img2_path = '', model_name ='Facenet', distance_metric = 'cosine', model = None, enforce_detection = True, detector_backend = 'mtcnn'):

  #--------------------------------
  #ensemble learning disabled.
  
  if model is None:
    if model_name == 'VGG-Face':
      print("Using VGG-Face model backend and", distance_metric,"distance.")
      model = VGGFace.loadModel()

    elif model_name == 'OpenFace':
      print("Using OpenFace model backend", distance_metric,"distance.")
      model = OpenFace.loadModel()

    elif model_name == 'Facenet':
      print("Using Facenet model backend", distance_metric,"distance.")
      model = Facenet.loadModel()

    elif model_name == 'DeepFace':
      print("Using FB DeepFace model backend", distance_metric,"distance.")
      model = FbDeepFace.loadModel()
    
    elif model_name == 'DeepID':
      print("Using DeepID2 model backend", distance_metric,"distance.")
      model = DeepID.loadModel()
    
    elif model_name == 'Dlib':
      print("Using Dlib ResNet model backend", distance_metric,"distance.")
      from deepface.basemodels.DlibResNet import DlibResNet  # imported lazily because the dlib model is very large
      model = DlibResNet()

    else:
      raise ValueError("Invalid model_name passed - ", model_name)
  else: #model != None
    print("Already built model is passed")

  #------------------------------
  #face recognition models have different size of inputs
  #depending on the Keras version, input_shape is either (None, 224, 224, 3) or [(None, 224, 224, 3)]; both cases are handled below.
    
  if model_name == 'Dlib': #this is not a regular keras model
    input_shape = (150, 150, 3)
  
  else: #keras based models
    input_shape = model.layers[0].input_shape
    
    if type(input_shape) == list:
      input_shape = input_shape[0][1:3]
    else:
      input_shape = input_shape[1:3]
    
  input_shape_x = input_shape[0]
  input_shape_y = input_shape[1]

  #------------------------------

  #tuned thresholds for model and metric pair
  threshold = functions.findThreshold(model_name, distance_metric)

  #------------------------------
  

  #----------------------
  #crop and align faces

  img1 = functions.preprocess_face(img=img1_path, target_size=(input_shape_y, input_shape_x), enforce_detection = enforce_detection, detector_backend = detector_backend)
  img2 = functions.preprocess_face(img=img2_path, target_size=(input_shape_y, input_shape_x), enforce_detection = enforce_detection, detector_backend = detector_backend)

  #----------------------
  #find embeddings

  img1_representation = model.predict(img1)[0,:]
  img2_representation = model.predict(img2)[0,:]

  print("FACE 1 Embedding:")
  print(img1_representation)

  print("FACE 2 Embedding:")
  print(img2_representation)

  #----------------------
  #find distances between embeddings

  if distance_metric == 'cosine':
    distance = dst.findCosineDistance(img1_representation, img2_representation)
  elif distance_metric == 'euclidean':
    distance = dst.findEuclideanDistance(img1_representation, img2_representation)
  elif distance_metric == 'euclidean_l2':
    distance = dst.findEuclideanDistance(dst.l2_normalize(img1_representation), dst.l2_normalize(img2_representation))
  else:
    raise ValueError("Invalid distance_metric passed - ", distance_metric)

  print("DISTANCE")
  print(distance)

  #----------------------
  #decision

  if distance <= threshold:
    identified =  "true"
  else:
    identified =  "false"

  print("IDENTIFIED")
  print(identified)

FaceEmbeddingAndDistance("1.jpg", "2.jpg", model_name='Facenet', detector_backend = 'mtcnn')
FACE 1 Embedding:
[-0.7229302  -1.766835   -1.5399052   0.59634393  1.203212   -1.693247
 -0.90845925  0.5264039   2.148173   -0.9786542  -0.00369854 -1.2710322
 -1.5515596  -0.4111185  -0.36896533 -0.30051672  0.35091963  0.5073533
 -1.7270111  -0.5230838   0.3376239  -1.0811361   1.5242224  -0.6137103
 -1.3100258   0.80050004 -0.7087368  -0.64483845  1.0830203   2.6056807
 -0.76527536 -0.83047277 -0.7335422  -0.01964059 -0.86749244  2.9645889
 -2.426583   -0.11157394 -2.3535717  -0.65058017  0.30864614 -0.77746457
 -0.6233895   0.44898677  2.5578005  -0.583796    0.8406945   1.1105415
 -1.652044   -0.6351479   0.07651432 -1.0454555  -1.8752071   0.50948805
 -1.6050931  -1.1769634  -0.02965304  1.5107706   0.83292925 -0.5382068
 -1.5981512  -0.6405941   0.5521577   0.22957848  0.506649    0.24680384
 -0.91464925 -0.18441322 -0.6801975  -1.0448433   0.52288735 -0.79405725
  0.5974493  -0.40668172 -0.00640235 -0.742475    0.1928863   0.31236258
 -0.37383577 -1.5883486  -1.5336255  -0.74254227 -0.8524561  -1.4625055
 -2.718953   -0.7180952  -1.2140683  -0.5232462   1.2576898  -1.1097553
  2.3971314   0.8855096  -0.16556528 -0.07307663 -1.8778017   0.8690948
 -0.39043528 -0.5494097  -2.2382076   0.7101087   0.15859437  0.2959841
  0.8605075  -0.2040207   0.77952844  0.04542177  0.92514265 -1.988945
  0.9418363   1.6509243  -0.20324889  0.2974357   0.37681833  1.095943
  1.6308782  -1.2553837  -0.10246387 -1.4697052  -0.5832107  -0.34192032
 -1.1347024   1.5154309  -0.00527111 -1.165709   -0.7296148  -0.20767921
  1.2530949  -0.9487353 ]
FACE 2 Embedding:
[ 0.9399996   1.3996615  -1.2931366   0.6869738  -0.03219241  0.96111965
  0.7378809  -0.24804354 -0.8128112   0.19901593  0.48911542 -0.91603553
 -1.1671298   0.88576627  0.25427592  1.1395477   0.45400882 -1.4845027
 -0.90582514 -1.1371222   0.47669724  1.2933927   1.4533392  -0.46943524
  0.10245587 -1.4916894  -2.3223586  -0.10979578  1.7803721   1.0051152
 -0.09164213 -0.64848715 -1.4191641   1.811776    0.73174113  0.2582223
 -0.26430857  1.7021953  -1.0571098  -1.1215096   0.3606074   1.5136883
 -0.30045512  0.26225814 -0.19101554  1.269355    1.0674374  -0.2550623
 -1.0582973   1.7474637  -1.7739134  -0.67914337 -0.1877765   1.1581128
 -2.281225    1.3955555  -1.2690883  -0.16299461  1.337664   -0.8831901
 -0.6862674   2.0526903  -0.6325836   1.333468   -0.10851342 -0.64831966
 -1.0277263   1.4572504  -0.29905424 -0.33187118 -0.54727656  1.1528811
  0.12454037 -1.5835186  -0.2271783   1.3911225   1.0170195   0.5741334
 -1.3088373  -0.5950714  -0.6856393  -0.910367   -2.0136826  -0.73777384
  0.319223   -2.1968741   0.9673934  -0.604423   -0.08049382 -1.948634
  1.88159     0.20169139  0.7295723  -1.0224706   1.2995481  -0.3402595
  1.1711328  -0.64862376  0.42063504 -0.01502114 -0.7048841   1.4360497
 -1.2988033   0.31773448  1.534014    0.98858756  1.3450235  -0.9417385
  0.26414695 -0.01988658  0.7418235  -0.04945141 -0.44838902  1.5288658
 -1.1905407   0.13961646 -0.17101136 -0.18599203 -1.9648114   0.66071814
 -0.07431012  1.5870664   1.5989372  -0.21751085  0.78908855 -1.5576671
  0.02266342  0.20999858]
DISTANCE
0.807837575674057
IDENTIFIED
false
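Recent deepface releases expose this directly through DeepFace.represent; a minimal sketch, assuming a version that provides it (older releases return the raw vector, newer ones a list of dicts with an "embedding" key):

from deepface import DeepFace

embedding = DeepFace.represent(img_path="1.jpg", model_name="Facenet",
                               detector_backend="mtcnn")
print(embedding)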
  [0.22745098 0.16470589 0.11764706]
  ...
  [0.6392157  0.69803923 0.78039217]
  [0.6156863  0.6745098  0.75686276]
  [0.36862746 0.40392157 0.4627451 ]]

 [[0.21568628 0.16862746 0.10980392]
  [0.2        0.15294118 0.09803922]
  [0.20784314 0.14901961 0.10196079]
  ...
  [0.6313726  0.6901961  0.77254903]
  [0.6039216  0.6627451  0.74509805]
  [0.36078432 0.39607844 0.4509804 ]]]
(224, 224, 3)
"""
Modified verify function for face embedding generation
backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
"""

import warnings
warnings.filterwarnings("ignore")

from deepface import DeepFace
from deepface.basemodels import VGGFace, OpenFace, Facenet, FbDeepFace, DeepID
from deepface.extendedmodels import Age, Gender, Race, Emotion
from deepface.commons import functions, realtime, distance as dst


def FaceEmbeddingAndDistance(img1_path, img2_path='', model_name='Facenet', distance_metric='cosine', model=None, enforce_detection=True, detector_backend='mtcnn'):

  #--------------------------------
  #ensemble learning is disabled in this modified version

  if model is None:
    if model_name == 'VGG-Face':
      print("Using VGG-Face model backend and", distance_metric,"distance.")
      model = VGGFace.loadModel()

    elif model_name == 'OpenFace':
      print("Using OpenFace model backend and", distance_metric, "distance.")
      model = OpenFace.loadModel()

    elif model_name == 'Facenet':
      print("Using Facenet model backend and", distance_metric, "distance.")
      model = Facenet.loadModel()

    elif model_name == 'DeepFace':
      print("Using FB DeepFace model backend and", distance_metric, "distance.")
      model = FbDeepFace.loadModel()

    elif model_name == 'DeepID':
      print("Using DeepID2 model backend and", distance_metric, "distance.")
      model = DeepID.loadModel()

    elif model_name == 'Dlib':
      print("Using Dlib ResNet model backend and", distance_metric, "distance.")
      from deepface.basemodels.DlibResNet import DlibResNet #imported lazily because the dlib model is very large
      model = DlibResNet()

    else:
      raise ValueError("Invalid model_name passed - " + model_name)
  else: #a pre-built model was passed in
    print("Already built model is passed")

  #------------------------------
  #face recognition models expect different input sizes
  #some Keras/TensorFlow versions return (None, 224, 224, 3) while others return [(None, 224, 224, 3)], so both cases are handled below

  if model_name == 'Dlib': #not a regular Keras model
    input_shape = (150, 150, 3)

  else: #Keras-based models
    input_shape = model.layers[0].input_shape

    if isinstance(input_shape, list):
      input_shape = input_shape[0][1:3]
    else:
      input_shape = input_shape[1:3]
    
  input_shape_x = input_shape[0]
  input_shape_y = input_shape[1]

  #------------------------------

  #tuned thresholds for model and metric pair
  threshold = functions.findThreshold(model_name, distance_metric)

  #----------------------
  #crop and align faces

  img1 = functions.preprocess_face(img=img1_path, target_size=(input_shape_y, input_shape_x), enforce_detection = enforce_detection, detector_backend = detector_backend)
  img2 = functions.preprocess_face(img=img2_path, target_size=(input_shape_y, input_shape_x), enforce_detection = enforce_detection, detector_backend = detector_backend)

  #----------------------
  #find embeddings

  img1_representation = model.predict(img1)[0,:]
  img2_representation = model.predict(img2)[0,:]

  print("FACE 1 Embedding:")
  print(img1_representation)

  print("FACE 2 Embedding:")
  print(img2_representation)

  #----------------------
  #find distances between embeddings

  if distance_metric == 'cosine':
    distance = dst.findCosineDistance(img1_representation, img2_representation)
  elif distance_metric == 'euclidean':
    distance = dst.findEuclideanDistance(img1_representation, img2_representation)
  elif distance_metric == 'euclidean_l2':
    distance = dst.findEuclideanDistance(dst.l2_normalize(img1_representation), dst.l2_normalize(img2_representation))
  else:
    raise ValueError("Invalid distance_metric passed - " + distance_metric)

  print("DISTANCE")
  print(distance)

  #----------------------
  #decision

  if distance <= threshold:
    identified = "true"
  else:
    identified = "false"

  print("IDENTIFIED")
  print(identified)

FaceEmbeddingAndDistance("1.jpg", "2.jpg", model_name='Facenet', detector_backend = 'mtcnn')
FACE 1 Embedding:
[-0.7229302  -1.766835   -1.5399052   0.59634393  1.203212   -1.693247
 -0.90845925  0.5264039   2.148173   -0.9786542  -0.00369854 -1.2710322
 -1.5515596  -0.4111185  -0.36896533 -0.30051672  0.35091963  0.5073533
 -1.7270111  -0.5230838   0.3376239  -1.0811361   1.5242224  -0.6137103
 -1.3100258   0.80050004 -0.7087368  -0.64483845  1.0830203   2.6056807
 -0.76527536 -0.83047277 -0.7335422  -0.01964059 -0.86749244  2.9645889
 -2.426583   -0.11157394 -2.3535717  -0.65058017  0.30864614 -0.77746457
 -0.6233895   0.44898677  2.5578005  -0.583796    0.8406945   1.1105415
 -1.652044   -0.6351479   0.07651432 -1.0454555  -1.8752071   0.50948805
 -1.6050931  -1.1769634  -0.02965304  1.5107706   0.83292925 -0.5382068
 -1.5981512  -0.6405941   0.5521577   0.22957848  0.506649    0.24680384
 -0.91464925 -0.18441322 -0.6801975  -1.0448433   0.52288735 -0.79405725
  0.5974493  -0.40668172 -0.00640235 -0.742475    0.1928863   0.31236258
 -0.37383577 -1.5883486  -1.5336255  -0.74254227 -0.8524561  -1.4625055
 -2.718953   -0.7180952  -1.2140683  -0.5232462   1.2576898  -1.1097553
  2.3971314   0.8855096  -0.16556528 -0.07307663 -1.8778017   0.8690948
 -0.39043528 -0.5494097  -2.2382076   0.7101087   0.15859437  0.2959841
  0.8605075  -0.2040207   0.77952844  0.04542177  0.92514265 -1.988945
  0.9418363   1.6509243  -0.20324889  0.2974357   0.37681833  1.095943
  1.6308782  -1.2553837  -0.10246387 -1.4697052  -0.5832107  -0.34192032
 -1.1347024   1.5154309  -0.00527111 -1.165709   -0.7296148  -0.20767921
  1.2530949  -0.9487353 ]
FACE 2 Embedding:
[ 0.9399996   1.3996615  -1.2931366   0.6869738  -0.03219241  0.96111965
  0.7378809  -0.24804354 -0.8128112   0.19901593  0.48911542 -0.91603553
 -1.1671298   0.88576627  0.25427592  1.1395477   0.45400882 -1.4845027
 -0.90582514 -1.1371222   0.47669724  1.2933927   1.4533392  -0.46943524
  0.10245587 -1.4916894  -2.3223586  -0.10979578  1.7803721   1.0051152
 -0.09164213 -0.64848715 -1.4191641   1.811776    0.73174113  0.2582223
 -0.26430857  1.7021953  -1.0571098  -1.1215096   0.3606074   1.5136883
 -0.30045512  0.26225814 -0.19101554  1.269355    1.0674374  -0.2550623
 -1.0582973   1.7474637  -1.7739134  -0.67914337 -0.1877765   1.1581128
 -2.281225    1.3955555  -1.2690883  -0.16299461  1.337664   -0.8831901
 -0.6862674   2.0526903  -0.6325836   1.333468   -0.10851342 -0.64831966
 -1.0277263   1.4572504  -0.29905424 -0.33187118 -0.54727656  1.1528811
  0.12454037 -1.5835186  -0.2271783   1.3911225   1.0170195   0.5741334
 -1.3088373  -0.5950714  -0.6856393  -0.910367   -2.0136826  -0.73777384
  0.319223   -2.1968741   0.9673934  -0.604423   -0.08049382 -1.948634
  1.88159     0.20169139  0.7295723  -1.0224706   1.2995481  -0.3402595
  1.1711328  -0.64862376  0.42063504 -0.01502114 -0.7048841   1.4360497
 -1.2988033   0.31773448  1.534014    0.98858756  1.3450235  -0.9417385
  0.26414695 -0.01988658  0.7418235  -0.04945141 -0.44838902  1.5288658
 -1.1905407   0.13961646 -0.17101136 -0.18599203 -1.9648114   0.66071814
 -0.07431012  1.5870664   1.5989372  -0.21751085  0.78908855 -1.5576671
  0.02266342  0.20999858]
DISTANCE
0.807837575674057
IDENTIFIED
false
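The cosine distance of 0.8078 exceeds the tuned threshold for the Facenet/cosine pair, so the pair is labelled false. The same decision can be reproduced with the library's built-in verify call, which returns the distance and the verdict in a single dictionary; a minimal sketch, assuming the same two images:

from deepface import DeepFace

result = DeepFace.verify("1.jpg", "2.jpg", model_name = 'Facenet', distance_metric = 'cosine', detector_backend = 'mtcnn')
print(result["verified"], result["distance"])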
-----------------------
!wget "http://*.jpg" -O "1.jpg"
!wget "https://*.jpg" -O "2.jpg"
import cv2
from google.colab.patches import cv2_imshow
im1 = cv2.imread("1.jpg")
#cv2.imshow("img", im1)
cv2_imshow(im1)
from deepface import DeepFace
import cv2
from google.colab.patches import cv2_imshow

#backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
backends = ['mtcnn']
for backend in backends:
  #face detection and alignment
  detected_face = DeepFace.detectFace("1.jpg", detector_backend = backend)
  
  print(detected_face)
  print(detected_face.shape)

  #detectFace returns RGB pixels normalized to [0, 1]; scale back and swap channels for cv2_imshow, which expects BGR
  im = cv2.cvtColor(detected_face * 255, cv2.COLOR_BGR2RGB)
  #cv2.imshow("image", im)
  cv2_imshow(im)
[[[0.12156863 0.05882353 0.02352941]
  [0.2901961  0.18039216 0.1254902 ]
  [0.3137255  0.20392157 0.14901961]
  ...
  [0.06666667 0.01176471 0.01176471]
  [0.05882353 0.01176471 0.00784314]
  [0.03921569 0.00784314 0.00392157]]

 [[0.26666668 0.2        0.16470589]
  [0.19215687 0.08235294 0.02745098]
  [0.33333334 0.22352941 0.16862746]
  ...
  [0.03921569 0.00392157 0.00392157]
  [0.04313726 0.00784314 0.00784314]
  [0.04313726 0.         0.00392157]]

 [[0.11764706 0.05098039 0.01568628]
  [0.21176471 0.10588235 0.05882353]
  [0.44313726 0.3372549  0.27058825]
  ...
  [0.02352941 0.00392157 0.        ]
  [0.02352941 0.00392157 0.        ]
  [0.02745098 0.         0.        ]]

 ...

 [[0.24313726 0.1882353  0.13725491]
  [0.24313726 0.18431373 0.13725491]
  [0.22745098 0.16470589 0.11372549]
  ...
  [0.654902   0.69803923 0.78431374]
  [0.62352943 0.67058825 0.7529412 ]
  [0.38431373 0.4117647  0.45882353]]

 [[0.23529412 0.18039216 0.12941177]
  [0.22352941 0.16862746 0.11764706]
  [0.22745098 0.16470589 0.11764706]
  ...
  [0.6392157  0.69803923 0.78039217]
  [0.6156863  0.6745098  0.75686276]
  [0.36862746 0.40392157 0.4627451 ]]

 [[0.21568628 0.16862746 0.10980392]
  [0.2        0.15294118 0.09803922]
  [0.20784314 0.14901961 0.10196079]
  ...
  [0.6313726  0.6901961  0.77254903]
  [0.6039216  0.6627451  0.74509805]
  [0.36078432 0.39607844 0.4509804 ]]]
(224, 224, 3)
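To compare all four detector backends rather than just mtcnn, the commented-out list above can be restored. A quick sketch that times each backend on the same image, assuming "1.jpg" is still present and every backend's dependencies are installed:

import time
from deepface import DeepFace

for backend in ['opencv', 'ssd', 'dlib', 'mtcnn']:
  start = time.time()
  face = DeepFace.detectFace("1.jpg", detector_backend = backend)
  print(backend, face.shape, round(time.time() - start, 2), "seconds")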
"""
Modified verify function for face embedding generation
backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
"""

from keras.preprocessing import image
import warnings
warnings.filterwarnings("ignore")
import time
import os
from os import path
from pathlib import Path
import gdown
import numpy as np
import pandas as pd
from tqdm import tqdm
import json
import cv2
from keras import backend as K
import keras
import tensorflow as tf
import pickle

from deepface import DeepFace
from deepface.basemodels import VGGFace, OpenFace, Facenet, FbDeepFace, DeepID
from deepface.extendedmodels import Age, Gender, Race, Emotion
from deepface.commons import functions, realtime, distance as dst


def FaceEmbeddingAndDistance(img1_path, img2_path = '', model_name ='Facenet', distance_metric = 'cosine', model = None, enforce_detection = True, detector_backend = 'mtcnn'):

  #--------------------------------
  #ensemble learning disabled.
  
  if model == None:
    if model_name == 'VGG-Face':
      print("Using VGG-Face model backend and", distance_metric,"distance.")
      model = VGGFace.loadModel()

    elif model_name == 'OpenFace':
      print("Using OpenFace model backend", distance_metric,"distance.")
      model = OpenFace.loadModel()

    elif model_name == 'Facenet':
      print("Using Facenet model backend", distance_metric,"distance.")
      model = Facenet.loadModel()

    elif model_name == 'DeepFace':
      print("Using FB DeepFace model backend", distance_metric,"distance.")
      model = FbDeepFace.loadModel()
    
    elif model_name == 'DeepID':
      print("Using DeepID2 model backend", distance_metric,"distance.")
      model = DeepID.loadModel()
    
    elif model_name == 'Dlib':
      print("Using Dlib ResNet model backend", distance_metric,"distance.")
      from deepface.basemodels.DlibResNet import DlibResNet #this is not a must because it is very huge.
      model = DlibResNet()

    else:
      raise ValueError("Invalid model_name passed - ", model_name)
  else: #model != None
    print("Already built model is passed")

  #------------------------------
  #face recognition models have different size of inputs
  #my environment returns (None, 224, 224, 3) but some people mentioned that they got [(None, 224, 224, 3)]. I think this is because of version issue.
    
  if model_name == 'Dlib': #this is not a regular keras model
    input_shape = (150, 150, 3)
  
  else: #keras based models
    input_shape = model.layers[0].input_shape
    
    if type(input_shape) == list:
      input_shape = input_shape[0][1:3]
    else:
      input_shape = input_shape[1:3]
    
  input_shape_x = input_shape[0]
  input_shape_y = input_shape[1]

  #------------------------------

  #tuned thresholds for model and metric pair
  threshold = functions.findThreshold(model_name, distance_metric)

  #------------------------------
  

  #----------------------
  #crop and align faces

  img1 = functions.preprocess_face(img=img1_path, target_size=(input_shape_y, input_shape_x), enforce_detection = enforce_detection, detector_backend = detector_backend)
  img2 = functions.preprocess_face(img=img2_path, target_size=(input_shape_y, input_shape_x), enforce_detection = enforce_detection, detector_backend = detector_backend)

  #----------------------
  #find embeddings

  img1_representation = model.predict(img1)[0,:]
  img2_representation = model.predict(img2)[0,:]

  print("FACE 1 Embedding:")
  print(img1_representation)

  print("FACE 2 Embedding:")
  print(img2_representation)

  #----------------------
  #find distances between embeddings

  if distance_metric == 'cosine':
    distance = dst.findCosineDistance(img1_representation, img2_representation)
  elif distance_metric == 'euclidean':
    distance = dst.findEuclideanDistance(img1_representation, img2_representation)
  elif distance_metric == 'euclidean_l2':
    distance = dst.findEuclideanDistance(dst.l2_normalize(img1_representation), dst.l2_normalize(img2_representation))
  else:
    raise ValueError("Invalid distance_metric passed - ", distance_metric)

  print("DISTANCE")
  print(distance)

  #----------------------
  #decision

  if distance <= threshold:
    identified =  "true"
  else:
    identified =  "false"

  print("IDENTIFIED")
  print(identified)

FaceEmbeddingAndDistance("1.jpg", "2.jpg", model_name='Facenet', detector_backend = 'mtcnn')
FACE 1 Embedding:
[-0.7229302  -1.766835   -1.5399052   0.59634393  1.203212   -1.693247
 -0.90845925  0.5264039   2.148173   -0.9786542  -0.00369854 -1.2710322
 -1.5515596  -0.4111185  -0.36896533 -0.30051672  0.35091963  0.5073533
 -1.7270111  -0.5230838   0.3376239  -1.0811361   1.5242224  -0.6137103
 -1.3100258   0.80050004 -0.7087368  -0.64483845  1.0830203   2.6056807
 -0.76527536 -0.83047277 -0.7335422  -0.01964059 -0.86749244  2.9645889
 -2.426583   -0.11157394 -2.3535717  -0.65058017  0.30864614 -0.77746457
 -0.6233895   0.44898677  2.5578005  -0.583796    0.8406945   1.1105415
 -1.652044   -0.6351479   0.07651432 -1.0454555  -1.8752071   0.50948805
 -1.6050931  -1.1769634  -0.02965304  1.5107706   0.83292925 -0.5382068
 -1.5981512  -0.6405941   0.5521577   0.22957848  0.506649    0.24680384
 -0.91464925 -0.18441322 -0.6801975  -1.0448433   0.52288735 -0.79405725
  0.5974493  -0.40668172 -0.00640235 -0.742475    0.1928863   0.31236258
 -0.37383577 -1.5883486  -1.5336255  -0.74254227 -0.8524561  -1.4625055
 -2.718953   -0.7180952  -1.2140683  -0.5232462   1.2576898  -1.1097553
  2.3971314   0.8855096  -0.16556528 -0.07307663 -1.8778017   0.8690948
 -0.39043528 -0.5494097  -2.2382076   0.7101087   0.15859437  0.2959841
  0.8605075  -0.2040207   0.77952844  0.04542177  0.92514265 -1.988945
  0.9418363   1.6509243  -0.20324889  0.2974357   0.37681833  1.095943
  1.6308782  -1.2553837  -0.10246387 -1.4697052  -0.5832107  -0.34192032
 -1.1347024   1.5154309  -0.00527111 -1.165709   -0.7296148  -0.20767921
  1.2530949  -0.9487353 ]
FACE 2 Embedding:
[ 0.9399996   1.3996615  -1.2931366   0.6869738  -0.03219241  0.96111965
  0.7378809  -0.24804354 -0.8128112   0.19901593  0.48911542 -0.91603553
 -1.1671298   0.88576627  0.25427592  1.1395477   0.45400882 -1.4845027
 -0.90582514 -1.1371222   0.47669724  1.2933927   1.4533392  -0.46943524
  0.10245587 -1.4916894  -2.3223586  -0.10979578  1.7803721   1.0051152
 -0.09164213 -0.64848715 -1.4191641   1.811776    0.73174113  0.2582223
 -0.26430857  1.7021953  -1.0571098  -1.1215096   0.3606074   1.5136883
 -0.30045512  0.26225814 -0.19101554  1.269355    1.0674374  -0.2550623
 -1.0582973   1.7474637  -1.7739134  -0.67914337 -0.1877765   1.1581128
 -2.281225    1.3955555  -1.2690883  -0.16299461  1.337664   -0.8831901
 -0.6862674   2.0526903  -0.6325836   1.333468   -0.10851342 -0.64831966
 -1.0277263   1.4572504  -0.29905424 -0.33187118 -0.54727656  1.1528811
  0.12454037 -1.5835186  -0.2271783   1.3911225   1.0170195   0.5741334
 -1.3088373  -0.5950714  -0.6856393  -0.910367   -2.0136826  -0.73777384
  0.319223   -2.1968741   0.9673934  -0.604423   -0.08049382 -1.948634
  1.88159     0.20169139  0.7295723  -1.0224706   1.2995481  -0.3402595
  1.1711328  -0.64862376  0.42063504 -0.01502114 -0.7048841   1.4360497
 -1.2988033   0.31773448  1.534014    0.98858756  1.3450235  -0.9417385
  0.26414695 -0.01988658  0.7418235  -0.04945141 -0.44838902  1.5288658
 -1.1905407   0.13961646 -0.17101136 -0.18599203 -1.9648114   0.66071814
 -0.07431012  1.5870664   1.5989372  -0.21751085  0.78908855 -1.5576671
  0.02266342  0.20999858]
DISTANCE
0.807837575674057
IDENTIFIED
false
-----------------------
!wget "http://*.jpg" -O "1.jpg"
!wget "https://*.jpg" -O "2.jpg"
import cv2
from google.colab.patches import cv2_imshow
im1 = cv2.imread("1.jpg")
#cv2.imshow("img", im1)
cv2_imshow(im1)
from deepface import DeepFace
import cv2
from google.colab.patches import cv2_imshow

#backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
backends = ['mtcnn']
for backend in backends:
  #face detection and alignment
  detected_face = DeepFace.detectFace("1.jpg", detector_backend = backend)
  
  print(detected_face)
  print(detected_face.shape)

  im = cv2.cvtColor(detected_face * 255, cv2.COLOR_BGR2RGB)
  #cv2.imshow("image", im)
  cv2_imshow(im)
[[[0.12156863 0.05882353 0.02352941]
  [0.2901961  0.18039216 0.1254902 ]
  [0.3137255  0.20392157 0.14901961]
  ...
  [0.06666667 0.01176471 0.01176471]
  [0.05882353 0.01176471 0.00784314]
  [0.03921569 0.00784314 0.00392157]]

 [[0.26666668 0.2        0.16470589]
  [0.19215687 0.08235294 0.02745098]
  [0.33333334 0.22352941 0.16862746]
  ...
  [0.03921569 0.00392157 0.00392157]
  [0.04313726 0.00784314 0.00784314]
  [0.04313726 0.         0.00392157]]

 [[0.11764706 0.05098039 0.01568628]
  [0.21176471 0.10588235 0.05882353]
  [0.44313726 0.3372549  0.27058825]
  ...
  [0.02352941 0.00392157 0.        ]
  [0.02352941 0.00392157 0.        ]
  [0.02745098 0.         0.        ]]

 ...

 [[0.24313726 0.1882353  0.13725491]
  [0.24313726 0.18431373 0.13725491]
  [0.22745098 0.16470589 0.11372549]
  ...
  [0.654902   0.69803923 0.78431374]
  [0.62352943 0.67058825 0.7529412 ]
  [0.38431373 0.4117647  0.45882353]]

 [[0.23529412 0.18039216 0.12941177]
  [0.22352941 0.16862746 0.11764706]
  [0.22745098 0.16470589 0.11764706]
  ...
  [0.6392157  0.69803923 0.78039217]
  [0.6156863  0.6745098  0.75686276]
  [0.36862746 0.40392157 0.4627451 ]]

 [[0.21568628 0.16862746 0.10980392]
  [0.2        0.15294118 0.09803922]
  [0.20784314 0.14901961 0.10196079]
  ...
  [0.6313726  0.6901961  0.77254903]
  [0.6039216  0.6627451  0.74509805]
  [0.36078432 0.39607844 0.4509804 ]]]
(224, 224, 3)
"""
Modified verify function for face embedding generation
backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
"""

from keras.preprocessing import image
import warnings
warnings.filterwarnings("ignore")
import time
import os
from os import path
from pathlib import Path
import gdown
import numpy as np
import pandas as pd
from tqdm import tqdm
import json
import cv2
from keras import backend as K
import keras
import tensorflow as tf
import pickle

from deepface import DeepFace
from deepface.basemodels import VGGFace, OpenFace, Facenet, FbDeepFace, DeepID
from deepface.extendedmodels import Age, Gender, Race, Emotion
from deepface.commons import functions, realtime, distance as dst


def FaceEmbeddingAndDistance(img1_path, img2_path = '', model_name ='Facenet', distance_metric = 'cosine', model = None, enforce_detection = True, detector_backend = 'mtcnn'):

  #--------------------------------
  #ensemble learning disabled.
  
  if model == None:
    if model_name == 'VGG-Face':
      print("Using VGG-Face model backend and", distance_metric,"distance.")
      model = VGGFace.loadModel()

    elif model_name == 'OpenFace':
      print("Using OpenFace model backend", distance_metric,"distance.")
      model = OpenFace.loadModel()

    elif model_name == 'Facenet':
      print("Using Facenet model backend", distance_metric,"distance.")
      model = Facenet.loadModel()

    elif model_name == 'DeepFace':
      print("Using FB DeepFace model backend", distance_metric,"distance.")
      model = FbDeepFace.loadModel()
    
    elif model_name == 'DeepID':
      print("Using DeepID2 model backend", distance_metric,"distance.")
      model = DeepID.loadModel()
    
    elif model_name == 'Dlib':
      print("Using Dlib ResNet model backend", distance_metric,"distance.")
      from deepface.basemodels.DlibResNet import DlibResNet #this is not a must because it is very huge.
      model = DlibResNet()

    else:
      raise ValueError("Invalid model_name passed - ", model_name)
  else: #model != None
    print("Already built model is passed")

  #------------------------------
  #face recognition models have different size of inputs
  #my environment returns (None, 224, 224, 3) but some people mentioned that they got [(None, 224, 224, 3)]. I think this is because of version issue.
    
  if model_name == 'Dlib': #this is not a regular keras model
    input_shape = (150, 150, 3)
  
  else: #keras based models
    input_shape = model.layers[0].input_shape
    
    if type(input_shape) == list:
      input_shape = input_shape[0][1:3]
    else:
      input_shape = input_shape[1:3]
    
  input_shape_x = input_shape[0]
  input_shape_y = input_shape[1]

  #------------------------------

  #tuned thresholds for model and metric pair
  threshold = functions.findThreshold(model_name, distance_metric)

  #------------------------------
  

  #----------------------
  #crop and align faces

  img1 = functions.preprocess_face(img=img1_path, target_size=(input_shape_y, input_shape_x), enforce_detection = enforce_detection, detector_backend = detector_backend)
  img2 = functions.preprocess_face(img=img2_path, target_size=(input_shape_y, input_shape_x), enforce_detection = enforce_detection, detector_backend = detector_backend)

  #----------------------
  #find embeddings

  img1_representation = model.predict(img1)[0,:]
  img2_representation = model.predict(img2)[0,:]

  print("FACE 1 Embedding:")
  print(img1_representation)

  print("FACE 2 Embedding:")
  print(img2_representation)

  #----------------------
  #find distances between embeddings

  if distance_metric == 'cosine':
    distance = dst.findCosineDistance(img1_representation, img2_representation)
  elif distance_metric == 'euclidean':
    distance = dst.findEuclideanDistance(img1_representation, img2_representation)
  elif distance_metric == 'euclidean_l2':
    distance = dst.findEuclideanDistance(dst.l2_normalize(img1_representation), dst.l2_normalize(img2_representation))
  else:
    raise ValueError("Invalid distance_metric passed - ", distance_metric)

  print("DISTANCE")
  print(distance)

  #----------------------
  #decision

  if distance <= threshold:
    identified =  "true"
  else:
    identified =  "false"

  print("IDENTIFIED")
  print(identified)

FaceEmbeddingAndDistance("1.jpg", "2.jpg", model_name='Facenet', detector_backend = 'mtcnn')
FACE 1 Embedding:
[-0.7229302  -1.766835   -1.5399052   0.59634393  1.203212   -1.693247
 -0.90845925  0.5264039   2.148173   -0.9786542  -0.00369854 -1.2710322
 -1.5515596  -0.4111185  -0.36896533 -0.30051672  0.35091963  0.5073533
 -1.7270111  -0.5230838   0.3376239  -1.0811361   1.5242224  -0.6137103
 -1.3100258   0.80050004 -0.7087368  -0.64483845  1.0830203   2.6056807
 -0.76527536 -0.83047277 -0.7335422  -0.01964059 -0.86749244  2.9645889
 -2.426583   -0.11157394 -2.3535717  -0.65058017  0.30864614 -0.77746457
 -0.6233895   0.44898677  2.5578005  -0.583796    0.8406945   1.1105415
 -1.652044   -0.6351479   0.07651432 -1.0454555  -1.8752071   0.50948805
 -1.6050931  -1.1769634  -0.02965304  1.5107706   0.83292925 -0.5382068
 -1.5981512  -0.6405941   0.5521577   0.22957848  0.506649    0.24680384
 -0.91464925 -0.18441322 -0.6801975  -1.0448433   0.52288735 -0.79405725
  0.5974493  -0.40668172 -0.00640235 -0.742475    0.1928863   0.31236258
 -0.37383577 -1.5883486  -1.5336255  -0.74254227 -0.8524561  -1.4625055
 -2.718953   -0.7180952  -1.2140683  -0.5232462   1.2576898  -1.1097553
  2.3971314   0.8855096  -0.16556528 -0.07307663 -1.8778017   0.8690948
 -0.39043528 -0.5494097  -2.2382076   0.7101087   0.15859437  0.2959841
  0.8605075  -0.2040207   0.77952844  0.04542177  0.92514265 -1.988945
  0.9418363   1.6509243  -0.20324889  0.2974357   0.37681833  1.095943
  1.6308782  -1.2553837  -0.10246387 -1.4697052  -0.5832107  -0.34192032
 -1.1347024   1.5154309  -0.00527111 -1.165709   -0.7296148  -0.20767921
  1.2530949  -0.9487353 ]
FACE 2 Embedding:
[ 0.9399996   1.3996615  -1.2931366   0.6869738  -0.03219241  0.96111965
  0.7378809  -0.24804354 -0.8128112   0.19901593  0.48911542 -0.91603553
 -1.1671298   0.88576627  0.25427592  1.1395477   0.45400882 -1.4845027
 -0.90582514 -1.1371222   0.47669724  1.2933927   1.4533392  -0.46943524
  0.10245587 -1.4916894  -2.3223586  -0.10979578  1.7803721   1.0051152
 -0.09164213 -0.64848715 -1.4191641   1.811776    0.73174113  0.2582223
 -0.26430857  1.7021953  -1.0571098  -1.1215096   0.3606074   1.5136883
 -0.30045512  0.26225814 -0.19101554  1.269355    1.0674374  -0.2550623
 -1.0582973   1.7474637  -1.7739134  -0.67914337 -0.1877765   1.1581128
 -2.281225    1.3955555  -1.2690883  -0.16299461  1.337664   -0.8831901
 -0.6862674   2.0526903  -0.6325836   1.333468   -0.10851342 -0.64831966
 -1.0277263   1.4572504  -0.29905424 -0.33187118 -0.54727656  1.1528811
  0.12454037 -1.5835186  -0.2271783   1.3911225   1.0170195   0.5741334
 -1.3088373  -0.5950714  -0.6856393  -0.910367   -2.0136826  -0.73777384
  0.319223   -2.1968741   0.9673934  -0.604423   -0.08049382 -1.948634
  1.88159     0.20169139  0.7295723  -1.0224706   1.2995481  -0.3402595
  1.1711328  -0.64862376  0.42063504 -0.01502114 -0.7048841   1.4360497
 -1.2988033   0.31773448  1.534014    0.98858756  1.3450235  -0.9417385
  0.26414695 -0.01988658  0.7418235  -0.04945141 -0.44838902  1.5288658
 -1.1905407   0.13961646 -0.17101136 -0.18599203 -1.9648114   0.66071814
 -0.07431012  1.5870664   1.5989372  -0.21751085  0.78908855 -1.5576671
  0.02266342  0.20999858]
DISTANCE
0.807837575674057
IDENTIFIED
false
-----------------------
!wget "http://*.jpg" -O "1.jpg"
!wget "https://*.jpg" -O "2.jpg"
import cv2
from google.colab.patches import cv2_imshow
im1 = cv2.imread("1.jpg")
#cv2.imshow("img", im1)
cv2_imshow(im1)
from deepface import DeepFace
import cv2
from google.colab.patches import cv2_imshow

#backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
backends = ['mtcnn']
for backend in backends:
  #face detection and alignment
  detected_face = DeepFace.detectFace("1.jpg", detector_backend = backend)
  
  print(detected_face)
  print(detected_face.shape)

  im = cv2.cvtColor(detected_face * 255, cv2.COLOR_BGR2RGB)
  #cv2.imshow("image", im)
  cv2_imshow(im)
[[[0.12156863 0.05882353 0.02352941]
  [0.2901961  0.18039216 0.1254902 ]
  [0.3137255  0.20392157 0.14901961]
  ...
  [0.06666667 0.01176471 0.01176471]
  [0.05882353 0.01176471 0.00784314]
  [0.03921569 0.00784314 0.00392157]]

 [[0.26666668 0.2        0.16470589]
  [0.19215687 0.08235294 0.02745098]
  [0.33333334 0.22352941 0.16862746]
  ...
  [0.03921569 0.00392157 0.00392157]
  [0.04313726 0.00784314 0.00784314]
  [0.04313726 0.         0.00392157]]

 [[0.11764706 0.05098039 0.01568628]
  [0.21176471 0.10588235 0.05882353]
  [0.44313726 0.3372549  0.27058825]
  ...
  [0.02352941 0.00392157 0.        ]
  [0.02352941 0.00392157 0.        ]
  [0.02745098 0.         0.        ]]

 ...

 [[0.24313726 0.1882353  0.13725491]
  [0.24313726 0.18431373 0.13725491]
  [0.22745098 0.16470589 0.11372549]
  ...
  [0.654902   0.69803923 0.78431374]
  [0.62352943 0.67058825 0.7529412 ]
  [0.38431373 0.4117647  0.45882353]]

 [[0.23529412 0.18039216 0.12941177]
  [0.22352941 0.16862746 0.11764706]
  [0.22745098 0.16470589 0.11764706]
  ...
  [0.6392157  0.69803923 0.78039217]
  [0.6156863  0.6745098  0.75686276]
  [0.36862746 0.40392157 0.4627451 ]]

 [[0.21568628 0.16862746 0.10980392]
  [0.2        0.15294118 0.09803922]
  [0.20784314 0.14901961 0.10196079]
  ...
  [0.6313726  0.6901961  0.77254903]
  [0.6039216  0.6627451  0.74509805]
  [0.36078432 0.39607844 0.4509804 ]]]
(224, 224, 3)
"""
Modified verify function for face embedding generation
backends = ['opencv', 'ssd', 'dlib', 'mtcnn']
"""

from keras.preprocessing import image
import warnings
warnings.filterwarnings("ignore")
import time
import os
from os import path
from pathlib import Path
import gdown
import numpy as np
import pandas as pd
from tqdm import tqdm
import json
import cv2
from keras import backend as K
import keras
import tensorflow as tf
import pickle

from deepface import DeepFace
from deepface.basemodels import VGGFace, OpenFace, Facenet, FbDeepFace, DeepID
from deepface.extendedmodels import Age, Gender, Race, Emotion
from deepface.commons import functions, realtime, distance as dst


def FaceEmbeddingAndDistance(img1_path, img2_path = '', model_name ='Facenet', distance_metric = 'cosine', model = None, enforce_detection = True, detector_backend = 'mtcnn'):

  #--------------------------------
  #ensemble learning disabled.
  
  if model == None:
    if model_name == 'VGG-Face':
      print("Using VGG-Face model backend and", distance_metric,"distance.")
      model = VGGFace.loadModel()

    elif model_name == 'OpenFace':
      print("Using OpenFace model backend", distance_metric,"distance.")
      model = OpenFace.loadModel()

    elif model_name == 'Facenet':
      print("Using Facenet model backend", distance_metric,"distance.")
      model = Facenet.loadModel()

    elif model_name == 'DeepFace':
      print("Using FB DeepFace model backend", distance_metric,"distance.")
      model = FbDeepFace.loadModel()
    
    elif model_name == 'DeepID':
      print("Using DeepID2 model backend", distance_metric,"distance.")
      model = DeepID.loadModel()
    
    elif model_name == 'Dlib':
      print("Using Dlib ResNet model backend", distance_metric,"distance.")
      from deepface.basemodels.DlibResNet import DlibResNet #this is not a must because it is very huge.
      model = DlibResNet()

    else:
      raise ValueError("Invalid model_name passed - ", model_name)
  else: #model != None
    print("Already built model is passed")

  #------------------------------
  #face recognition models have different size of inputs
  #my environment returns (None, 224, 224, 3) but some people mentioned that they got [(None, 224, 224, 3)]. I think this is because of version issue.
    
  if model_name == 'Dlib': #this is not a regular keras model
    input_shape = (150, 150, 3)
  
  else: #keras based models
    input_shape = model.layers[0].input_shape
    
    if type(input_shape) == list:
      input_shape = input_shape[0][1:3]
    else:
      input_shape = input_shape[1:3]
    
  input_shape_x = input_shape[0]
  input_shape_y = input_shape[1]

  #------------------------------

  #tuned thresholds for model and metric pair
  threshold = functions.findThreshold(model_name, distance_metric)

  #------------------------------
  

  #----------------------
  #crop and align faces

  img1 = functions.preprocess_face(img=img1_path, target_size=(input_shape_y, input_shape_x), enforce_detection = enforce_detection, detector_backend = detector_backend)
  img2 = functions.preprocess_face(img=img2_path, target_size=(input_shape_y, input_shape_x), enforce_detection = enforce_detection, detector_backend = detector_backend)

  #----------------------
  #find embeddings

  img1_representation = model.predict(img1)[0,:]
  img2_representation = model.predict(img2)[0,:]

  print("FACE 1 Embedding:")
  print(img1_representation)

  print("FACE 2 Embedding:")
  print(img2_representation)

  #----------------------
  #find distances between embeddings

  if distance_metric == 'cosine':
    distance = dst.findCosineDistance(img1_representation, img2_representation)
  elif distance_metric == 'euclidean':
    distance = dst.findEuclideanDistance(img1_representation, img2_representation)
  elif distance_metric == 'euclidean_l2':
    distance = dst.findEuclideanDistance(dst.l2_normalize(img1_representation), dst.l2_normalize(img2_representation))
  else:
    raise ValueError("Invalid distance_metric passed - ", distance_metric)

  print("DISTANCE")
  print(distance)

  #----------------------
  #decision

  if distance <= threshold:
    identified =  "true"
  else:
    identified =  "false"

  print("IDENTIFIED")
  print(identified)

FaceEmbeddingAndDistance("1.jpg", "2.jpg", model_name='Facenet', detector_backend = 'mtcnn')
FACE 1 Embedding:
[-0.7229302  -1.766835   -1.5399052   0.59634393  1.203212   -1.693247
 -0.90845925  0.5264039   2.148173   -0.9786542  -0.00369854 -1.2710322
 -1.5515596  -0.4111185  -0.36896533 -0.30051672  0.35091963  0.5073533
 -1.7270111  -0.5230838   0.3376239  -1.0811361   1.5242224  -0.6137103
 -1.3100258   0.80050004 -0.7087368  -0.64483845  1.0830203   2.6056807
 -0.76527536 -0.83047277 -0.7335422  -0.01964059 -0.86749244  2.9645889
 -2.426583   -0.11157394 -2.3535717  -0.65058017  0.30864614 -0.77746457
 -0.6233895   0.44898677  2.5578005  -0.583796    0.8406945   1.1105415
 -1.652044   -0.6351479   0.07651432 -1.0454555  -1.8752071   0.50948805
 -1.6050931  -1.1769634  -0.02965304  1.5107706   0.83292925 -0.5382068
 -1.5981512  -0.6405941   0.5521577   0.22957848  0.506649    0.24680384
 -0.91464925 -0.18441322 -0.6801975  -1.0448433   0.52288735 -0.79405725
  0.5974493  -0.40668172 -0.00640235 -0.742475    0.1928863   0.31236258
 -0.37383577 -1.5883486  -1.5336255  -0.74254227 -0.8524561  -1.4625055
 -2.718953   -0.7180952  -1.2140683  -0.5232462   1.2576898  -1.1097553
  2.3971314   0.8855096  -0.16556528 -0.07307663 -1.8778017   0.8690948
 -0.39043528 -0.5494097  -2.2382076   0.7101087   0.15859437  0.2959841
  0.8605075  -0.2040207   0.77952844  0.04542177  0.92514265 -1.988945
  0.9418363   1.6509243  -0.20324889  0.2974357   0.37681833  1.095943
  1.6308782  -1.2553837  -0.10246387 -1.4697052  -0.5832107  -0.34192032
 -1.1347024   1.5154309  -0.00527111 -1.165709   -0.7296148  -0.20767921
  1.2530949  -0.9487353 ]
FACE 2 Embedding:
[ 0.9399996   1.3996615  -1.2931366   0.6869738  -0.03219241  0.96111965
  0.7378809  -0.24804354 -0.8128112   0.19901593  0.48911542 -0.91603553
 -1.1671298   0.88576627  0.25427592  1.1395477   0.45400882 -1.4845027
 -0.90582514 -1.1371222   0.47669724  1.2933927   1.4533392  -0.46943524
  0.10245587 -1.4916894  -2.3223586  -0.10979578  1.7803721   1.0051152
 -0.09164213 -0.64848715 -1.4191641   1.811776    0.73174113  0.2582223
 -0.26430857  1.7021953  -1.0571098  -1.1215096   0.3606074   1.5136883
 -0.30045512  0.26225814 -0.19101554  1.269355    1.0674374  -0.2550623
 -1.0582973   1.7474637  -1.7739134  -0.67914337 -0.1877765   1.1581128
 -2.281225    1.3955555  -1.2690883  -0.16299461  1.337664   -0.8831901
 -0.6862674   2.0526903  -0.6325836   1.333468   -0.10851342 -0.64831966
 -1.0277263   1.4572504  -0.29905424 -0.33187118 -0.54727656  1.1528811
  0.12454037 -1.5835186  -0.2271783   1.3911225   1.0170195   0.5741334
 -1.3088373  -0.5950714  -0.6856393  -0.910367   -2.0136826  -0.73777384
  0.319223   -2.1968741   0.9673934  -0.604423   -0.08049382 -1.948634
  1.88159     0.20169139  0.7295723  -1.0224706   1.2995481  -0.3402595
  1.1711328  -0.64862376  0.42063504 -0.01502114 -0.7048841   1.4360497
 -1.2988033   0.31773448  1.534014    0.98858756  1.3450235  -0.9417385
  0.26414695 -0.01988658  0.7418235  -0.04945141 -0.44838902  1.5288658
 -1.1905407   0.13961646 -0.17101136 -0.18599203 -1.9648114   0.66071814
 -0.07431012  1.5870664   1.5989372  -0.21751085  0.78908855 -1.5576671
  0.02266342  0.20999858]
DISTANCE
0.807837575674057
IDENTIFIED
false
-----------------------
from deepface import DeepFace
from deepface.commons import functions

models = ['VGG-Face', 'Facenet', 'OpenFace', 'DeepFace', 'DeepID', 'Dlib']
model = DeepFace.build_model(models[0]) #note: 'Dlib' is not a Keras model, so model.layers would not exist for it

#drop the batch dimension: (None, 224, 224, 3) -> (224, 224)
#some Keras versions wrap this shape in a list, so unwrap it first
input_shape = model.layers[0].input_shape
if isinstance(input_shape, list):
  input_shape = input_shape[0]
target_size = input_shape[1:3]

img1_path = "img1.jpg"
img2_path = "img2.jpg"

#detect and align 
img1 = functions.preprocess_face(img1_path, target_size = target_size)
img2 = functions.preprocess_face(img2_path, target_size = target_size)

#find vector embeddings
img1_embedding = model.predict(img1)
img2_embedding = model.predict(img2)
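From here, the two embeddings can be compared exactly as the modified verify function above does, using the library's distance helpers and tuned thresholds. A minimal sketch, assuming cosine as the metric:

from deepface.commons import functions, distance as dst

#model.predict returns a batch of size 1; take the first row as the embedding vector
distance = dst.findCosineDistance(img1_embedding[0], img2_embedding[0])

#tuned threshold for this model and metric pair
threshold = functions.findThreshold(models[0], 'cosine')

print("distance:", distance)
print("same person:", distance <= threshold)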

How to execute docker health check only one time?

HEALTHCHECK CMD sh -c "if [ ! -f /tmp/health.txt ]; then touch /tmp/health.txt && python api/initRequest.py || exit 0 ; else echo \"initRequest.py already executed\"; fi"

The marker file /tmp/health.txt records that the check has already run: the first invocation creates the file and executes initRequest.py (exiting 0 even on failure, so the container still reports healthy), and every later invocation finds the file and only echoes a message.

Community Discussions

Trending Discussions on deepface
  • Bounding boxes returned without detected face image in dlib python
  • PermissionError: [Errno 13] Permission denied: .deepface
  • OpenCv: change position of putText()
  • Can DeepFace verify() accept an image array or PIL Image object?
  • Tensor Tensor("flatten/Reshape:0", shape=(?, 2622), dtype=float32) is not an element of this graph
  • Cannot set headers after they are sent to client
  • My OpenCV Live Webcam Demo Doesn't Show Accurate Emotions
  • from numba import cuda, numpy_support and ImportError: cannot import name 'numpy_support' from 'numba'
  • Memory leakage issue in python list
  • ValueError: unknown format is not supported : ROC Curve

QUESTION

Bounding boxes returned without detected face image in dlib python

Asked 2022-Jan-18 at 13:43

I'm trying to detect multiple faces in a picture using the deepface library with dlib as the backend detector. I'm using the DlibWrapper.py from the deepface library, and I have the following issue: in some cases, the detector returns the bounding-box coordinates but doesn't return the detected face image.

I was wondering if this bug occurs because of the negative values of some bounding-box coordinates, but I figured out that was not the case, as the negative values are features, not bugs. Here is the DlibWrapper from the deepface library.

ANSWER

Answered 2022-Jan-18 at 13:43

Solved! There are edge cases where the original rectangle lies partially outside the image window; that happens with dlib. So, instead of

  • detected_face = img[top:bottom, left:right],

the detected face should be

  • detected_face = img[max(0, top): min(bottom, img_height), max(0, left): min(right, img_width)]

Source https://stackoverflow.com/questions/70754730
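In context, the fix amounts to clamping the dlib rectangle to the image bounds before slicing. A minimal sketch, where clamp_and_crop is a hypothetical helper and rect stands for the dlib.rectangle returned by the detector:

def clamp_and_crop(img, rect):
  #rect coordinates may lie partially outside the image window
  img_height, img_width = img.shape[0], img.shape[1]
  top, bottom = rect.top(), rect.bottom()
  left, right = rect.left(), rect.right()
  #clamp to the image bounds before slicing, as in the answer above
  return img[max(0, top):min(bottom, img_height), max(0, left):min(right, img_width)]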

Community Discussions, Code Snippets contain sources that include Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install deepface

The easiest way to install deepface is to download it from PyPI; this installs the library itself along with its prerequisites. You can then import the library and use its functionality.

Facial Recognition - Demo. A modern face recognition pipeline consists of four common stages: detect, align, represent and verify. Deepface handles all of these stages in the background, so you can call its verification, find or analysis function with a single line of code, as sketched below.

Face Verification - Demo. This function verifies face pairs as the same person or different persons. It expects exact image paths as inputs; passing numpy arrays or base64-encoded images is also supported.

Face Recognition - Demo. Face recognition requires applying face verification many times, so deepface provides an out-of-the-box find function for this. It looks for the identity of the input image in the database path and returns a pandas data frame as output.

Face Recognition Models - Demo. Deepface is a hybrid face recognition package. It currently wraps many state-of-the-art face recognition models: VGG-Face, Google FaceNet, OpenFace, Facebook DeepFace, DeepID, ArcFace and Dlib. The default configuration uses the VGG-Face model. In experiments, FaceNet, VGG-Face, ArcFace and Dlib outperform OpenFace, DeepFace and DeepID: on the LFW data set, FaceNet with 512d embeddings scored 99.65%; FaceNet with 128d scored 99.2%; ArcFace 99.41%; Dlib 99.38%; VGG-Face 98.78%; DeepID 97.05%; and OpenFace 93.80%, whereas human beings score just 97.53%. Face recognition models are regular convolutional neural networks responsible for representing faces as vectors; we expect a face pair of the same person to be more similar than a face pair of different persons. Similarity can be calculated with different metrics such as cosine similarity, Euclidean distance and L2-normalized Euclidean distance. The default configuration uses cosine similarity, though L2-normalized Euclidean distance appears more stable than cosine and regular Euclidean distance in experiments.

Facial Attribute Analysis - Demo. Deepface also comes with a strong facial attribute analysis module covering age, gender, facial expression (angry, fear, neutral, sad, disgust, happy and surprise) and race (asian, white, middle eastern, indian, latino and black) predictions. The age model achieved ±4.65 MAE; the gender model achieved 97.44% accuracy, 96.29% precision and 95.05% recall, as mentioned in its tutorial.

Streaming and Real Time Analysis - Demo. You can run deepface on real-time video as well. The stream function accesses your webcam and applies both face recognition and facial attribute analysis. It starts analyzing a frame once it can focus on a face for 5 consecutive frames, then shows the results for 5 seconds.
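Taken together, the whole pipeline reduces to a handful of one-liners. A short sketch of the verification, recognition and analysis calls described above, where the image paths and the database folder are placeholders:

from deepface import DeepFace

#face verification: decides whether two images show the same person
result = DeepFace.verify("img1.jpg", "img2.jpg")
print(result["verified"])

#face recognition: searches for the identity of an input image in a database folder
df = DeepFace.find(img_path = "img1.jpg", db_path = "my_db")

#facial attribute analysis: age, gender, emotion and race
analysis = DeepFace.analyze(img_path = "img1.jpg")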

Support

Pull requests are welcome. You should run the unit tests locally by running test/unit_tests.py and share the unit test result logs in the PR. Deepface is currently compatible with both TF 1 and TF 2; change requests should satisfy both versions.
