dlib | making real world machine learning and data analysis | Machine Learning library

by davisking · C++ · Version: 19.24.1 · License: BSL-1.0

kandi X-RAY | dlib Summary

dlib is a C++ library typically used in Artificial Intelligence, Machine Learning, Deep Learning, and OpenCV applications. dlib has no reported bugs or vulnerabilities, has a permissive license, and has medium support. You can download it from GitHub.
Dlib is a modern C++ toolkit containing machine learning algorithms and tools for creating complex software in C++ to solve real world problems. See http://dlib.net for the main project documentation and API reference.

Support

dlib has a medium active ecosystem.
It has 11833 stars and 3192 forks. There are 474 watchers for this library.
There was 1 major release in the last 6 months.
There are 30 open issues and 2085 have been closed. On average, issues are closed in 30 days. There are 5 open pull requests and 0 closed requests.
It has a neutral sentiment in the developer community.
The latest version of dlib is 19.24.1.

Quality

dlib has 0 bugs and 0 code smells.

Security

dlib has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
dlib code analysis shows 0 unresolved vulnerabilities.
There are 0 security hotspots that need review.

License

dlib is licensed under the BSL-1.0 License. This license is Permissive.
Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

dlib releases are available to install and integrate.
Installation instructions are not available. Examples and code snippets are available.

                                                                                  dlib Key Features

                                                                                  A toolkit for making real world machine learning and data analysis applications in C++
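
To give a feel for the toolkit, here is a minimal sketch (not taken from the dlib documentation) that runs dlib's built-in frontal face detector through the Python bindings; the image path is a placeholder:

import dlib

detector = dlib.get_frontal_face_detector()   # HOG-based frontal face detector
img = dlib.load_rgb_image("people.jpg")       # hypothetical input image

# the second argument upsamples the image once so smaller faces are found
faces = detector(img, 1)
for rect in faces:
    print(rect.left(), rect.top(), rect.right(), rect.bottom())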

                                                                                  dlib Examples and Code Snippets

Is it possible to do face recognition with mediapipe in python?
Python · Lines of Code: 2 · License: Strong Copyleft (CC BY-SA 4.0)
                                                                                  locations = face_recognition.face_locations(frame, model="hog")
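
face_locations returns the boxes as (top, right, bottom, left) tuples. As a brief follow-up sketch (not part of the original snippet; the file names and the OpenCV drawing step are assumptions), the boxes can be drawn back onto the frame:

import cv2
import face_recognition

frame = cv2.imread("people.jpg")                     # hypothetical input image (BGR)
rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)         # face_recognition expects RGB
locations = face_recognition.face_locations(rgb, model="hog")
for top, right, bottom, left in locations:           # boxes are (top, right, bottom, left)
    cv2.rectangle(frame, (left, top), (right, bottom), (0, 255, 0), 2)
cv2.imwrite("people_boxes.jpg", frame)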
                                                                                  
Why such different answers for the xgboost scikit-learn interface?
Python · Lines of Code: 8 · License: Strong Copyleft (CC BY-SA 4.0)
                                                                                  cross_val_score(xgbr, X, y, cv=5, scoring = 'neg_root_mean_squared_error')
                                                                                  
                                                                                  from sklearn.model_selection import KFold
                                                                                  
                                                                                  cv_results =  xgb.cv(dtrain=dmatrix, params=params, metrics={'rmse'}, folds = KFold(n_splits=5))
                                                                                  
                                                                                  cv_results =  xgb.cv(dtrain=dmatrix, params=params, metrics={'rmse'}, folds = KFold(n_splits=5), nrounds = 100)
                                                                                  
Mediapipe Crop Images
Python · Lines of Code: 24 · License: Strong Copyleft (CC BY-SA 4.0)
import dlib
import numpy as np                      # needed for np.asarray below
import matplotlib.pyplot as plt         # needed for plt.imshow below
from PIL import Image
from skimage import io

# sample_img is the input image array; data holds MediaPipe's relative bounding box
h, w, c = sample_img.shape
print('width:  ', w)
print('height: ', h)

# convert MediaPipe's normalized coordinates to absolute pixel coordinates
xleft = int(data.xmin * w)
xtop = int(data.ymin * h)
xright = int(data.width * w + xleft)
xbottom = int(data.height * h + xtop)

detected_faces = [(xleft, xtop, xright, xbottom)]

# image_c is the original (RGB) image array to crop from
for n, face_rect in enumerate(detected_faces):
    face = Image.fromarray(image_c).crop(face_rect)
    face_np = np.asarray(face)
    plt.imshow(face_np)
                                                                                  
                                                                                  "ERROR: CMake must be installed to build dlib" when installing face_recognition
                                                                                  Pythondot imgLines of Code : 2dot imgLicense : Strong Copyleft (CC BY-SA 4.0)
                                                                                  copy iconCopy
                                                                                  ERROR: CMake must be installed to build dlib
                                                                                  
Sorting a tensor list in ascending order
Python · Lines of Code: 8 · License: Strong Copyleft (CC BY-SA 4.0)
namestodistance = [('Alice', .1), ('Bob', .3), ('Carrie', .2)]
names_top = sorted(namestodistance, key=lambda x: x[1])
print(names_top[:2])

# if the distances are tensors, call .item() to convert them to plain Python floats first
namestodistance = list(map(lambda x: (x[0], x[1].item()), namestodistance))
names_top = sorted(namestodistance, key=lambda x: x[1])
print(names_top[:2])
                                                                                  
My server is not responding; it hangs and does not return anything
Python · Lines of Code: 2 · License: Strong Copyleft (CC BY-SA 4.0)
                                                                                  CMD python3 manage.py migrate && python3 manage.py runserver 0.0.0.0:8000
                                                                                  
How can I fix Face Recognition operands error?
Python · Lines of Code: 2 · License: Strong Copyleft (CC BY-SA 4.0)
                                                                                  matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
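
compare_faces expects a list of known encodings as its first argument and a single unknown encoding as the second; passing raw images instead of encodings is a common cause of operand errors. A minimal sketch of how those encodings are typically produced (the image file names are placeholders, not part of the original snippet):

import face_recognition

known_image = face_recognition.load_image_file("known.jpg")
unknown_image = face_recognition.load_image_file("unknown.jpg")

known_face_encodings = face_recognition.face_encodings(known_image)   # list of 128-d encodings
face_encoding = face_recognition.face_encodings(unknown_image)[0]     # first face in the unknown image

matches = face_recognition.compare_faces(known_face_encodings, face_encoding)
print(matches)   # one boolean per known encoding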
                                                                                  
TypeError: __init__(): incompatible constructor arguments with face_recognition call
Python · Lines of Code: 3 · License: Strong Copyleft (CC BY-SA 4.0)
for (top, right, bottom, left), name in zip(faceLocations, faceNames):
    # the two corner points must be passed as (x, y) tuples
    cv2.rectangle(img, (int(left) - 20, int(top) - 20), (int(right) + 20, int(bottom) + 20), (255, 0, 0), cv2.FILLED)
                                                                                  
Head Pose Estimation Using Facial Landmarks
Python · Lines of Code: 18 · License: Strong Copyleft (CC BY-SA 4.0)
                                                                                  face3Dmodel = np.array([
                                                                                      (0.0, 0.0, 0.0),            # Nose tip
                                                                                      (0.0, -330.0, -65.0),       # Chin
                                                                                      (-225.0, 170.0, -135.0),    # Left eye left corner
                                                                                      (225.0, 170.0, -135.0),     # Right eye right corner
                                                                                      (-150.0, -150.0, -125.0),   # Left Mouth corner
                                                                                      (150.0, -150.0, -125.0)     # Right mouth corner
                                                                                      ], dtype=np.float64)
                                                                                  
                                                                                              image_points = np.array([
                                                                                                  faceXY[1],      # "nose"
                                                                                                  faceXY[152],    # "chin"
                                                                                                  faceXY[226],    # "left eye"
                                                                                                  faceXY[446],    # "right eye"
                                                                                                  faceXY[57],     # "left mouth"
                                                                                                  faceXY[287]     # "right mouth"
                                                                                              ], dtype="double")
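
These two arrays are the 3D model points and the matching 2D image points for six landmarks; the snippet stops before the pose is actually solved. A sketch of the usual next step, under the assumption of a simple pinhole camera whose focal length is approximated by the frame width (the frame size here is an assumed placeholder):

import cv2
import numpy as np

h, w = 720, 1280                                   # assumed size of the frame the landmarks came from
focal_length = w                                   # rough pinhole approximation
camera_matrix = np.array([[focal_length, 0, w / 2],
                          [0, focal_length, h / 2],
                          [0, 0, 1]], dtype=np.float64)
dist_coeffs = np.zeros((4, 1))                     # assume no lens distortion

# face3Dmodel and image_points are the arrays built in the snippet above
success, rotation_vec, translation_vec = cv2.solvePnP(
    face3Dmodel, image_points, camera_matrix, dist_coeffs)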
                                                                                  
OpenCV(4.1.0) error: (-215:Assertion failed) y0 - 6 * scale >= 0 && y0 + 6 * scale < Lx.rows
Python · Lines of Code: 2 · License: Strong Copyleft (CC BY-SA 4.0)
                                                                                  akaze = cv2.AKAZE_create()
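
The snippet swaps the detector for AKAZE. A short usage sketch showing how the detector is typically applied (the grayscale input image is an assumption, not part of the original snippet):

import cv2

gray = cv2.imread("scene.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical input image
akaze = cv2.AKAZE_create()
keypoints, descriptors = akaze.detectAndCompute(gray, None)
print(len(keypoints), "keypoints detected")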
                                                                                  
                                                                                  Community Discussions

                                                                                  Trending Discussions on dlib

Cannot install face_recognition using pip install face_recognition on Win10
how to extract self-defined ROI with dlib facelandmarks?
Why such different answers for the xgboost scikit-learn interface?
Recreating Global 3D Points, from Local 3D Points and Global 2D Points; SolvePnP
"ERROR: CMake must be installed to build dlib" when installing face_recognition
dlib import failure on M1 pro
Bounding boxes returned without detected face image in dlib python
Overloading member operator,?
Sorting a tensor list in ascending order
How to shrink/expand facial features using Opencv?

                                                                                  QUESTION

                                                                                  Cannot install face_recognition using pip install face_recognition on Win10
                                                                                  Asked 2022-Apr-04 at 09:48

                                                                                  I get the error

                                                                                  Running setup.py install for dlib ... error error: subprocess-exited-with-error

× Running setup.py install for dlib did not run successfully.
│ exit code: 1
╰─> [58 lines of output]
    running install
    running build
    running build_py
    package init file 'tools\python\dlib\__init__.py' not found (or not a regular file)
    running build_ext
    Building extension for Python 3.10.4 (tags/v3.10.4:9d38120, Mar 23 2022, 23:13:41) [MSC v.1929 64 bit (AMD64)]
    Invoking CMake setup: 'cmake C:\Users\amade\AppData\Local\Temp\pip-install-_k5e982w\dlib_237006073dfd4b13993bf60b7ecb3629\tools\python -DCMAKE_LIBRARY_OUTPUT_DIRECTORY=C:\Users\amade\AppData\Local\Temp\pip-install-_k5e982w\dlib_237006073dfd4b13993bf60b7ecb3629\build\lib.win-amd64-3.10 -DPYTHON_EXECUTABLE=C:\Users\amade\AppData\Local\Microsoft\WindowsApps\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\python.exe -DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\Users\amade\AppData\Local\Temp\pip-install-_k5e982w\dlib_237006073dfd4b13993bf60b7ecb3629\build\lib.win-amd64-3.10 -A x64'
    -- Building for: Visual Studio 17 2022
    -- Selecting Windows SDK version to target Windows 10.0.19044.
    -- The C compiler identification is unknown
    -- The CXX compiler identification is unknown
    CMake Error at CMakeLists.txt:14 (project):
      No CMAKE_C_COMPILER could be found.

                                                                                    CMake Error at CMakeLists.txt:14 (project):
                                                                                      No CMAKE_CXX_COMPILER could be found.
                                                                                  
                                                                                  
                                                                                  
                                                                                    -- Configuring incomplete, errors occurred!
                                                                                    See also "C:/Users/amade/AppData/Local/Temp/pip-install-_k5e982w/dlib_237006073dfd4b13993bf60b7ecb3629/build/temp.win-amd64-3.10/Release/CMakeFiles/CMakeOutput.log".
                                                                                    See also "C:/Users/amade/AppData/Local/Temp/pip-install-_k5e982w/dlib_237006073dfd4b13993bf60b7ecb3629/build/temp.win-amd64-3.10/Release/CMakeFiles/CMakeError.log".
                                                                                    Traceback (most recent call last):
                                                                                      File "", line 2, in 
                                                                                      File "", line 34, in 
                                                                                      File "C:\Users\amade\AppData\Local\Temp\pip-install-_k5e982w\dlib_237006073dfd4b13993bf60b7ecb3629\setup.py", line 222, in 
                                                                                        setup(
                                                                                      File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1264.0_x64__qbz5n2kfra8p0\lib\site-packages\setuptools\__init__.py", line 153, in setup
                                                                                        return distutils.core.setup(**attrs)
                                                                                      File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1264.0_x64__qbz5n2kfra8p0\lib\distutils\core.py", line 148, in setup
                                                                                        dist.run_commands()
                                                                                      File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1264.0_x64__qbz5n2kfra8p0\lib\distutils\dist.py", line 966, in run_commands
                                                                                        self.run_command(cmd)
                                                                                      File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1264.0_x64__qbz5n2kfra8p0\lib\distutils\dist.py", line 985, in run_command
                                                                                        cmd_obj.run()
                                                                                      File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1264.0_x64__qbz5n2kfra8p0\lib\site-packages\setuptools\command\install.py", line 61, in run
                                                                                        return orig.install.run(self)
                                                                                      File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1264.0_x64__qbz5n2kfra8p0\lib\distutils\command\install.py", line 568, in run
                                                                                        self.run_command('build')
                                                                                      File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1264.0_x64__qbz5n2kfra8p0\lib\distutils\cmd.py", line 313, in run_command
                                                                                        self.distribution.run_command(command)
                                                                                      File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1264.0_x64__qbz5n2kfra8p0\lib\distutils\dist.py", line 985, in run_command
                                                                                        cmd_obj.run()
                                                                                      File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1264.0_x64__qbz5n2kfra8p0\lib\distutils\command\build.py", line 135, in run
                                                                                        self.run_command(cmd_name)
                                                                                      File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1264.0_x64__qbz5n2kfra8p0\lib\distutils\cmd.py", line 313, in run_command
                                                                                        self.distribution.run_command(command)
                                                                                      File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1264.0_x64__qbz5n2kfra8p0\lib\distutils\dist.py", line 985, in run_command
                                                                                        cmd_obj.run()
                                                                                      File "C:\Users\amade\AppData\Local\Temp\pip-install-_k5e982w\dlib_237006073dfd4b13993bf60b7ecb3629\setup.py", line 134, in run
                                                                                        self.build_extension(ext)
                                                                                      File "C:\Users\amade\AppData\Local\Temp\pip-install-_k5e982w\dlib_237006073dfd4b13993bf60b7ecb3629\setup.py", line 171, in build_extension
                                                                                        subprocess.check_call(cmake_setup, cwd=build_folder)
                                                                                      File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.1264.0_x64__qbz5n2kfra8p0\lib\subprocess.py", line 369, in check_call
                                                                                        raise CalledProcessError(retcode, cmd)
                                                                                    subprocess.CalledProcessError: Command '['cmake', 'C:\\Users\\amade\\AppData\\Local\\Temp\\pip-install-_k5e982w\\dlib_237006073dfd4b13993bf60b7ecb3629\\tools\\python', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY=C:\\Users\\amade\\AppData\\Local\\Temp\\pip-install-_k5e982w\\dlib_237006073dfd4b13993bf60b7ecb3629\\build\\lib.win-amd64-3.10', '-DPYTHON_EXECUTABLE=C:\\Users\\amade\\AppData\\Local\\Microsoft\\WindowsApps\\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\\python.exe', '-DCMAKE_LIBRARY_OUTPUT_DIRECTORY_RELEASE=C:\\Users\\amade\\AppData\\Local\\Temp\\pip-install-_k5e982w\\dlib_237006073dfd4b13993bf60b7ecb3629\\build\\lib.win-amd64-3.10', '-A', 'x64']' returned non-zero exit status 1.
                                                                                    [end of output]
                                                                                  

note: This error originates from a subprocess, and is likely not a problem with pip.
error: legacy-install-failure

× Encountered error while trying to install package.
╰─> dlib

note: This is an issue with the package mentioned above, not pip.
hint: See above for output from the failure.

                                                                                  ANSWER

                                                                                  Answered 2022-Apr-03 at 09:50

Follow the steps below to install the face_recognition Python package on Windows 10.

The instructions have been tested on Windows 10 64-bit with Python 3.9.

                                                                                  Step 1

Download the CMake installation package for your OS from the official site.

                                                                                  Step 2

Install the downloaded CMake package. Make sure the "Add CMake to the system PATH" option is selected during installation.

                                                                                  Step 3

Reboot your OS (restart the computer).

                                                                                  Step 4

Run pip install dlib. It takes several minutes, so be prepared to wait.

                                                                                  Step 5

                                                                                  If it completes without any errors, you're all set. Run pip install face_recognition to install face_recognition.

                                                                                  Result
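
As a quick sanity check (a minimal sketch, not part of the original answer), both packages should now import cleanly:

import dlib
import face_recognition

print(dlib.__version__)       # version string of the freshly built wheel
print(dlib.DLIB_USE_CUDA)     # False for a default CPU-only build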

                                                                                  Source https://stackoverflow.com/questions/71717231

                                                                                  QUESTION

                                                                                  how to extract self-defined ROI with dlib facelandmarks?
                                                                                  Asked 2022-Mar-29 at 10:31

I don't know how to extract the irregular area surrounded by the green lines, i.e. the left cheek and the right cheek of a face.

                                                                                  from collections import OrderedDict
                                                                                  import numpy as np
                                                                                  import cv2
                                                                                  import dlib
                                                                                  import imutils
                                                                                  
                                                                                  CHEEK_IDXS = OrderedDict([("left_cheek", (1, 2, 3, 4, 5, 48, 31)),
                                                                                                            ("right_cheek", (11, 12, 13, 14, 15, 35, 54))
                                                                                                            ])
                                                                                  
                                                                                  detector = dlib.get_frontal_face_detector()
                                                                                  predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
                                                                                  
                                                                                  img = cv2.imread('Tom_Cruise.jpg')
                                                                                  img = imutils.resize(img, width=600)
                                                                                  
                                                                                  overlay = img.copy()
                                                                                  gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
                                                                                  
                                                                                  detections = detector(gray, 0)
                                                                                  for k, d in enumerate(detections):
                                                                                      shape = predictor(gray, d)
                                                                                      for (_, name) in enumerate(CHEEK_IDXS.keys()):
                                                                                          pts = np.zeros((len(CHEEK_IDXS[name]), 2), np.int32)
                                                                                          for i, j in enumerate(CHEEK_IDXS[name]):
                                                                                              pts[i] = [shape.part(j).x, shape.part(j).y]
                                                                                  
                                                                                          pts = pts.reshape((-1, 1, 2))
                                                                                          cv2.polylines(overlay, [pts], True, (0, 255, 0), thickness=2)
                                                                                  
                                                                                      cv2.imshow("Image", overlay)
                                                                                  
                                                                                      cv2.waitKey(0)
                                                                                      if cv2.waitKey(1) & 0xFF == ord('q'):
                                                                                          break
                                                                                  cv2.destroyAllWindows()
                                                                                  

I know that if I just extract a rectangular area from the face as the cheeks, the code can be like this:

                                                                                  ROI1 = img[shape[29][1]:shape[33][1], shape[54][0]:shape[12][0]] #right cheeks
                                                                                  ROI1 = img[shape[29][1]:shape[33][1], shape[4][0]:shape[48][0]] #left cheek
                                                                                  

But I want to extract the irregular area for subsequent processing. How can I do it?

                                                                                  ANSWER

                                                                                  Answered 2022-Mar-29 at 10:31

You can accomplish this in two simple steps:

1. Create a mask using the point coordinates you have
2. Execute a bitwise_and operation (crop)

                                                                                  Code:

mask = np.zeros(img.shape[:2], dtype=np.uint8)   # blank single-channel mask, same size as the image
cv2.drawContours(mask, [pts], -1, (255, 255, 255), -1, cv2.LINE_AA)
output = cv2.bitwise_and(img, img, mask=mask)
                                                                                  

                                                                                  Output:

Additionally, if you want to focus on the cropped polygons, you can create a bounding rectangle for each polygon and then crop it from the output frame like this:

# Create a bounding rects list at global level
bounding_rects = []

# Calculate bounding rects for each pts array inside the for loop
bounding_rects.append(cv2.boundingRect(pts))

# Assign geometrical values to variables to crop
# (use a range(len(bounding_rects)) loop if there are more than two polygons)
x1, y1, w1, h1 = bounding_rects[0]
x2, y2, w2, h2 = bounding_rects[1]

# At the end of the program, crop the bounding boxes from output
cropped1 = output[y1:y1 + h1, x1:x1 + w1]
cropped2 = output[y2:y2 + h2, x2:x2 + w2]
                                                                                  

                                                                                  Output:

                                                                                  Source https://stackoverflow.com/questions/71659570

                                                                                  QUESTION

                                                                                  Why such different answers for the xgboost scikit-learn interface?
                                                                                  Asked 2022-Mar-21 at 22:59

                                                                                  I am using xgboost for the first time and trying the two different interfaces. First I get the data:

                                                                                  import xgboost as xgb
                                                                                  import dlib
                                                                                  import pandas as pd
                                                                                  import numpy as np
                                                                                  from sklearn.model_selection import cross_val_score
                                                                                  data_url = "http://lib.stat.cmu.edu/datasets/boston"
raw_df = pd.read_csv(data_url, sep=r"\s+", skiprows=22, header=None)
                                                                                  X = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
                                                                                  y = raw_df.values[1::2, 2]
                                                                                  dmatrix = xgb.DMatrix(data=X, label=y)
                                                                                  

                                                                                  Now the scikit-learn interface:

                                                                                  xgbr = xgb.XGBRegressor(objective='reg:squarederror', seed=20)
                                                                                  print(cross_val_score(xgbr, X, y, cv=5))
                                                                                  

                                                                                  This outputs:

                                                                                  [0.73438184 0.84902986 0.82579692 0.52374618 0.29743001]
                                                                                  

                                                                                  Now the xgboost native interface:

                                                                                  dmatrix = xgb.DMatrix(data=X, label=y)
                                                                                  params={'objective':'reg:squarederror'}
                                                                                  cv_results =  xgb.cv(dtrain=dmatrix, params=params, nfold=5, metrics={'rmse'},  seed=20)
                                                                                  print('RMSE: %.2f' % cv_results['test-rmse-mean'].min())
                                                                                  

                                                                                  This gives 3.50.

                                                                                  Why are the outputs so different? What am I doing wrong?

                                                                                  ANSWER

                                                                                  Answered 2022-Mar-21 at 22:59

                                                                                  First of all, you didn't specify the metric in cross_val_score, therefore you are not calculating RMSE, but rather the estimator's default metric, which is usually just its loss function. You need to specify it for comparable results:

                                                                                  cross_val_score(xgbr, X, y, cv=5, scoring = 'neg_root_mean_squared_error')
                                                                                  

                                                                                  Second, you need to match sklearn's CV procedure exactly. For that, you can pass folds argument to XGBoost's cv method:

                                                                                  from sklearn.model_selection import KFold
                                                                                  
                                                                                  cv_results =  xgb.cv(dtrain=dmatrix, params=params, metrics={'rmse'}, folds = KFold(n_splits=5))
                                                                                  

                                                                                  Finally, you need to ensure that XGBoost's cv procedure actually converges. For some reason it only does 10 boosting rounds by default, which is too low to converge on your dataset. This is done via nrounds argument (num_boost_round if you're on an older version), I found that 100 rounds work just fine on this dataset:

                                                                                  cv_results =  xgb.cv(dtrain=dmatrix, params=params, metrics={'rmse'}, folds = KFold(n_splits=5), nrounds = 100)
                                                                                  

                                                                                  Now you will get matching results.
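
To compare the two interfaces side by side, negate the sklearn scores (neg_root_mean_squared_error returns negative RMSE) and average them. A sketch that reuses the objects defined in the snippets above (xgbr, X, y, dmatrix, params, KFold); note that in the Python API the boosting-round argument is spelled num_boost_round:

scores = cross_val_score(xgbr, X, y, cv=5, scoring='neg_root_mean_squared_error')
print('sklearn CV RMSE: %.2f' % (-scores).mean())

cv_results = xgb.cv(dtrain=dmatrix, params=params, metrics={'rmse'},
                    folds=KFold(n_splits=5), num_boost_round=100)
print('native CV RMSE: %.2f' % cv_results['test-rmse-mean'].min())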

                                                                                  On a side note, it's interesting how you say it's your first time using XGBoost, but you actually have a question on XGBoost dating back to 2017.

                                                                                  Source https://stackoverflow.com/questions/71562186

                                                                                  QUESTION

                                                                                  Recreating Global 3D Points, from Local 3D Points and Global 2D Points; SolvePnP
                                                                                  Asked 2022-Mar-11 at 08:38

Aloha, I have a list of 2D keypoints located in the global scope/frame (image points), and a list of corresponding 3D keypoints in the local scope (often called texture or object points). The image points range over x[0-1920], y[0,1080] and the object points are within the range of x[-1,1], y[-1,1]. I have followed the approach described in this paper on page 6 with the tutorial from here, but the output of my 3D points is not correct at all; the movement of the points is all over the place. Below is my approach using SolvePnP. Am I on the wrong track here, since SolvePnP is normally used for detecting the camera movement (open for other suggestions!), or is my method wrong?

                                                                                  import numpy as np
                                                                                  import cv2
                                                                                  array = np.array # convenience
                                                                                  
                                                                                  frame1_2d = \
                                                                                  array([[1033.9708251953125 ,  344.23065185546875],
                                                                                         [1077.796630859375  ,  617.1146240234375 ],
                                                                                         [ 958.2716674804688 ,  609.1179809570312 ],
                                                                                         [1074.8084716796875 ,  782.0444946289062 ],
                                                                                         [ 975.2044067382812 ,  418.1991882324219 ],
                                                                                         [1024.0103759765625 ,  931.980712890625  ],
                                                                                         [1122.6185302734375 ,  605.1196899414062 ],
                                                                                         [1096.721435546875  ,  418.1991882324219 ],
                                                                                         [ 999.109375        ,  617.1146240234375 ],
                                                                                         [ 962.255859375     ,  518.1566772460938 ],
                                                                                         [1111.662109375     ,  517.1571044921875 ],
                                                                                         [1014.0499877929688 ,  782.0444946289062 ],
                                                                                         [1061.8599853515625 ,  930.9811401367188 ]])
                                                                                  frame1_3d = \
                                                                                  array([[-0.01265097688883543   , -0.4992150068283081    , -0.11455678939819336   ],
                                                                                         [ 0.10584918409585953   , -0.0018199272453784943 ,  0.0023642126470804214 ],
                                                                                         [-0.14271944761276245   ,  0.06332945823669434   ,  0.1438678503036499    ],
                                                                                         [ 0.09254898130893707   ,  0.3176574409008026    , -0.17930322885513306   ],
                                                                                         [-0.1155640035867691    , -0.4058316648006439    ,  0.00021289288997650146],
                                                                                         [-0.03301446512341499   ,  0.6519031524658203    , -0.3515356183052063    ],
                                                                                         [ 0.14540529251098633   ,  0.05645819008350372   ,  0.10776595026254654   ],
                                                                                         [ 0.10836226493120193   , -0.4078497290611267    ,  0.000870194286108017  ],
                                                                                         [-0.10584865510463715   ,  0.001818838994950056  , -0.0023612845689058304 ],
                                                                                         [-0.1546039581298828    , -0.17418316006660461   ,  0.10266228020191193   ],
                                                                                         [ 0.1590884029865265    , -0.17913128435611725   ,  0.09423552453517914   ],
                                                                                         [-0.0736076831817627    ,  0.3179360628128052    , -0.17892584204673767   ],
                                                                                         [ 0.05236409604549408   ,  0.6490492820739746    , -0.33908188343048096   ]])
                                                                                  
                                                                                  frame2_2d = \
                                                                                  array([[1028.110107421875  ,  327.7352600097656 ],
                                                                                         [1068.0904541015625 ,  606.7128295898438 ],
                                                                                         [ 982.1328125       ,  229.74314880371094],
                                                                                         [1071.0889892578125 ,  778.698974609375  ],
                                                                                         [ 979.13427734375   ,  403.7291564941406 ],
                                                                                         [1013.1174926757812 ,  933.6865234375    ],
                                                                                         [1069.0899658203125 ,  243.7420196533203 ],
                                                                                         [1080.08447265625   ,  403.7291564941406 ],
                                                                                         [ 997.1254272460938 ,  616.7119750976562 ],
                                                                                         [ 983.13232421875   ,  312.7364501953125 ],
                                                                                         [1071.0889892578125 ,  317.7360534667969 ],
                                                                                         [1005.1214599609375 ,  778.698974609375  ],
                                                                                         [1061.0938720703125 ,  936.686279296875  ]])
                                                                                  
                                                                                  frame2_3d = \
                                                                                  array([[-0.0004756036214530468, -0.5245562791824341   , -0.010652128607034683 ],
                                                                                         [ 0.10553547739982605  , -0.00272204983048141  ,  0.0024587283842265606],
                                                                                         [-0.1196068525314331   , -0.6828885078430176   , -0.14210689067840576  ],
                                                                                         [ 0.0845363438129425   ,  0.38039350509643555  , -0.028144780546426773 ],
                                                                                         [-0.11286421865224838  , -0.4302292466163635   ,  0.06919233500957489  ],
                                                                                         [-0.030065223574638367 ,  0.754790186882019    ,  0.012936152517795563 ],
                                                                                         [ 0.1010960042476654   , -0.6289429664611816   , -0.11814753711223602  ],
                                                                                         [ 0.1058841198682785   , -0.4253752827644348   ,  0.08086629956960678  ],
                                                                                         [-0.10553570091724396  ,  0.002716599963605404 , -0.0024500866420567036],
                                                                                         [-0.127223938703537    , -0.5319695472717285   , -0.09722068160772324  ],
                                                                                         [ 0.11508879065513611  , -0.49151480197906494  , -0.07002018392086029  ],
                                                                                         [-0.06679684668779373  ,  0.38714516162872314  , -0.023669833317399025 ],
                                                                                         [ 0.05081187188625336  ,  0.7544023990631104   , -0.011078894138336182 ]])
                                                                                  
                                                                                  frame3_2d = \
                                                                                  array([[1027.91845703125   ,  338.2441711425781 ],
                                                                                         [1067.8787841796875 ,  612.0115356445312 ],
                                                                                         [ 803.141357421875  ,  500.10662841796875],
                                                                                         [1070.8758544921875 ,  776.8713989257812 ],
                                                                                         [ 968.9768676757812 ,  413.18048095703125],
                                                                                         [1012.9332885742188 ,  925.7449340820312 ],
                                                                                         [1248.699462890625  ,  491.1142578125    ],
                                                                                         [1089.8570556640625 ,  412.18133544921875],
                                                                                         [ 995.9501342773438 ,  611.0123901367188 ],
                                                                                         [ 871.073974609375  ,  461.1397399902344 ],
                                                                                         [1181.765869140625  ,  454.14569091796875],
                                                                                         [1003.9421997070312 ,  775.8722534179688 ],
                                                                                         [1061.884765625     ,  933.7380981445312 ]])
                                                                                  
                                                                                  frame3_3d = \
                                                                                  array([[-0.003511453978717327  , -0.5015891194343567    , -0.10520103573799133   ],
                                                                                         [ 0.10480749607086182   , -0.00019206921570003033, -0.0004397481679916382 ],
                                                                                         [-0.47764456272125244   , -0.1816674768924713    ,  0.04093759506940842   ],
                                                                                         [ 0.0936243087053299    ,  0.3628539443016052    , -0.09391097724437714   ],
                                                                                         [-0.11445926129817963   , -0.41107428073883057   ,  0.01644478738307953   ],
                                                                                         [-0.03567686676979065   ,  0.720417320728302     , -0.10493464022874832   ],
                                                                                         [ 0.4529808759689331    , -0.18383921682834625   , -0.02210136130452156   ],
                                                                                         [ 0.1092790886759758    , -0.41095152497291565   ,  0.011709243059158325  ],
                                                                                         [-0.10480757057666779   ,  0.00018716813065111637,  0.0004445519298315048 ],
                                                                                         [-0.3031604290008545    , -0.2810187041759491    ,  0.07747684419155121   ],
                                                                                         [ 0.3006024956703186    , -0.28319910168647766   ,  0.043038371950387955  ],
                                                                                         [-0.07087739557027817   ,  0.35837966203689575   , -0.08430898934602737   ],
                                                                                         [ 0.062416717410087585  ,  0.7248380780220032    , -0.13536334037780762   ]])
                                                                                  
                                                                                  #frame1_2d = np.asarray(frame1_2d, dtype=float)
                                                                                  #frame1_3d = np.asarray(frame1_3d, dtype=float)
                                                                                  #frame2_2d = np.asarray(frame2_2d, dtype=float)
                                                                                  #frame2_3d = np.asarray(frame2_3d, dtype=float)
                                                                                  #frame3_2d = np.asarray(frame3_2d, dtype=float)
                                                                                  #frame3_3d = np.asarray(frame3_3d, dtype=float)
                                                                                  
                                                                                  # Globalize 3D Points
dist_coeffs = np.asarray([0.11480806073904032, -0.21946985653851792, 0.0012002116999769957, 0.008564577708855225, 0.11274677130853494])
                                                                                  camera_matrix = np.asarray([
                                                                                      [1394.6027293299926, 0.0, 995.588675691456],
                                                                                      [0.0, 1394.6027293299926, 599.3212928484164],
                                                                                      [0.0, 0.0, 1]
                                                                                  ])
                                                                                  
                                                                                  
# estimate the camera pose for frame 3 from its 2D-3D correspondences
(success, rotation_vector, translation_vector) = cv2.solvePnP(
    frame3_3d, frame3_2d, camera_matrix, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)

# build a 4x4 homogeneous transform from rvec/tvec
rotation_matrix = np.zeros((4, 4))
rotation_matrix[:3, :3], _ = cv2.Rodrigues(rotation_vector)
rotation_matrix[:3, 3] = translation_vector.ravel()
rotation_matrix[3, 3] = 1

# apply the transform to the 13 landmark points (extended to homogeneous coordinates)
globalized_3d = np.c_[frame1_3d, np.ones((13, 1))]
for j in range(13):
    globalized_3d[j, :] = np.dot(rotation_matrix, globalized_3d[j, :])
print(globalized_3d)
                                                                                  

                                                                                  Thanks in advance, appreciate any help!

Edit: Included some examples in my code after improving the things suggested by the top answer.

Edit 2: Using flags=1 (i.e. cv2.SOLVEPNP_EPNP) significantly improved the performance and reduced a lot of the jitter!

                                                                                  ANSWER

                                                                                  Answered 2022-Mar-09 at 13:13
                                                                                  1. yes, solvePnP is okay to use
                                                                                  2. yes, your math is wrong

                                                                                  I'll assume that you get your points from a face landmark detector, so they have a fixed order. I'll also assume that your 3D model points are given in the same order and their values are consistent and somewhat similar to the face you look at. You should exclude points that denote flesh and mandible (as opposed to skull bone). You actually want to track the skull, not the position of lips and jaws that move all over the place.

rvec is an axis-angle encoding. Its length is the amount of rotation (between 0 and pi ≈ 3.14) and its direction is the axis of rotation.

                                                                                  Use cv.Rodrigues to turn the rvec into a 3x3 rotation matrix.

In fact, just build yourself some functions that take rvec and tvec and build a 4x4 matrix. Extending all points to be (x, y, z, 1) is a hassle, but you only have to do it once.

                                                                                  And make sure you use @ for matrix multiplication (or np.dot, np.matmul, ...) because * is element-wise multiplication.
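A minimal sketch of such helpers (my own illustration, not code from the original answer; the names pose_matrix and transform_points are placeholders):

import cv2
import numpy as np

def pose_matrix(rvec, tvec):
    # build a 4x4 homogeneous transform from solvePnP's rvec/tvec
    T = np.eye(4)
    T[:3, :3], _ = cv2.Rodrigues(rvec)     # axis-angle -> 3x3 rotation
    T[:3, 3] = np.asarray(tvec).ravel()    # translation column
    return T

def transform_points(T, pts3d):
    # apply the transform to an (N, 3) array of points
    homog = np.c_[pts3d, np.ones(len(pts3d))]   # extend to (x, y, z, 1)
    return (T @ homog.T).T[:, :3]

With these, globalizing the frame-1 landmarks with the pose estimated from frame 3 becomes transform_points(pose_matrix(rotation_vector, translation_vector), frame1_3d).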

                                                                                  Source https://stackoverflow.com/questions/71408443

                                                                                  QUESTION

                                                                                  "ERROR: CMake must be installed to build dlib" when installing face_recognition
                                                                                  Asked 2022-Mar-01 at 22:50

                                                                                  I'm facing this error while installing face_recognition in a virtualenv with Python 3.8.10 on Ubuntu 20.04.

                                                                                  ERROR: Failed building wheel for face-recognition-models
                                                                                    Running setup.py clean for face-recognition-models
                                                                                  Failed to build dlib face-recognition-models
                                                                                  Installing collected packages: Click, numpy, dlib, face-recognition-models, face-recognition
                                                                                      Running setup.py install for dlib ... error
                                                                                      ERROR: Command errored out with exit status 1:
                                                                                       command: '/home/badrelden/Desktop/test python sound/venv/bin/python3' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-tai2snq9/dlib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-tai2snq9/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-bhy0sde6/install-record.txt --single-version-externally-managed --compile --install-headers '/home/badrelden/Desktop/test python sound/venv/include/site/python3.8/dlib'
                                                                                           cwd: /tmp/pip-install-tai2snq9/dlib/
                                                                                      Complete output (8 lines):
                                                                                      running install
                                                                                      running build
                                                                                      running build_py
                                                                                      package init file 'tools/python/dlib/__init__.py' not found (or not a regular file)
                                                                                      running build_ext
                                                                                      
                                                                                      ERROR: CMake must be installed to build dlib
                                                                                      
                                                                                      ----------------------------------------
                                                                                  ERROR: Command errored out with exit status 1: '/home/badrelden/Desktop/test python sound/venv/bin/python3' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/tmp/pip-install-tai2snq9/dlib/setup.py'"'"'; __file__='"'"'/tmp/pip-install-tai2snq9/dlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(__file__);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, __file__, '"'"'exec'"'"'))' install --record /tmp/pip-record-bhy0sde6/install-record.txt --single-version-externally-managed --compile --install-headers '/home/badrelden/Desktop/test python sound/venv/include/site/python3.8/dlib' Check the logs for full command output.
                                                                                  

                                                                                  ANSWER

                                                                                  Answered 2022-Mar-01 at 22:49
                                                                                  ERROR: CMake must be installed to build dlib
                                                                                  

                                                                                  This is the key part of the error message. You need to install cmake, which can be done by running sudo apt install cmake on Debian-based systems, including Ubuntu. After cmake is installed, you can rerun the pip install command.
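For reference, the full sequence would look roughly like this (a sketch assuming Ubuntu and the same virtualenv; the second line is whatever pip install command failed before):

sudo apt install cmake
pip install face_recognition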

                                                                                  Source https://stackoverflow.com/questions/71315443

                                                                                  QUESTION

                                                                                  dlib import failure on M1 pro
                                                                                  Asked 2022-Jan-26 at 15:04

After installing dlib successfully on my M1 Pro Mac (Monterey 12.1), whenever I try to import dlib I receive the following error:

                                                                                       from _dlib_pybind11 import *
                                                                                    File "/Applications/PyCharm.app/Contents/plugins/python/helpers/pydev/_pydev_bundle/pydev_import_hook.py", line 21, in do_import
                                                                                  module = self._system_import(name, *args, **kwargs)
                                                                                  ImportError: 
                                                                                  dlopen(path/to/venv/lib/python3.10/site-packages/_dlib_pybind11.cpython-310-darwin.so, 0x0002): tried: '/path/to/venv/lib/python3.10/site-packages/_dlib_pybind11.cpython-310-darwin.so' (mach-o file, but is an incompatible architecture (have 'x86_64', need 'arm64e')), '/usr/lib/_dlib_pybind11.cpython-310-darwin.so' (no such file)
                                                                                  

                                                                                  ANSWER

                                                                                  Answered 2022-Jan-26 at 15:04

Installing dlib==19.23.0 solves this issue.
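In practice that presumably means reinstalling the pinned version inside the affected virtualenv so the extension is rebuilt for the right architecture, for example:

pip install --force-reinstall dlib==19.23.0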

                                                                                  Source https://stackoverflow.com/questions/70771196

                                                                                  QUESTION

                                                                                  Bounding boxes returned without detected face image in dlib python
                                                                                  Asked 2022-Jan-18 at 13:43

I'm trying to detect multiple faces in a picture using the deepface library with dlib as the backend detector. I'm using the DlibWrapper.py from the deepface library and I have the following issue: in some cases, the detector returns the bounding box coordinates but doesn't return the detected face image.

I was wondering if this bug occurs because of the negative values of some coordinates of the bounding boxes, but I figured out that was not the case, as the negative values are features, not bugs. Here is the DlibWrapper from the deepface library.

                                                                                  ANSWER

                                                                                  Answered 2022-Jan-18 at 13:43

Solved! There are edge cases where the original rectangle is partially outside the image window. That happens with dlib. So, instead of

                                                                                  • detected_face = img[top:bottom, left:right],

                                                                                  the detected face should be

                                                                                  • detected_face = img[max(0, top): min(bottom, img_height), max(0, left): min(right, img_width)]
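Putting that together, a minimal sketch of the clamped crop (my own illustration, not the actual DlibWrapper code; img and the box coordinates are assumed to come from the detector):

def crop_detected_face(img, left, top, right, bottom):
    # img is assumed to be a NumPy image of shape (H, W, 3)
    img_height, img_width = img.shape[:2]
    # clamp the rectangle to the image window before slicing, so detections
    # that fall partially outside the frame still return a non-empty crop
    return img[max(0, top):min(bottom, img_height),
               max(0, left):min(right, img_width)]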

                                                                                  Source https://stackoverflow.com/questions/70754730

                                                                                  QUESTION

                                                                                  Overloading member operator,?
                                                                                  Asked 2021-Dec-26 at 19:01
#include <iostream>
#include <vector>
                                                                                  
                                                                                  struct Matrix;
                                                                                  
                                                                                  struct literal_assignment_helper
                                                                                  {
                                                                                      mutable int r;
                                                                                      mutable int c;
                                                                                      Matrix& matrix;
                                                                                  
                                                                                      explicit literal_assignment_helper(Matrix& matrix)
                                                                                              : matrix(matrix), r(0), c(1) {}
                                                                                  
                                                                                      const literal_assignment_helper& operator,(int number) const;
                                                                                  };
                                                                                  
                                                                                  struct Matrix
                                                                                  {
                                                                                      int rows;
                                                                                      int columns;
    std::vector<int> numbers;
                                                                                  
                                                                                      Matrix(int rows, int columns)
                                                                                          : rows(rows), columns(columns), numbers(rows * columns) {}
                                                                                  
                                                                                      literal_assignment_helper operator=(int number)
                                                                                      {
                                                                                          numbers[0] = number;
                                                                                          return literal_assignment_helper(*this);
                                                                                      }
                                                                                  
                                                                                      int* operator[](int row) { return &numbers[row * columns]; }
                                                                                  };
                                                                                  
                                                                                  const literal_assignment_helper& literal_assignment_helper::operator,(int number) const
                                                                                  {
                                                                                      matrix[r][c] = number;
                                                                                  
                                                                                      c++;
                                                                                      if (c == matrix.columns)
                                                                                          r++, c = 0;
                                                                                  
                                                                                      return *this;
                                                                                  };
                                                                                  
                                                                                  
                                                                                  int main()
                                                                                  {
                                                                                      int rows = 3, columns = 3;
                                                                                  
                                                                                      Matrix m(rows, columns);
                                                                                      m = 1, 2, 3,
                                                                                          4, 5, 6,
                                                                                          7, 8, 9;
                                                                                  
                                                                                      for (int i = 0; i < rows; i++)
                                                                                      {
                                                                                          for (int j = 0; j < columns; j++)
                                                                                              std::cout << m[i][j] << ' ';
                                                                                          std::cout << std::endl;
                                                                                      }
                                                                                  }
                                                                                  

                                                                                  This code is inspired by the matrix class in the DLib library.

                                                                                  This code allows for assigning literal values separated by commas like this:

                                                                                  Matrix m(rows, columns);
                                                                                  m = 1, 2, 3,
                                                                                      4, 5, 6,
                                                                                      7, 8, 9;
                                                                                  

                                                                                  Note that you can't do something like this:

                                                                                  Matrix m = 1, 2, 3, ...
                                                                                  

                                                                                  This is because the constructor can't return a reference to another object, unlike the operator=.

Here in this code, if literal_assignment_helper::operator, is not const, this chaining of numbers doesn't work; the comma-separated numbers are treated as ordinary comma-separated expressions.

                                                                                  Why must the operator be const? What are the rules here?

                                                                                  Also, what is the impact of an operator, that is not const? Is it ever going to be called?

                                                                                  ANSWER

                                                                                  Answered 2021-Dec-26 at 19:01
                                                                                  const literal_assignment_helper& operator,(int number) const;
                                                                                  

The comma operator in the helper returns a const reference. So to call a member operator through that reference, the operator has to be const-qualified.

                                                                                  If you remove all the constness, like

                                                                                  literal_assignment_helper& operator,(int number);
                                                                                  

                                                                                  that also seems to work.

                                                                                  Source https://stackoverflow.com/questions/70489004

                                                                                  QUESTION

                                                                                  Sorting a tensor list in ascending order
                                                                                  Asked 2021-Dec-05 at 21:29

                                                                                  I am working on a facial comparison app that will give me the closest n number of faces to my target face.

I have done this with dlib/face_recognition as it uses numpy arrays; however, I am now trying to do the same thing with facenet/pytorch and running into an issue because it uses tensors.

I have created a database of embeddings and I am giving the function one picture to compare to them. What I would like is for it to sort the list from lowest distance to highest and give me the lowest 5 results or so.

Here is the code I am working on that does the comparison. At this point I am feeding it a photo and asking it to compare against the embedding database.

                                                                                  def face_match(img_path, data_path): # img_path= location of photo, data_path= location of data.pt 
                                                                                      # getting embedding matrix of the given img
                                                                                      img_path = (os.getcwd()+'/1.jpg')
                                                                                      img = Image.open(img_path)
                                                                                      face = mtcnn(img) # returns cropped face and probability
    emb = resnet(face.unsqueeze(0)).detach() # detach() removes the embedding from the autograd graph
                                                                                  
                                                                                      saved_data = torch.load('data.pt') # loading data.pt file
                                                                                      embedding_list = saved_data[0] # getting embedding data
                                                                                      name_list = saved_data[1] # getting list of names
                                                                                      dist_list = [] # list of matched distances, minimum distance is used to identify the person
                                                                                      
                                                                                      for idx, emb_db in enumerate(embedding_list):
                                                                                          dist = torch.dist(emb, emb_db)
                                                                                          dist_list.append(dist)
                                                                                      
                                                                                      namestodistance = list(zip(name_list,dist_list))
                                                                                      
                                                                                      print(namestodistance)
                                                                                  
                                                                                  face_match('1.jpg', 'data.pt')
                                                                                  

This gives me all the names and their distance from the target photo in alphabetical order of the names, in the form (Adam Smith, tensor(1.2123432)), (Brian Smith, tensor(0.6545464)), etc. If 'tensor' weren't part of every entry I think it would be no problem to sort it; I don't quite understand why it's attached to the entries. I can cut this down to the best 5 by adding [0:5] at the end of dist_list, but I can't figure out how to sort the list. I think the problem is the word tensor being in every entry.

                                                                                  I have tried for idx, emb_db in enumerate(embedding_list): dist = torch.dist(emb, emb_db) sorteddist = torch.sort(dist) but for whatever reason this only returns one distance value, and it isn't the smallest one.

idx_min = dist_list.index(min(dist_list)) works fine in giving me the lowest value, which I can then match to a name using name_list[idx_min], giving the best match, but I would like the best 5 matches in order rather than just the best one.

Anyone able to solve this?

                                                                                  ANSWER

                                                                                  Answered 2021-Dec-05 at 16:43

Unfortunately I cannot test your code, but it seems to me like you are operating on a Python list of tuples. You can sort that by using a key:

                                                                                  namestodistance = [('Alice', .1), ('Bob', .3), ('Carrie', .2)]
                                                                                  names_top = sorted(namestodistance, key=lambda x: x[1])
                                                                                  print(names_top[:2])
                                                                                  

                                                                                  Of course you have to modify the anonymous function in key to return a sortable value instead of e.g. a torch.tensor.

                                                                                  This can be done by using key = lambda x: x[1].item().

                                                                                  Edit: To answer the question that crept up in the comments, we can refactor our code a little. Namely

namestodistance = list(map(lambda x: (x[0], x[1].item()), namestodistance))
                                                                                  names_top = sorted(namestodistance, key=lambda x: x[1])
                                                                                  print(names_top[:2])
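As a further sketch of the same idea (not from the original answer), the distances can also be stacked into one tensor and torch.topk used to take the five smallest directly, assuming dist_list holds the 0-dim distance tensors and name_list the matching names as in the question:

import torch

dists = torch.stack(dist_list)                           # list of 0-dim tensors -> shape (N,)
values, indices = torch.topk(dists, k=5, largest=False)  # five smallest distances
best_matches = [(name_list[i], v.item()) for v, i in zip(values, indices.tolist())]
print(best_matches)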
                                                                                  

                                                                                  Source https://stackoverflow.com/questions/70232894

                                                                                  QUESTION

                                                                                  How to shrink/expand facial features using Opencv?
                                                                                  Asked 2021-Nov-10 at 06:09

I'm trying to write an application that makes parts of a face image bigger or smaller with OpenCV and dlib. I detect facial landmarks using shape_predictor_68_face_landmarks.dat. In the following function, the tmp variable is supposed to be transformed in such a way that the nose or left eye is scaled on the image.

def visualize_facial_landmarks(image, shape, colors=None, alpha=0.75):
    # create two copies of the input image -- one for the
    # overlay and one for the final output image
    overlay = image.copy()
    output = image.copy()

    # if the colors list is None, initialize it with a unique
    # color for each facial landmark region
    if colors is None:
        colors = [(19, 199, 109), (79, 76, 240), (230, 159, 23),
                  (168, 100, 168), (158, 163, 32),
                  (163, 38, 32), (180, 42, 220)]

    # loop over the facial landmark regions individually
    for (i, name) in enumerate(FACIAL_LANDMARKS_INDEXES.keys()):
        # grab the (x, y)-coordinates associated with the face landmark
        (j, k) = FACIAL_LANDMARKS_INDEXES[name]
        pts = shape[j:k]

        facial_features_cordinates[name] = pts
        # only rescale the left eye and the nose
        if name in ("Left_Eye", "Nose"):
            minX = min(pts[:, 0])
            maxX = max(pts[:, 0])
            minY = min(pts[:, 1])
            maxY = max(pts[:, 1])

            # crop the landmark region out of the overlay
            tmp = overlay[minY:maxY, minX:maxX, :]
            h, w = tmp.shape[:2]
            s = 2  # scale factor

            # affine matrix that scales the crop by s about its centre
            # (the x translation uses the width, the y translation the height)
            Affine_Mat_w = [s, 0, w / 2.0 - s * w / 2.0]
            Affine_Mat_h = [0, s, h / 2.0 - s * h / 2.0]
            M = np.c_[Affine_Mat_w, Affine_Mat_h].T

            tmp = cv2.warpAffine(tmp, M, (w, h))
            overlay[minY:maxY, minX:maxX, :] = tmp

    return overlay
                                                                                  

As an example, the following image is attached:

                                                                                  ANSWER

                                                                                  Answered 2021-Nov-10 at 06:09
                                                                                  Update #1

Applying pinch and bulge distortion along the facial landmarks, in small amounts around the eyes and nose, could probably provide decent results without moving to another method, though there is a chance it will also noticeably distort eyeglasses if it affects a larger area. These should help.
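As a rough sketch of that idea (my own, not from the original answer; all names are illustrative), a local bulge/pinch warp around a landmark centre can be built with cv2.remap, where strength > 0 enlarges the region and strength < 0 shrinks it:

import cv2
import numpy as np

def bulge_region(img, cx, cy, radius, strength=0.3):
    # locally enlarge (strength > 0) or shrink (strength < 0) a circular
    # region centred on (cx, cy) using a radial remap
    h, w = img.shape[:2]
    ys, xs = np.indices((h, w), dtype=np.float32)
    dx, dy = xs - cx, ys - cy
    r = np.sqrt(dx * dx + dy * dy)
    inside = r < radius
    # sampling factor: (1 - strength) at the centre, 1.0 at the circle edge,
    # so the warp blends smoothly into the untouched surroundings
    k = np.ones_like(r)
    k[inside] = 1.0 - strength * (1.0 - r[inside] / radius)
    map_x = (cx + dx * k).astype(np.float32)
    map_y = (cy + dy * k).astype(np.float32)
    return cv2.remap(img, map_x, map_y, cv2.INTER_LINEAR)

Calling this with the centre and radius of, say, the nose landmarks from the 68-point predictor would rescale just that region.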

I am not sure how to do this in OpenCV alone without the face looking unnatural. Below is a general explanation based on my own exploration. Feel free to correct me if I made any mistake.

                                                                                  3D Mesh

One way I think current face beautification methods, such as those on Android cameras, work is to align a 3D face mesh or a whole head model on top of the original face.

It extracts the face texture using facial landmarks and aligns it with a corresponding 3D mesh that has the texture applied to it. This way the 3D mesh can be adjusted and the texture will follow the face geometry. There are probably additional steps involved, such as passing the result to another network or post-processing, to make it look more natural.

Mediapipe Face Mesh will probably also be helpful, as it provides dense 3D face landmarks along with 3D face models, UV visualization, and coordinates. There is also a discussion of UV unwrapping of the face in MediaPipe.

Example from https://github.com/YadiraF/DECA.

Example from https://github.com/sicxu/Deep3DFaceRecon_pytorch.

                                                                                  GAN

                                                                                  Another way is to use GANs to edit facial features, apply lighting, makeup etc.

Example from https://github.com/run-youngjoo/SC-FEGAN.

Another example: https://github.com/genforce/idinvert_pytorch.

                                                                                  Source https://stackoverflow.com/questions/69887034

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

                                                                                  Vulnerabilities

                                                                                  No vulnerabilities reported

                                                                                  Install dlib

                                                                                  You can download it from GitHub.

                                                                                  Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask them on the Stack Overflow community page.
                                                                                  Find more information at:
                                                                                  Find, review, and download reusable Libraries, Code Snippets, Cloud APIs from over 650 million Knowledge Items
                                                                                  Find more libraries
                                                                                  Explore Kits - Develop, implement, customize Projects, Custom Functions and Applications with kandi kits​
                                                                                  Save this library and start creating your kit
                                                                                  Install
                                                                                • PyPI

                                                                                  pip install dlib

                                                                                • CLONE
                                                                                • HTTPS

                                                                                  https://github.com/davisking/dlib.git

                                                                                • CLI

                                                                                  gh repo clone davisking/dlib

                                                                                • sshUrl

                                                                                  git@github.com:davisking/dlib.git


Consider Popular Machine Learning Libraries

• tensorflow by tensorflow
• youtube-dl by ytdl-org
• models by tensorflow
• pytorch by pytorch
• keras by keras-team

Try Top Libraries by davisking

• dlib-models by davisking (C++)
• pyimageconf2018 by davisking (C++)

Compare Machine Learning Libraries with Highest Support

• youtube-dl by ytdl-org
• scikit-learn by scikit-learn
• models by tensorflow
• tensorflow by tensorflow
• keras by keras-team
