face-alignment | :fire: 2D and 3D face alignment library built using PyTorch | Computer Vision library

 by 1adrianb | Python | Version: 1.4.1 | License: BSD-3-Clause

kandi X-RAY | face-alignment Summary

face-alignment is a Python library typically used in Artificial Intelligence, Computer Vision, Deep Learning, and PyTorch applications. face-alignment has no known bugs or vulnerabilities, has a build file available, has a Permissive License, and has high support. You can install it with 'pip install face-alignment' or download it from GitHub or PyPI.

:fire: 2D and 3D face alignment library built using PyTorch

            Support

              face-alignment has a highly active ecosystem.
              It has 6309 star(s) with 1287 fork(s). There are 171 watchers for this library.
              There were 4 major release(s) in the last 6 months.
              There are 61 open issues and 233 have been closed. On average, issues are closed in 124 days. There are 7 open pull requests and 0 closed pull requests.
              It has a positive sentiment in the developer community.
              The latest version of face-alignment is 1.4.1.

            Quality

              face-alignment has 0 bugs and 0 code smells.

            Security

              face-alignment has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
              face-alignment code analysis shows 0 unresolved vulnerabilities.
              There are 0 security hotspots that need review.

            License

              face-alignment is licensed under the BSD-3-Clause License. This license is Permissive.
              Permissive licenses have the least restrictions, and you can use them in most projects.

            Reuse

              face-alignment releases are available to install and integrate.
              Deployable package is available in PyPI.
              Build file is available. You can build the component from source.
              Installation instructions, examples and code snippets are available.
              face-alignment saves you 623 person hours of effort in developing the same functionality from scratch.
              It has 1348 lines of code, 97 functions and 24 files.
              It has high code complexity. Code complexity directly impacts maintainability of the code.

            Top functions reviewed by kandi - BETA

            kandi has reviewed face-alignment and discovered the functions below as its top functions. This is intended to give you an instant insight into the functionality face-alignment implements, and to help you decide if it suits your requirements.
            • Gets all Landmarks from an image
            • Get landmarks from image
            • Transform a point
            • Crops an image
            • Flips the image
            • Predict on the given image
            • Detect the face of the given image
            • Predict on a batch of images
            • Runs a batch of face detection
            • Compute the nms of a given threshold
            • Filter a list of bounding boxes
            • Detect the faces of the given image
            • Convert a tensor path to a numpy array
            • Find the version string
            • Read the contents of a file
            • Detect faces from a given image
            • Load anchors from file
            • Load anchors from numpy array
            • Detect face from image
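Several of the functions above ("Compute the nms of a given threshold", "Filter a list of bounding boxes") belong to the face detector's post-processing. As a rough illustration of the technique, here is a generic greedy non-maximum suppression sketch in NumPy; it shows the standard algorithm, not the library's exact implementation:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression.

    boxes: (N, 4) array of [x1, y1, x2, y2]; scores: (N,) confidences.
    Returns indices of the kept boxes, highest score first.
    """
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]          # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the current top-scoring box with the remaining boxes
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept box more than the threshold
        order = order[1:][iou <= iou_threshold]
    return keep
```

Lowering `iou_threshold` suppresses more aggressively; with heavily overlapping face detections a value around 0.3-0.5 is typical.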

            face-alignment Key Features

            No Key Features are available at this moment for face-alignment.

            face-alignment Examples and Code Snippets

            Python · 151 lines of code · License: Permissive (MIT)
            #/dev/mmcblk1 which is the sd card
            UUID=ff2b8c97-7882-4967-bc94-e41ed07f3b83 /media/mendel ext4 defaults 0 2
            $ cd /media/mendel
            # Create a swapfile else you'll run out of memory compiling.
            $ sudo mkdir swapfile
            # Now let's increase the size of swap  
            C++ · 133 lines of code · No license
             the pipeline is:
                (preprocessing) -> extractor -> filter -> classifier (or verifier)
            InsightFace in OneFlow,Preparations,Data preparations
            Python · 68 lines of code · No license
                train.idx
                train.rec
                property
                lfw.bin
                cfp_fp.bin
                agedb_30.bin
            python tools/mx_recordio_2_ofrecord_shuffled_npart.py  --data_dir datasets/faces_emore --output_filepath faces_emore/ofrecord/train --part_  
            face-alignment - detect landmarks in image
            Python · 53 lines of code · License: Non-SPDX (BSD 3-Clause "New" or "Revised" License)
            import face_alignment
            import matplotlib.pyplot as plt
            from mpl_toolkits.mplot3d import Axes3D
            from skimage import io
            import collections
            # Optionally set detector and some additional detector parameters
            face_detector = 'sfd'
            Image align with cv2 instead of HOG
            Python · 14 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
            def rect_to_bb(rect):
                # take a bounding box predicted by dlib and convert it
                # to the format (x, y, w, h) as we would normally do
                # with OpenCV
                x = rect.left()
                y = rect.top()
                w = rect.right() - x
                h = rect.bottom() - y
                return (x, y, w, h)
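To sanity-check the conversion, here is a self-contained version using a hypothetical stand-in for dlib's rectangle class (`FakeRect` is invented for illustration; dlib itself is not required):

```python
# FakeRect mimics dlib.rectangle, which exposes left()/top()/right()/bottom()
# as methods rather than attributes.
class FakeRect:
    def __init__(self, left, top, right, bottom):
        self._l, self._t, self._r, self._b = left, top, right, bottom
    def left(self): return self._l
    def top(self): return self._t
    def right(self): return self._r
    def bottom(self): return self._b

def rect_to_bb(rect):
    # Convert a dlib-style rectangle to OpenCV's (x, y, w, h) format.
    x = rect.left()
    y = rect.top()
    w = rect.right() - x
    h = rect.bottom() - y
    return (x, y, w, h)

print(rect_to_bb(FakeRect(10, 20, 110, 170)))  # (10, 20, 100, 150)
```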
            Google Colaboratory: Unable to open landmarks.dat
            Python · 3 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
            !wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
            !bunzip2 "shape_predictor_68_face_landmarks.dat.bz2"
            FaceAlign AttributeError: 'str' object has no attribute 'shape'
            Python · 14 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
            jobs = glob.glob("*.jpg")
            ##  # un-parallel
            for picname in jobs:
                aligned = FL.getAligns(picname)
            def getAligns(self,
                        use_cnn = False,
                        savepath = None,
            How to face alignment and crop?
            Python · 4 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
            images = dlib.get_face_chips(img, faces, size=320)
            image = dlib.get_face_chip(img, faces[0])
            TypeError: 'rectangle' object is not iterable after face alignment using Dlib FaceUtils
            Python · 8 lines of code · License: Strong Copyleft (CC BY-SA 4.0)
            for face in faces:
              x = face.left()
              y = face.top() #could be face.bottom() - not sure
              w = face.right() - face.left()
              h = face.bottom() - face.top()
              (x1, y1, w1, h1) = rect_to_bb(x,y,w,h)
              # rest same as above

            Community Discussions


            Why does Torch use ~700 MB of GPU memory when predicting with a 1.5 MB network
            Asked 2019-Apr-13 at 14:19

            I am very new to Torch/CUDA, and I'm trying to test the small binary network (~1.5mb) from https://github.com/1adrianb/binary-face-alignment, but I keep running into 'out of memory' issues.

            I am using a relatively weak GPU (NVIDIA Quadro K600) with ~900 MB of graphics memory on Ubuntu 16.04 with CUDA 10.0 and cuDNN 5.1. So I don't really care about performance, but I thought I would at least be able to run a small network for prediction, one image at a time (especially one that supposedly is aimed at those "with Limited Resources").

            I managed to run the code in headless mode and checked the memory consumption to be around 700 MB, which would explain why it fails immediately when I have an X-server running, which takes around 250 MB of GPU memory.

            I also added some logs to see how far along main.lua I get, and it's the call output:copy(model:forward(img)) on the very first image that runs out of memory.

            For reference, here's the main.lua code up until the crash:



            Answered 2019-Apr-11 at 20:18

            What usually consumes most of the memory are the activation maps (and gradients, when training). I am not familiar with this particular model and implementation, but I would say that you are using a "fake" binary network; by fake I mean they still use floating-point numbers to represent the binary values since most users are going to use their code on GPUs that do not fully support real binary operations. The authors even write in Section 5:

            Performance. In theory, by replacing all floating-point multiplications with bitwise XOR and making use of SWAR (Single instruction, multiple data within a register) [5], [6], the number of operations can be reduced up to 32x when compared against the multiplication-based convolution. However, in our tests, we observed speedups of up to 3.5x, when compared against cuBLAS, for matrix multiplications, a result being in accordance with those reported in [6]. We note that we did not conduct experiments on CPUs. However, given the fact that we used the same method for binarization as in [5], similar improvements in terms of speed, of the order of 58x, are to be expected: as the real-valued network takes 0.67 seconds to do a forward pass on an i7-3820 using a single core, a speedup close to 58x will allow the system to run in real-time. In terms of memory compression, by removing the biases, which have minimum impact (or no impact at all) on performance, and by grouping and storing every 32 weights in one variable, we can achieve a compression rate of 39x when compared against the single precision counterpart of Torch.

            In this context, a small model (w.r.t. number of parameters or model size in MiB) does not necessarily mean low memory footprint. It is likely that all this memory is being used to store the activation maps in single- or double-precision.
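The answer's point can be made concrete with a back-of-the-envelope calculation. The layer dimensions below are hypothetical, chosen only for illustration, and are not the actual binary-face-alignment architecture:

```python
# Rough estimate of activation-map memory for a conv net.
# Even a "fake" binary network stores activations as float32 on GPU.
channels, height, width = 256, 64, 64
bytes_per_value = 4                      # float32

one_map = channels * height * width * bytes_per_value
print(f"one activation map: {one_map / 2**20:.1f} MiB")   # 4.0 MiB

# A stacked-hourglass model keeps many such maps alive during a forward
# pass, since skip connections prevent freeing them early:
print(f"150 maps: {150 * one_map / 2**20:.0f} MiB")       # 600 MiB
```

So even with only ~1.5 MB of (packed) weights, holding a few hundred full-precision feature maps easily accounts for the ~700 MB observed.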

            Source https://stackoverflow.com/questions/55636577

            Community Discussions, Code Snippets contain sources that include Stack Exchange Network



            Install face-alignment

            The easiest way to install it is using either pip or conda. Alternatively, below you can find instructions to build it from source.


            All contributions are welcome. If you encounter any issue (including examples of images where it fails), feel free to open an issue. If you plan to add a new feature, please open an issue to discuss it prior to making a pull request.
          • PyPI

            pip install face-alignment

          • CLI

            gh repo clone 1adrianb/face-alignment
