face-alignment | 2D and 3D face alignment library built using PyTorch | Computer Vision library

by 1adrianb | Python Version: v1.3.4 | License: BSD-3-Clause

kandi X-RAY | face-alignment Summary

face-alignment is a Python library typically used in Artificial Intelligence, Computer Vision, Deep Learning, and PyTorch applications. face-alignment has no bugs, it has no vulnerabilities, it has a build file available, it has a Permissive License, and it has high support. You can install it using 'pip install face-alignment' or download it from GitHub or PyPI.

Support

  • face-alignment has a highly active ecosystem.
  • It has 4,961 stars, 1,066 forks, and 162 watchers.
  • It had no major release in the last 12 months.
  • There are 34 open issues and 204 closed issues; on average, issues are closed in 31 days. There are 5 open pull requests and 0 closed pull requests.
  • It has a positive sentiment in the developer community.
  • The latest version of face-alignment is v1.3.4.

Quality

  • face-alignment has 0 bugs and 0 code smells.

Security

  • Neither face-alignment nor its dependent libraries have any reported vulnerabilities.
  • face-alignment code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • face-alignment is licensed under the BSD-3-Clause License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • face-alignment releases are available to install and integrate.
  • Deployable package is available in PyPI.
  • Build file is available. You can build the component from source.
  • Installation instructions, examples and code snippets are available.
  • face-alignment saves you 623 person hours of effort in developing the same functionality from scratch.
  • It has 1348 lines of code, 97 functions and 24 files.
  • It has high code complexity, which directly impacts the maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed face-alignment and identified the functions below as its top functions. This is intended to give you instant insight into the functionality face-alignment implements, and to help you decide whether it suits your requirements.

  • Get landmarks from an image (see the sketch after this list).
  • Given a list of detections, return a weighted list of detections.
  • Run the face detection.
  • Calculate the predictions from the given heatmap.
  • Crop an image around a given center point.
  • Transform a point.
  • Transform a numpy array.
  • Resize an image.
  • Load a file from a URL.
  • Batch detection.
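
As a sketch of what the landmark functions above return in practice, the snippet below calls the public get_landmarks API on a single image. The image path is a placeholder, and the array shapes assume the library's standard 68-point annotation.

import face_alignment
from skimage import io

# Minimal sketch of the landmark output format; the image path is a placeholder.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False)
image = io.imread('path/to/image.jpg')

preds = fa.get_landmarks(image)  # one entry per detected face, or None if nothing is found
if preds is not None:
    for face_landmarks in preds:
        # Each entry is a NumPy array of shape (68, 2) for 2D landmarks
        # (or (68, 3) when LandmarksType._3D is requested instead).
        print(face_landmarks.shape)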

face-alignment Key Features

By default, the package uses the SFD face detector. However, users can alternatively use dlib, BlazeFace, or pre-existing ground-truth bounding boxes (see the sketch after the basic example below).

Features

import face_alignment
from skimage import io

fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, flip_input=False)

input = io.imread('../test/assets/aflw-test.jpg')
preds = fa.get_landmarks(input)
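
Building on the basic example above, the sketch below selects an alternative detector and the 3D landmark type. The face_detector and device keyword arguments follow the project README; treat the exact option strings (such as 'blazeface') as assumptions if your installed version differs.

import face_alignment
from skimage import io

# Sketch: use the BlazeFace detector instead of the default SFD detector,
# request 3D landmarks, and run on the GPU.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._3D,
                                  face_detector='blazeface',
                                  device='cuda',
                                  flip_input=False)

image = io.imread('../test/assets/aflw-test.jpg')
preds = fa.get_landmarks(image)  # list of (68, 3) arrays, one per detected face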

From source

git clone https://github.com/1adrianb/face-alignment
pip install ./face-alignment  # then install the cloned checkout with pip

Docker image

docker build -t face-alignment .

Citation

@inproceedings{bulat2017far,
  title={How far are we from solving the 2D \& 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks)},
  author={Bulat, Adrian and Tzimiropoulos, Georgios},
  booktitle={International Conference on Computer Vision},
  year={2017}
}

Community Discussions

Trending Discussions on face-alignment
  • Why does Torch use ~700mb of GPU memory when predicting with a 1.5mb network

QUESTION

Why does Torch use ~700mb of GPU memory when predicting with a 1.5mb network

Asked 2019-Apr-13 at 14:19

I am very new to Torch/CUDA, and I'm trying to test the small binary network (~1.5mb) from https://github.com/1adrianb/binary-face-alignment, but I keep running into 'out of memory' issues.

I am using a relatively weak GPU (NVIDIA Quadro K600) with ~900Mb of graphics memory on 16.04 Ubuntu with CUDA 10.0 and CudNN version 5.1. So I don't really care about performance, but I thought I would at least be able to run a small network for prediction, one image at a time (especially one that supposedly is aimed at those "with Limited Resources").

I managed to run the code in headless mode and checked the memory consumption to be around 700Mb, which would explain why it fails immediately when I have an X-server running which takes around 250Mb of GPU memory.

I also added some logs to see how far along main.lua I get, and it's the call output:copy(model:forward(img)) on the very first image that runs out of memory.

For reference, here's the main.lua code up until the crash:

    require 'torch'
    require 'nn'
    require 'cudnn'
    require 'paths'

    require 'bnn'
    require 'optim'

    require 'gnuplot'
    require 'image'
    require 'xlua'
    local utils = require 'utils'
    local opts = require('opts')(arg)

    print("Starting heap tracking")
    torch.setheaptracking(true)

    torch.setdefaulttensortype('torch.FloatTensor')
    torch.setnumthreads(1)
    -- torch.

    local model
    if opts.dataset == 'AFLWPIFA' then
        print('Not available for the moment. Support will be added soon')
        os.exit()
        model = torch.load('models/facealignment_binary_pifa.t7')
    else
        print("Loading model")
        model = torch.load('models/facealignment_binary_aflw.t7')
    end
    model:evaluate()

    local fileLists = utils.getFileList(opts)
    local predictions = {}
    local noPoints = 68
    if opts.dataset == 'AFLWPIFA' then noPoints = 34; end
    local output = torch.CudaTensor(1,noPoints,64,64)
    for i = 1, #fileLists do

        local img = image.load(fileLists[i].image)
        local originalSize = img:size()

        img = utils.crop(img, fileLists[i].center, fileLists[i].scale, 256)
        img = img:cuda():view(1,3,256,256)
        output:copy(model:forward(img))

So I have two major questions:

  1. What tools are there for debugging memory usage in torch?
  2. What are the plausible causes of this memory bloat?

It must be something more than just the network and the images that are loaded into the GPU. My best guess is that it's related to the LoadFileLists function, but I simply don't know enough torch or lua to go much further from there. Other answers indicate there really isn't support for showing how much memory a variable is taking.

ANSWER

Answered 2019-Apr-11 at 20:18

What usually consumes most of the memory are the activation maps (and gradients, when training). I am not familiar with this particular model and implementation, but I would say that you are using a "fake" binary network; by fake I mean they still use floating-point numbers to represent the binary values since most users are going to use their code on GPUs that do not fully support real binary operations. The authors even write in Section 5:

Performance. In theory, by replacing all floating-point multiplications with bitwise XOR and making use of the SWAR (Single instruction, multiple data within a register) [5], [6], the number of operations can be reduced up to 32x when compared against the multiplication-based convolution. However, in our tests, we observed speedups of up to 3.5x, when compared against cuBLAS, for matrix multiplications, a result being in accordance with those reported in [6]. We note that we did not conduct experiments on CPUs. However, given the fact that we used the same method for binarization as in [5], similar improvements in terms of speed, of the order of 58x, are to be expected: as the real-valued network takes 0.67 seconds to do a forward pass on a i7-3820 using a single core, a speedup close to x58 will allow the system to run in real-time. In terms of memory compression, by removing the biases, which have minimum impact (or no impact at all) on performance, and by grouping and storing every 32 weights in one variable, we can achieve a compression rate of 39x when compared against the single precision counterpart of Torch.

In this context, a small model (w.r.t. number of parameters or model size in MiB) does not necessarily mean low memory footprint. It is likely that all this memory is being used to store the activation maps in single- or double-precision.

Source https://stackoverflow.com/questions/55636577
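
The question and answer above concern the Lua Torch implementation. For the PyTorch-based face-alignment library itself, one rough way to see where GPU memory goes is to query PyTorch's CUDA caching allocator before and after a prediction. The sketch below assumes a CUDA-capable device and reuses the test image path from the examples above.

import torch
import face_alignment
from skimage import io

# Sketch: report allocator statistics after loading the models and after one forward pass.
fa = face_alignment.FaceAlignment(face_alignment.LandmarksType._2D, device='cuda')
print('after model load: %.1f MiB allocated' % (torch.cuda.memory_allocated() / 2**20))

image = io.imread('../test/assets/aflw-test.jpg')
preds = fa.get_landmarks(image)
print('after forward pass: %.1f MiB allocated, %.1f MiB peak'
      % (torch.cuda.memory_allocated() / 2**20,
         torch.cuda.max_memory_allocated() / 2**20))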

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install face-alignment

The easiest way to install it is using either pip or conda. Alternatively, you can build it from source as described in the "From source" section above.

Support

All contributions are welcome. If you encounter any issue (including examples of images where it fails), feel free to open an issue. If you plan to add a new feature, please open an issue to discuss it before making a pull request.
