
3DDFA | PyTorch improved version of TPAMI 2017 paper: Face Alignment | Machine Learning library

 by   cleardusk Python Version: v0.1 License: MIT


kandi X-RAY | 3DDFA Summary

3DDFA is a Python library typically used in Institutions, Learning, Education, Artificial Intelligence, Machine Learning, Deep Learning, and PyTorch applications. 3DDFA has no bugs, it has no vulnerabilities, it has a build file available, it has a Permissive License, and it has medium support. You can download it from GitHub.
This repo holds the PyTorch improved version of the paper: Face Alignment in Full Pose Range: A 3D Total Solution. Several works beyond the original paper have been added, including real-time training and additional training strategies. Therefore, this repo is an improved version of the original work. So far, this repo releases the pre-trained first-stage PyTorch models of the MobileNet-V1 structure, the pre-processed training and testing datasets, and the codebase. Note that the inference time is about 0.27 ms per image (with 128 images as an input batch) on a GeForce GTX TITAN X.

Support

  • 3DDFA has a medium active ecosystem.
  • It has 3012 star(s) with 585 fork(s). There are 123 watchers for this library.
  • It had no major release in the last 12 months.
  • There are 44 open issues and 149 have been closed. On average, issues are closed in 36 days. There is 1 open pull request and 0 closed requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of 3DDFA is v0.1.

Quality

  • 3DDFA has 0 bugs and 0 code smells.

Security

  • 3DDFA has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • 3DDFA code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • 3DDFA is licensed under the MIT License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • 3DDFA releases are available to install and integrate.
  • Build file is available. You can build the component from source.
  • Installation instructions are not available. Examples and code snippets are available.
  • 3DDFA saves you 951 person hours of effort in developing the same functionality from scratch.
  • It has 2167 lines of code, 144 functions and 29 files.
  • It has high code complexity. Code complexity directly impacts maintainability of the code.
Top functions reviewed by kandi - BETA

kandi has reviewed 3DDFA and discovered the following as its top functions. This is intended to give you an instant insight into the functionality 3DDFA implements, and to help you decide if it suits your requirements.

  • Calculate the weights for resampling.
  • Parse command line arguments.
  • Load a BBM model.
  • Initialize a DepthWiseBlock.
  • Calculate the mean NME (normalized mean error).
  • Draw a set of points.
  • Plot a box on an image.
  • Train the model.
  • Render a 3D image.
  • Check if a point is in a triangle (see the sketch after this list).
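
For illustration, the last of these, the point-in-triangle test, is commonly done with barycentric coordinates. Below is a generic NumPy sketch of that standard technique; the function name and signature are hypothetical, and this is not the repository's exact implementation.

import numpy as np

def point_in_triangle(p, a, b, c, eps=1e-12):
    # Barycentric point-in-triangle test (illustrative sketch).
    # p, a, b, c are 2D points of shape (2,); returns True if p lies
    # inside or on the triangle (a, b, c).
    p, a, b, c = (np.asarray(v, dtype=float) for v in (p, a, b, c))
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    if abs(denom) < eps:  # degenerate (zero-area) triangle
        return False
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return v >= 0 and w >= 0 and v + w <= 1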

3DDFA Key Features

The PyTorch improved version of TPAMI 2017 paper: Face Alignment in Full Pose Range: A 3D Total Solution.

Requirements

# installation instructions
sudo pip3 install torch torchvision # for the CPU version; see https://pytorch.org for more options
sudo pip3 install numpy scipy matplotlib
sudo pip3 install dlib==19.5.0 # 19.15+ versions may conflict with pytorch on Linux; this may take several minutes. If 19.5.0 raises errors, try a 19.15+ version.
sudo pip3 install opencv-python
sudo pip3 install cython
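
A quick way to sanity-check the environment afterwards is to import the packages and print their versions (a minimal sketch; the exact versions printed will depend on your setup):

# environment sanity check (illustrative); a failing import points to
# the package that needs reinstalling
import cv2, dlib, numpy, scipy, torch
print("torch", torch.__version__)
print("numpy", numpy.__version__)
print("dlib", dlib.__version__)
print("opencv", cv2.__version__)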

Usage

git clone https://github.com/cleardusk/3DDFA.git  # or git@github.com:cleardusk/3DDFA.git
cd 3DDFA

CPU

python3 speed_cpu.py

Training details

#!/usr/bin/env bash

LOG_ALIAS=$1
LOG_DIR="logs"
mkdir -p ${LOG_DIR}

LOG_FILE="${LOG_DIR}/${LOG_ALIAS}_`date +'%Y-%m-%d_%H:%M.%S'`.log"
#echo $LOG_FILE

./train.py --arch="mobilenet_1" \
    --start-epoch=1 \
    --loss=wpdc \
    --snapshot="snapshot/phase1_wpdc" \
    --param-fp-train='../train.configs/param_all_norm.pkl' \
    --param-fp-val='../train.configs/param_all_norm_val.pkl' \
    --warmup=5 \
    --opt-style=resample \
    --resample-num=132 \
    --batch-size=512 \
    --base-lr=0.02 \
    --epochs=50 \
    --milestones=30,40 \
    --print-freq=50 \
    --devices-id=0,1 \
    --workers=8 \
    --filelists-train="../train.configs/train_aug_120x120.list.train" \
    --filelists-val="../train.configs/train_aug_120x120.list.val" \
    --root="/path/to/train_aug_120x120" \
    --log-file="${LOG_FILE}"

Evaluation

python3 ./benchmark.py -c models/phase1_wpdc_vdc.pth.tar

Citation

@misc{3ddfa_cleardusk,
  author       = {Guo, Jianzhu and Zhu, Xiangyu and Lei, Zhen},
  title        = {3DDFA},
  howpublished = {\url{https://github.com/cleardusk/3DDFA}},
  year         = {2018}
}

@inproceedings{guo2020towards,
  title     = {Towards Fast, Accurate and Stable 3D Dense Face Alignment},
  author    = {Guo, Jianzhu and Zhu, Xiangyu and Yang, Yang and Yang, Fan and Lei, Zhen and Li, Stan Z},
  booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
  year      = {2020}
}

@article{zhu2017face,
  title     = {Face Alignment in Full Pose Range: A 3D Total Solution},
  author    = {Zhu, Xiangyu and Liu, Xiaoming and Lei, Zhen and Li, Stan Z},
  journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year      = {2017},
  publisher = {IEEE}
}

Community Discussions

Trending Discussions on Machine Learning
  • Using RNN Trained Model without pytorch installed
  • Flux.jl : Customizing optimizer
  • How can I check a confusion_matrix after fine-tuning with custom datasets?
  • CUDA OOM - But the numbers don't add upp?
  • How to compare baseline and GridSearchCV results fair?
  • Getting Error 524 while running jupyter lab in google cloud platform
  • TypeError: brain.NeuralNetwork is not a constructor
  • Ordinal Encoding or One-Hot-Encoding
  • How to increase dimension-vector size of BERT sentence-transformers embedding
  • How to identify what features affect predictions result?

QUESTION

Using RNN Trained Model without pytorch installed

Asked 2022-Feb-28 at 20:17

I have trained an RNN model with pytorch. I need to use the model for prediction in an environment where I'm unable to install pytorch because of some strange dependency issue with glibc. However, I can install numpy and scipy and other libraries. So, I want to use the trained model, with the network definition, without pytorch.

I have the weights of the model, as I saved it with its state dict in the standard way, but I could also save them using just json/pickle files or similar.

I also have the network definition, which depends on pytorch in a number of ways. This is my RNN network definition.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import random

torch.manual_seed(1)
random.seed(1)
device = torch.device('cpu')

class RNN(nn.Module):
  def __init__(self, input_size, hidden_size, output_size,num_layers, matching_in_out=False, batch_size=1):
    super(RNN, self).__init__()
    self.input_size = input_size
    self.hidden_size = hidden_size
    self.output_size = output_size
    self.num_layers = num_layers
    self.batch_size = batch_size
    self.matching_in_out = matching_in_out #length of input vector matches the length of output vector 
    self.lstm = nn.LSTM(input_size, hidden_size,num_layers)
    self.hidden2out = nn.Linear(hidden_size, output_size)
    self.hidden = self.init_hidden()
  def forward(self, feature_list):
    feature_list=torch.tensor(feature_list)
    
    if self.matching_in_out:
      lstm_out, _ = self.lstm( feature_list.view(len( feature_list), 1, -1))
      output_space = self.hidden2out(lstm_out.view(len( feature_list), -1))
      output_scores = torch.sigmoid(output_space) #we'll need to check if we need this sigmoid
      return output_scores #output_scores
    else:
      for i in range(len(feature_list)):
        cur_ft_tensor=feature_list[i]#.view([1,1,self.input_size])
        cur_ft_tensor=cur_ft_tensor.view([1,1,self.input_size])
        lstm_out, self.hidden = self.lstm(cur_ft_tensor, self.hidden)
        outs=self.hidden2out(lstm_out)
      return outs
  def init_hidden(self):
    #return torch.rand(self.num_layers, self.batch_size, self.hidden_size)
    return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device),
            torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device))

I am aware of this question, but I'm willing to go as low level as possible. I can work with numpy array instead of tensors, and reshape instead of view, and I don't need a device setting.

Based on the class definition above, what I can see here is that I only need the following components from torch to get an output from the forward function:

  • nn.LSTM
  • nn.Linear
  • torch.sigmoid

I think I can easily implement the sigmoid function using numpy. However, can I have some implementation of nn.LSTM and nn.Linear that does not involve pytorch? Also, how would I load the weights from the state dict into the new class?

So, the question is, how can I "translate" this RNN definition into a class that doesn't need pytorch, and how to use the state dict weights for it? Alternatively, is there a "light" version of pytorch, that I can use just to run the model and yield a result?
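
For reference, the sigmoid really is a one-liner in NumPy, and scipy.special.expit is a ready-made, numerically stable equivalent. A minimal sketch, using only the numpy/scipy dependencies described as available above:

import numpy as np
from scipy.special import expit  # scipy's numerically stable logistic sigmoid

def sigmoid(x):
    # NumPy equivalent of torch.sigmoid (expit additionally avoids
    # overflow warnings for large negative inputs)
    return 1.0 / (1.0 + np.exp(-x))

x = np.array([-2.0, 0.0, 3.5])
assert np.allclose(sigmoid(x), expit(x))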

EDIT

I think it might be useful to include the numpy/scipy equivalents for both nn.LSTM and nn.Linear. It would help us compare the numpy output to the torch output for the same code, and give us some modular code/functions to use. Specifically, a numpy equivalent for the following would be great:

rnn = nn.LSTM(10, 20, 2)
input = torch.randn(5, 3, 10)
h0 = torch.randn(2, 3, 20)
c0 = torch.randn(2, 3, 20)
output, (hn, cn) = rnn(input, (h0, c0))

and also for linear:

m = nn.Linear(20, 30)
input = torch.randn(128, 20)
output = m(input)
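
A minimal NumPy sketch of such equivalents is shown below. It follows the standard PyTorch LSTM equations and state-dict layout (gates stacked in the order i, f, g, o; weight_ih_l{k} of shape (4H, in) and weight_hh_l{k} of shape (4H, H)) and covers only the unidirectional, no-dropout case used above. The function names and the params structure are illustrative, not a drop-in library.

import numpy as np

def np_sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def np_linear(x, weight, bias):
    # Equivalent of nn.Linear: y = x @ W^T + b
    # weight: (out_features, in_features), bias: (out_features,)
    return x @ weight.T + bias

def np_lstm(x, h0, c0, params):
    # Equivalent of a stacked unidirectional nn.LSTM without dropout.
    # x: (seq_len, batch, input_size); h0, c0: (num_layers, batch, hidden_size)
    # params: one dict per layer holding the torch state-dict tensors as arrays:
    #   'weight_ih' (4H, in), 'weight_hh' (4H, H), 'bias_ih' (4H,), 'bias_hh' (4H,)
    hn, cn = h0.copy(), c0.copy()
    for layer, p in enumerate(params):
        h, c = hn[layer], cn[layer]
        outputs = []
        for t in range(x.shape[0]):
            gates = (x[t] @ p['weight_ih'].T + p['bias_ih']
                     + h @ p['weight_hh'].T + p['bias_hh'])
            i, f, g, o = np.split(gates, 4, axis=1)  # PyTorch gate order: i, f, g, o
            c = np_sigmoid(f) * c + np_sigmoid(i) * np.tanh(g)
            h = np_sigmoid(o) * np.tanh(c)
            outputs.append(h)
        x = np.stack(outputs)  # this layer's output is the next layer's input
        hn[layer], cn[layer] = h, c
    return x, (hn, cn)

To validate the port (done once, in an environment where torch is available), one can copy the weights out of the state dict and compare outputs on the same random inputs:

import torch
import torch.nn as nn

rnn = nn.LSTM(10, 20, 2)
sd = {k: v.numpy() for k, v in rnn.state_dict().items()}
params = [{'weight_ih': sd[f'weight_ih_l{k}'], 'weight_hh': sd[f'weight_hh_l{k}'],
           'bias_ih': sd[f'bias_ih_l{k}'], 'bias_hh': sd[f'bias_hh_l{k}']}
          for k in range(rnn.num_layers)]

X, h0, c0 = torch.randn(5, 3, 10), torch.randn(2, 3, 20), torch.randn(2, 3, 20)
with torch.no_grad():
    out_torch, _ = rnn(X, (h0, c0))
out_np, _ = np_lstm(X.numpy(), h0.numpy(), c0.numpy(), params)
assert np.allclose(out_torch.numpy(), out_np, atol=1e-5)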

ANSWER

Answered 2022-Feb-17 at 10:47

You should try to export the model using torch.onnx. Its documentation page gives you an example you can start with.

An alternative is to use TorchScript, but that requires the torch libraries.

Both of these can be run without Python. You can load a TorchScript model in a C++ application: https://pytorch.org/tutorials/advanced/cpp_export.html

ONNX is much more portable, and you can use it from languages such as C#, Java, or JavaScript via https://onnxruntime.ai/ (even in the browser).

A running example

Below, your example is modified slightly to get past the errors I found.

Notice that with tracing, any if/elif/else, for, or while construct will be unrolled.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import random

torch.manual_seed(1)
random.seed(1)
device = torch.device('cpu')

class RNN(nn.Module):
  def __init__(self, input_size, hidden_size, output_size,num_layers, matching_in_out=False, batch_size=1):
    super(RNN, self).__init__()
    self.input_size = input_size
    self.hidden_size = hidden_size
    self.output_size = output_size
    self.num_layers = num_layers
    self.batch_size = batch_size
    self.matching_in_out = matching_in_out #length of input vector matches the length of output vector 
    self.lstm = nn.LSTM(input_size, hidden_size,num_layers)
    self.hidden2out = nn.Linear(hidden_size, output_size)
  def forward(self, x, h0, c0):
    lstm_out, (hidden_a, hidden_b) = self.lstm(x, (h0, c0))
    outs=self.hidden2out(lstm_out)
    return outs, (hidden_a, hidden_b)
  def init_hidden(self):
    #return torch.rand(self.num_layers, self.batch_size, self.hidden_size)
    return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach(),
            torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach())

# wrapper that unpacks the kwargs dict passed in the onnx.export call
class MWrapper(nn.Module):
    def __init__(self, model):
        super(MWrapper, self).__init__()
        self.model = model
    def forward(self, kwargs):
        return self.model(**kwargs)

Run an example

rnn = RNN(10, 10, 10, 3)
X = torch.randn(3,1,10)
h0,c0  = rnn.init_hidden()
print(rnn(X, h0, c0)[0])

Use the same input to trace the model and export an onnx file


torch.onnx.export(MWrapper(rnn), {'x': X, 'h0': h0, 'c0': c0}, 'rnn.onnx',
                  dynamic_axes={'x':  {1: 'N'},
                                'h0': {1: 'N'},
                                'c0': {1: 'N'}},
                  input_names=['x', 'h0', 'c0'],
                  output_names=['y', 'hn', 'cn'])

Notice that you can use symbolic values for the dimensions of some axes of some inputs. Unspecified dimensions will be fixed with the values from the traced inputs. By default LSTM uses dimension 1 as batch.

Next, we load the ONNX model and pass it the same inputs:

import onnxruntime
ort_model = onnxruntime.InferenceSession('rnn.onnx')
print(ort_model.run(['y'], {'x':X.numpy(), 'c0':c0.numpy(), 'h0':h0.numpy()}))
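
To sanity-check the export, one can compare the onnxruntime output against the original torch output, and also feed a different batch size to confirm that the dynamic 'N' axis works (a small sketch reusing the objects defined above):

import numpy as np

# same inputs: the outputs should agree up to float32 tolerance
y_torch = rnn(X, h0, c0)[0].detach().numpy()
y_onnx = ort_model.run(['y'], {'x': X.numpy(), 'h0': h0.numpy(), 'c0': c0.numpy()})[0]
np.testing.assert_allclose(y_torch, y_onnx, rtol=1e-4, atol=1e-5)

# different batch size along the dynamic axis (dimension 1)
X2, h02, c02 = torch.randn(3, 4, 10), torch.randn(3, 4, 10), torch.randn(3, 4, 10)
y2_torch = rnn(X2, h02, c02)[0].detach().numpy()
y2_onnx = ort_model.run(['y'], {'x': X2.numpy(), 'h0': h02.numpy(), 'c0': c02.numpy()})[0]
np.testing.assert_allclose(y2_torch, y2_onnx, rtol=1e-4, atol=1e-5)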

Source https://stackoverflow.com/questions/71146140

Community Discussions and Code Snippets contain sources from the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install 3DDFA

You can download it from GitHub.
You can use 3DDFA like any standard Python library. You will need a development environment consisting of a Python distribution including header files, a compiler, pip, and git. Make sure that your pip, setuptools, and wheel are up to date. When using pip, it is generally recommended to install packages in a virtual environment to avoid changes to the system.

Support

Jianzhu Guo (郭建珠) [Homepage, Google Scholar]: jianzhu.guo@nlpr.ia.ac.cn or guojianzhu1994@foxmail.com.
