Face_Pytorch | face recognition algorithms in pytorch framework, including arcface, cosface, sphereface and so on | Computer Vision library
kandi X-RAY | Face_Pytorch Summary
The implementation of popular face recognition algorithms in the PyTorch framework, including ArcFace, CosFace, SphereFace, and so on. All code is evaluated on PyTorch 0.4.0 with Python 3.6, Ubuntu 16.04.10, CUDA 9.1, and cuDNN 7.1, and partially evaluated on PyTorch 1.0.
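For orientation, below is a minimal sketch of the additive angular margin head that ArcFace-style training uses; the embedding size, class count, and the s/m hyperparameters are illustrative assumptions, not the repository's exact settings. The logits it returns are meant to be fed to a standard cross-entropy loss together with a CNN backbone that produces the embeddings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ArcMarginHead(nn.Module):
    """Additive angular margin (ArcFace-style) classification head.

    Illustrative sketch only; s and m are typical defaults, not
    necessarily the values used in Face_Pytorch.
    """
    def __init__(self, embedding_size=512, num_classes=10575, s=32.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_size))
        nn.init.xavier_uniform_(self.weight)
        self.s, self.m = s, m

    def forward(self, embeddings, labels):
        # Cosine similarity between L2-normalized embeddings and class weights
        cosine = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cosine.clamp(-1.0 + 1e-7, 1.0 - 1e-7))
        # Add the angular margin m only to the target-class logits
        target = F.one_hot(labels, num_classes=cosine.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.m), cosine)
        return logits * self.s  # feed to nn.CrossEntropyLoss
```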
Top functions reviewed by kandi - BETA
- Train the model
- Extract features from torch txt
- Calculate the coefficient for 10 folds
- Plot loss curves
- Calculate the threshold for the given scores
- Calculate accuracy
- Extract a feature
- Write a matplotlib matrix to a file
- Load a BAM model
- Extract features from torch Tensor
- Calculate the accuracy of 10 folds
- Return a list of all the blocks
- Load an MXNet record
- Read the names from a file
- Create a single layer
- Plot the loss curves
- Load a mobile face network
- Returns a DataLoader for a dataset
- Load a MAT file
- Read a matrix from file
Community Discussions
Trending Discussions on Face_Pytorch
QUESTION
I'm trying to train ArcFace with reference to a GitHub implementation.
As far as I know, ArcFace requires more than 200 training epochs on CASIA-WebFace with a large batch size.
Within 100 epochs of training, I stopped the training for a while because I needed the GPU for other tasks, and saved checkpoints of the model (ResNet) and the margin. Before it was stopped, the loss was between 0.3 and 1.0 and the training accuracy had reached 80-95%.
When I resumed ArcFace training by loading the checkpoint files with load_state_dict, the first batch looked normal, but then the loss suddenly increased sharply and the accuracy became very low.
Why did the loss suddenly increase? I had no other option, so I continued the training anyway, but the loss does not seem to be decreasing well even though the model had already been trained for over 100 epochs.
When I searched for similar issues, people said the problem was that the optimizer state was not saved (the reference GitHub page doesn't save the optimizer, and neither did I). Is that true?
ANSWER
Answered 2020-Oct-27 at 17:03
If you look at this line, you are decaying the learning rate of each parameter group by gamma.
This changed your learning rate once you reached the 100th epoch. Moreover, you did not save the optimizer state when saving your model.
As a result, your code restarted with the initial learning rate, i.e. 0.1, after resuming training, and that is what spiked your loss again.
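A minimal sketch of a checkpoint that also carries the optimizer and LR-scheduler state, so resuming continues from the decayed learning rate instead of restarting at 0.1. The model, milestones, and file name below are placeholders for illustration, not the reference repository's code.

```python
import torch
import torch.nn as nn

# Placeholders standing in for the ResNet backbone + margin head, SGD optimizer,
# and step LR scheduler from the question; values are illustrative only.
model = nn.Linear(512, 10575)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[35, 65, 95], gamma=0.1)

# Simulate 100 epochs so the scheduler has actually decayed the learning rate.
for _ in range(100):
    optimizer.step()   # dummy step; keeps the optimizer/scheduler call order valid
    scheduler.step()

# Save the optimizer and scheduler state alongside the weights.
torch.save({
    "epoch": 100,
    "model": model.state_dict(),
    "optimizer": optimizer.state_dict(),
    "scheduler": scheduler.state_dict(),
}, "checkpoint.pth")

# Resume: restore everything so training continues at the decayed learning rate,
# not at the initial 0.1.
ckpt = torch.load("checkpoint.pth", map_location="cpu")
model.load_state_dict(ckpt["model"])
optimizer.load_state_dict(ckpt["optimizer"])
scheduler.load_state_dict(ckpt["scheduler"])
start_epoch = ckpt["epoch"] + 1
print(optimizer.param_groups[0]["lr"])  # decayed value (1e-4 here), not the initial 0.1
```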
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported
Install Face_Pytorch
You can use Face_Pytorch like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.