
face.evoLVe.PyTorch | High-Performance Face Recognition Library on PyTorch | Computer Vision library

 by   ZhaoJ9014 Python Version: Current License: MIT


kandi X-RAY | face.evoLVe.PyTorch Summary

face.evoLVe.PyTorch is a Python library typically used in Artificial Intelligence, Computer Vision, Deep Learning, PyTorch, and TensorFlow applications. face.evoLVe.PyTorch has no reported bugs, no reported vulnerabilities, a permissive license, and medium support. However, no build file is available. You can download it from GitHub.
  • :white_check_mark: CLOSED 04 July 2019: ~~We will share several publicly available datasets on face anti-spoofing/liveness detection to facilitate related research and analytics.~~
  • :white_check_mark: CLOSED 07 June 2019: ~~We are training a better-performing IR-152 model on MS-Celeb-1M_Align_112x112 and will release the model soon.~~
  • :white_check_mark: CLOSED 23 May 2019: ~~We share three publicly available datasets to facilitate research on heterogeneous face recognition and analytics. Please refer to Sec. Data Zoo for details.~~
  • :white_check_mark: CLOSED 23 Jan 2019: ~~We share the name lists and pair-wise overlapping lists of several widely-used face recognition datasets to help researchers/engineers quickly remove the overlapping parts between their own private datasets and the public datasets. Please refer to Sec. Data Zoo for details.~~
  • :white_check_mark: CLOSED 23 Jan 2019: ~~The current distributed training schema with multiple GPUs under PyTorch and other mainstream platforms parallelizes the backbone across GPUs while relying on a single master to compute the final bottleneck (fully-connected/softmax) layer. This is not an issue for conventional face recognition with a moderate number of identities. However, it struggles with large-scale face recognition, which requires recognizing millions of identities in the real world. The master can hardly hold the oversized final layer while the slaves still have redundant computation resources, leading to small-batch training or even failed training. To address this problem, we are developing a highly elegant, effective, and efficient distributed training schema with multiple GPUs under PyTorch, supporting not only the backbone but also the head with the fully-connected (softmax) layer, to facilitate high-performance large-scale face recognition. We will add this support to our repo.~~
  • :white_check_mark: CLOSED 22 Jan 2019: ~~We have released two feature extraction APIs for extracting features from pre-trained models, implemented with PyTorch built-in functions and OpenCV, respectively. Please check ./util/extract_feature_v1.py and ./util/extract_feature_v2.py.~~
  • :white_check_mark: CLOSED 22 Jan 2019: ~~We are fine-tuning our released IR-50 model on our private Asia face data, which will be released soon to facilitate high-performance Asia face recognition.~~
  • :white_check_mark: CLOSED 21 Jan 2019: ~~We are training a better-performing IR-50 model on MS-Celeb-1M_Align_112x112 and will replace the current model soon.~~

Support

  • face.evoLVe.PyTorch has a medium active ecosystem.
  • It has 2420 star(s) with 618 fork(s). There are 109 watchers for this library.
  • It had no major release in the last 12 months.
  • There are 56 open issues and 88 have been closed. On average, issues are closed in 12 days. There is 1 open pull request and 0 closed requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of face.evoLVe.PyTorch is current.

Quality

  • face.evoLVe.PyTorch has 0 bugs and 0 code smells.

Security

  • face.evoLVe.PyTorch has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • face.evoLVe.PyTorch code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • face.evoLVe.PyTorch is licensed under the MIT License. This license is Permissive.
  • Permissive licenses have the least restrictions, and you can use them in most projects.

Reuse

  • face.evoLVe.PyTorch releases are not available. You will need to build from source code and install.
  • face.evoLVe.PyTorch has no build file. You will need to create the build yourself to build the component from source.
  • Installation instructions are not available. Examples and code snippets are available.
  • face.evoLVe.PyTorch saves you 1081 person hours of effort in developing the same functionality from scratch.
  • It has 6300 lines of code, 385 functions and 67 files.
  • It has medium code complexity. Code complexity directly impacts maintainability of the code.

face.evoLVe.PyTorch Key Features

🔥🔥High-Performance Face Recognition Library on PyTorch🔥🔥

Usage

./data/db_name/
        -> id1/
            -> 1.jpg
            -> ...
        -> id2/
            -> 1.jpg
            -> ...
        -> ...
            -> ...
            -> ...
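The expected on-disk layout above can be sanity-checked with a few lines of Python. This is a sketch; the helper name and the extension filter are our own additions:

```python
import os

def count_images_per_identity(db_root):
    """Return {identity_folder: number_of_images} for a dataset laid out as
    db_root/id*/N.jpg, matching the directory structure shown above."""
    counts = {}
    for identity in sorted(os.listdir(db_root)):
        id_dir = os.path.join(db_root, identity)
        if os.path.isdir(id_dir):
            counts[identity] = sum(
                1 for f in os.listdir(id_dir)
                if f.lower().endswith(('.jpg', '.jpeg', '.png'))
            )
    return counts
```

A quick way to spot empty or under-populated identity folders before training.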

Face Alignment

from PIL import Image
from detector import detect_faces
from visualization_utils import show_results

img = Image.open('some_img.jpg') # modify the image path to yours
bounding_boxes, landmarks = detect_faces(img) # detect bboxes and landmarks for all faces in the image
show_results(img, bounding_boxes, landmarks) # visualize the results
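Beyond visualizing, the detected boxes can be turned into crop rectangles for PIL's Image.crop. A sketch, assuming the MTCNN-style box format [x1, y1, x2, y2, score] typically returned by detectors like detect_faces (the helper name and margin handling are ours):

```python
def clamp_boxes(bounding_boxes, img_width, img_height, margin=0):
    """Convert detector boxes [x1, y1, x2, y2, score] into integer crop
    rectangles clamped to the image bounds, optionally padded by `margin`.
    Each rectangle can be passed directly to PIL's Image.crop()."""
    rects = []
    for box in bounding_boxes:
        x1, y1, x2, y2 = [int(v) for v in box[:4]]
        rects.append((
            max(0, x1 - margin),           # left, never negative
            max(0, y1 - margin),           # top
            min(img_width, x2 + margin),   # right, never past the image edge
            min(img_height, y2 + margin),  # bottom
        ))
    return rects
```

Clamping matters because detectors can return boxes that extend slightly outside the image.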

Data Processing

python remove_lowshot.py -root [root] -min_num [min_num]

# python remove_lowshot.py -root './data/train' -min_num 10
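The idea behind remove_lowshot.py can be approximated as follows (a sketch of the concept, not the repository's exact implementation): identity folders with fewer than min_num images are deleted, so every remaining class has enough samples to train on.

```python
import os
import shutil

def remove_lowshot(root, min_num):
    """Delete identity folders under `root` that contain fewer than `min_num`
    files. Returns the list of removed folder names."""
    removed = []
    for identity in sorted(os.listdir(root)):
        id_dir = os.path.join(root, identity)
        if os.path.isdir(id_dir) and len(os.listdir(id_dir)) < min_num:
            shutil.rmtree(id_dir)
            removed.append(identity)
    return removed
```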

Training and Validation

import torch

configurations = {
    1: dict(
        SEED = 1337, # random seed to reproduce results

        DATA_ROOT = '/media/pc/6T/jasonjzhao/data/faces_emore', # the parent root where your train/val/test data are stored
        MODEL_ROOT = '/media/pc/6T/jasonjzhao/buffer/model', # the root to buffer your checkpoints
        LOG_ROOT = '/media/pc/6T/jasonjzhao/buffer/log', # the root to log your train/val status
        BACKBONE_RESUME_ROOT = './', # the root to resume training from a saved checkpoint
        HEAD_RESUME_ROOT = './', # the root to resume training from a saved checkpoint

        BACKBONE_NAME = 'IR_SE_50', # support: ['ResNet_50', 'ResNet_101', 'ResNet_152', 'IR_50', 'IR_101', 'IR_152', 'IR_SE_50', 'IR_SE_101', 'IR_SE_152']
        HEAD_NAME = 'ArcFace', # support:  ['Softmax', 'ArcFace', 'CosFace', 'SphereFace', 'Am_softmax']
        LOSS_NAME = 'Focal', # support: ['Focal', 'Softmax']

        INPUT_SIZE = [112, 112], # support: [112, 112] and [224, 224]
        RGB_MEAN = [0.5, 0.5, 0.5], # for normalize inputs to [-1, 1]
        RGB_STD = [0.5, 0.5, 0.5],
        EMBEDDING_SIZE = 512, # feature dimension
        BATCH_SIZE = 512,
        DROP_LAST = True, # whether drop the last batch to ensure consistent batch_norm statistics
        LR = 0.1, # initial LR
        NUM_EPOCH = 125, # total epoch number (use the first 1/25 epochs to warm up)
        WEIGHT_DECAY = 5e-4, # do not apply to batch_norm parameters
        MOMENTUM = 0.9,
        STAGES = [35, 65, 95], # epoch stages to decay learning rate

        DEVICE = torch.device("cuda:0" if torch.cuda.is_available() else "cpu"),
        MULTI_GPU = True, # flag to use multiple GPUs; if you choose to train with a single GPU, you should first run "export CUDA_VISIBLE_DEVICES=device_id" to specify the GPU card you want to use
        GPU_ID = [0, 1, 2, 3], # specify your GPU ids
        PIN_MEMORY = True,
        NUM_WORKERS = 0,
),
}
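The STAGES setting above drives a step-decay learning-rate schedule: the LR is reduced each time training passes one of the listed epochs. A minimal pure-Python sketch of that schedule, assuming the conventional divide-by-10 decay that MultiStepLR defaults to (check train.py for the repository's exact rule):

```python
def lr_at_epoch(epoch, initial_lr=0.1, stages=(35, 65, 95), gamma=0.1):
    """Step-decay schedule: multiply the LR by `gamma` for each stage epoch
    already passed. Mirrors torch.optim.lr_scheduler.MultiStepLR with
    milestones=stages and the default gamma=0.1."""
    lr = initial_lr
    for stage in stages:
        if epoch >= stage:
            lr *= gamma
    return lr
```

With the configuration above, the LR would be 0.1 until epoch 35, 0.01 until epoch 65, and so on.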

Data Zoo

unzip casia-maxpy-clean.zip    
cd casia-maxpy-clean    
zip -F CASIA-maxpy-clean.zip --out CASIA-maxpy-clean_fix.zip    
unzip CASIA-maxpy-clean_fix.zip

Model Zoo

INPUT_SIZE: [112, 112]; RGB_MEAN: [0.5, 0.5, 0.5]; RGB_STD: [0.5, 0.5, 0.5]; BATCH_SIZE: 512 (drop the last batch to ensure consistent batch_norm statistics); Initial LR: 0.1; NUM_EPOCH: 120; WEIGHT_DECAY: 5e-4 (do not apply to batch_norm parameters); MOMENTUM: 0.9; STAGES: [30, 60, 90]; Augmentation: Random Crop + Horizontal Flip; Imbalanced Data Processing: Weighted Random Sampling; Solver: SGD; GPUs: 4 NVIDIA Tesla P40 in Parallel
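The "Weighted Random Sampling" step listed above counters class imbalance by drawing each image with probability inversely proportional to its class size. A sketch of how the per-sample weights could be computed (the helper name is ours; the resulting list is the kind of input PyTorch's WeightedRandomSampler expects):

```python
from collections import Counter

def per_sample_weights(labels):
    """Weight each sample by 1 / (size of its class), so every identity is
    drawn with roughly equal probability regardless of how many images it has."""
    class_counts = Counter(labels)
    return [1.0 / class_counts[y] for y in labels]
```

Each class then contributes the same total weight, so a 3-image identity and a 3000-image identity are sampled about equally often.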

Citation

@article{zhao2020towards,
title={Towards age-invariant face recognition},
author={Zhao, Jian and Yan, Shuicheng and Feng, Jiashi},
journal={T-PAMI},
year={2020}
}


@article{liang2020fine,
title={Fine-grained facial expression recognition in the wild},
author={Liang, Liqian and Lang, Congyan and Li, Yidong and Feng, Songhe and Zhao, Jian},
journal={T-IFS},
pages={482--494},
year={2020}
}


@article{tu2020learning,
title={Learning generalizable and identity-discriminative representations for face anti-spoofing},
author={Tu, Xiaoguang and Ma, Zheng and Zhao, Jian and Du, Guodong and Xie, Mei and Feng, Jiashi},
journal={T-IST},
pages={1--19},
year={2020}
}


@article{tu20203d,
title={3D face reconstruction from a single image assisted by 2D face images in the wild},
author={Tu, Xiaoguang and Zhao, Jian and Xie, Mei and Jiang, Zihang and Balamurugan, Akshaya and Luo, Yao and Zhao, Yang and He, Lingxiao and Ma, Zheng and Feng, Jiashi},
journal={T-MM},
year={2020}
}


@inproceedings{wang2020learning,
title={Learning to Detect Head Movement in Unconstrained Remote Gaze Estimation in the Wild},
author={Wang, Zhecan and Zhao, Jian and Lu, Cheng and Yang, Fan and Huang, Han and Guo, Yandong and others},
booktitle={WACV},
pages={3443--3452},
year={2020}
}


@article{zhao2019recognizing,
title={Recognizing Profile Faces by Imagining Frontal View},
author={Zhao, Jian and Xing, Junliang and Xiong, Lin and Yan, Shuicheng and Feng, Jiashi},
journal={IJCV},
pages={1--19},
year={2019}
}


@article{kong2019cross,
title={Cross-Resolution Face Recognition via Prior-Aided Face Hallucination and Residual Knowledge Distillation},
author={Kong, Hanyang and Zhao, Jian and Tu, Xiaoguang and Xing, Junliang and Shen, Shengmei and Feng, Jiashi},
journal={arXiv preprint arXiv:1905.10777},
year={2019}
}


@article{tu2019joint,
title={Joint 3D face reconstruction and dense face alignment from a single image with 2D-assisted self-supervised learning},
author={Tu, Xiaoguang and Zhao, Jian and Jiang, Zihang and Luo, Yao and Xie, Mei and Zhao, Yang and He, Linxiao and Ma, Zheng and Feng, Jiashi},
journal={arXiv preprint arXiv:1903.09359},
year={2019}
}     


@inproceedings{zhao2019multi,
title={Multi-Prototype Networks for Unconstrained Set-based Face Recognition},
author={Zhao, Jian and Li, Jianshu and Tu, Xiaoguang and Zhao, Fang and Xin, Yuan and Xing, Junliang and Liu, Hengzhu and Yan, Shuicheng and Feng, Jiashi},
booktitle={IJCAI},
year={2019}
}


@inproceedings{zhao2019look,
title={Look Across Elapse: Disentangled Representation Learning and Photorealistic Cross-Age Face Synthesis for Age-Invariant Face Recognition},
author={Zhao, Jian and Cheng, Yu and Cheng, Yi and Yang, Yang and Lan, Haochong and Zhao, Fang and Xiong, Lin and Xu, Yan and Li, Jianshu and Pranata, Sugiri and others},
booktitle={AAAI},
year={2019}
}


@article{tu2019learning,
title={Learning Generalizable and Identity-Discriminative Representations for Face Anti-Spoofing},
author={Tu, Xiaoguang and Zhao, Jian and Xie, Mei and Du, Guodong and Zhang, Hengsheng and Li, Jianshu and Ma, Zheng and Feng, Jiashi},
journal={arXiv preprint arXiv:1901.05602},
year={2019}
}


@article{zhao20183d,
title={3D-Aided Dual-Agent GANs for Unconstrained Face Recognition},
author={Zhao, Jian and Xiong, Lin and Li, Jianshu and Xing, Junliang and Yan, Shuicheng and Feng, Jiashi},
journal={T-PAMI},
year={2018}
}


@inproceedings{zhao2018towards,
title={Towards Pose Invariant Face Recognition in the Wild},
author={Zhao, Jian and Cheng, Yu and Xu, Yan and Xiong, Lin and Li, Jianshu and Zhao, Fang and Jayashree, Karlekar and Pranata, Sugiri and Shen, Shengmei and Xing, Junliang and others},
booktitle={CVPR},
pages={2207--2216},
year={2018}
}


@inproceedings{zhao3d,
title={3D-Aided Deep Pose-Invariant Face Recognition},
author={Zhao, Jian and Xiong, Lin and Cheng, Yu and Cheng, Yi and Li, Jianshu and Zhou, Li and Xu, Yan and Karlekar, Jayashree and Pranata, Sugiri and Shen, Shengmei and others},
booktitle={IJCAI},
pages={1184--1190},
year={2018}
}


@inproceedings{zhao2018dynamic,
title={Dynamic Conditional Networks for Few-Shot Learning},
author={Zhao, Fang and Zhao, Jian and Yan, Shuicheng and Feng, Jiashi},
booktitle={ECCV},
pages={19--35},
year={2018}
}


@inproceedings{zhao2017dual,
title={Dual-agent gans for photorealistic and identity preserving profile face synthesis},
author={Zhao, Jian and Xiong, Lin and Jayashree, Panasonic Karlekar and Li, Jianshu and Zhao, Fang and Wang, Zhecan and Pranata, Panasonic Sugiri and Shen, Panasonic Shengmei and Yan, Shuicheng and Feng, Jiashi},
booktitle={NeurIPS},
pages={66--76},
year={2017}
}


@inproceedings{zhao2017marginalized,
title={Marginalized cnn: Learning deep invariant representations},
author={Zhao, Jian and Li, Jianshu and Zhao, Fang and Yan, Shuicheng and Feng, Jiashi},
booktitle={BMVC},
year={2017}
}


@inproceedings{cheng2017know,
title={Know you at one glance: A compact vector representation for low-shot learning},
author={Cheng, Yu and Zhao, Jian and Wang, Zhecan and Xu, Yan and Jayashree, Karlekar and Shen, Shengmei and Feng, Jiashi},
booktitle={ICCVW},
pages={1924--1932},
year={2017}
}


@inproceedings{xu2017high,
title={High performance large scale face recognition with multi-cognition softmax and feature retrieval},
author={Xu, Yan and Cheng, Yu and Zhao, Jian and Wang, Zhecan and Xiong, Lin and Jayashree, Karlekar and Tamura, Hajime and Kagaya, Tomoyuki and Shen, Shengmei and Pranata, Sugiri and others},
booktitle={ICCVW},
pages={1898--1906},
year={2017}
}


@inproceedings{wangconditional,
title={Conditional Dual-Agent GANs for Photorealistic and Annotation Preserving Image Synthesis},
author={Wang, Zhecan and Zhao, Jian and Cheng, Yu and Xiao, Shengtao and Li, Jianshu and Zhao, Fang and Feng, Jiashi and Kassim, Ashraf},
booktitle={BMVCW},
}


@inproceedings{li2017integrated,
title={Integrated face analytics networks through cross-dataset hybrid training},
author={Li, Jianshu and Xiao, Shengtao and Zhao, Fang and Zhao, Jian and Li, Jianan and Feng, Jiashi and Yan, Shuicheng and Sim, Terence},
booktitle={ACM MM},
pages={1531--1539},
year={2017}
}


@article{xiong2017good,
title={A good practice towards top performance of face recognition: Transferred deep feature fusion},
author={Xiong, Lin and Karlekar, Jayashree and Zhao, Jian and Cheng, Yi and Xu, Yan and Feng, Jiashi and Pranata, Sugiri and Shen, Shengmei},
journal={arXiv preprint arXiv:1704.00438},
year={2017}
}


@article{zhao2017robust,
title={Robust lstm-autoencoders for face de-occlusion in the wild},
author={Zhao, Fang and Feng, Jiashi and Zhao, Jian and Yang, Wenhan and Yan, Shuicheng},
journal={T-IP},
volume={27},
number={2},
pages={778--790},
year={2017}
}


@inproceedings{li2016robust,
title={Robust face recognition with deep multi-view representation learning},
author={Li, Jianshu and Zhao, Jian and Zhao, Fang and Liu, Hao and Li, Jing and Shen, Shengmei and Feng, Jiashi and Sim, Terence},
booktitle={ACM MM},
pages={1068--1072},
year={2016}
}

Community Discussions

Trending Discussions on Computer Vision
  • Image similarity in swift
  • When using pandas_profiling: "ModuleNotFoundError: No module named 'visions.application'"
  • Classify handwritten text using Google Cloud Vision
  • cv2 findChessboardCorners does not detect corners
  • Fastest way to get the RGB average inside of a non-rectangular contour in the CMSampleBuffer
  • UIViewController can't override method from it's superclass
  • X and Y-axis swapped in Vision Framework Swift
  • Swift's Vision framework not recognizing Japanese characters
  • Boxing large objects in image containing both large and small objects of similar color and in high density from a picture
  • Create a LabVIEW IMAQ image from a binary buffer/file with and without NI Vision

QUESTION

Image similarity in swift

Asked 2022-Mar-25 at 11:42

The Swift Vision similarity feature is able to assign a number to the variance between 2 images, where 0 variance means the images are the same. As the number increases, there is more and more variance between the images.

What I am trying to do is turn this into a percentage of similarity. So one image is for example 80% similar to the other image. Any ideas how I could arrange the logic to accomplish this:

import UIKit
import Vision

func featureprintObservationForImage(atURL url: URL) -> VNFeaturePrintObservation? {
    let requestHandler = VNImageRequestHandler(url: url, options: [:])
    let request = VNGenerateImageFeaturePrintRequest()
    do {
        try requestHandler.perform([request])
        return request.results?.first as? VNFeaturePrintObservation
    } catch {
        print("Vision error: \(error)")
        return nil
    }
}

let apple1 = featureprintObservationForImage(atURL: Bundle.main.url(forResource: "apple1", withExtension: "jpg")!)
let apple2 = featureprintObservationForImage(atURL: Bundle.main.url(forResource: "apple2", withExtension: "jpg")!)
let pear = featureprintObservationForImage(atURL: Bundle.main.url(forResource: "pear", withExtension: "jpg")!)

var distance = Float(0)
try apple1!.computeDistance(&distance, to: apple2!)
var distance2 = Float(0)
try apple1!.computeDistance(&distance2, to: pear!)

ANSWER

Answered 2022-Mar-25 at 10:26

It depends on how you want to scale it. If you just want the percentage you could just use Float.greatestFiniteMagnitude as the maximum value.

(1 - distance / Float.greatestFiniteMagnitude) * 100

A better solution would probably be to set a lower ceiling and everything above that ceiling would just be 0% similarity.

(1 - min(distance, 10) / 10) * 100

Here the artificial ceiling would be 10, but it can be any arbitrary number.

Source https://stackoverflow.com/questions/71615277
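The answer's mapping can also be expressed in Python (the function name is ours; the ceiling of 10 is the answer's arbitrary choice):

```python
def similarity_percent(distance, ceiling=10.0):
    """Map a feature-print distance to a similarity percentage.

    A distance of 0 is 100% similar; distances at or above `ceiling`
    count as 0% similar. The ceiling is an arbitrary cut-off, as the
    answer notes.
    """
    return (1.0 - min(distance, ceiling) / ceiling) * 100.0
```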

Community Discussions, Code Snippets contain sources that include Stack Exchange Network

Vulnerabilities

No vulnerabilities reported

Install face.evoLVe.PyTorch

You can download it from GitHub.
You can use face.evoLVe.PyTorch like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.

  • © 2022 Open Weaver Inc.