MobileFaceNet | Paper: MobileFaceNets: Efficient CNNs | Computer Vision library
kandi X-RAY | MobileFaceNet Summary
An AM-Softmax loss implementation of the paper MobileFaceNets: Efficient CNNs for Accurate Real-time Face Verification on Mobile Devices.
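For context, a minimal sketch of the AM-Softmax (additive margin softmax) loss that the repository implements, following the standard formulation; this is not the repository's own code, and the scale s and margin m values below are illustrative defaults:

```python
import torch
import torch.nn.functional as F

def am_softmax_loss(embeddings, weight, labels, s=30.0, m=0.35):
    """AM-Softmax: cross-entropy over scaled cosine logits, with an
    additive margin m subtracted from the target-class logit only."""
    # Cosine similarity between L2-normalized embeddings (N, D)
    # and L2-normalized class weight vectors (D, C).
    cos = F.normalize(embeddings, dim=1) @ F.normalize(weight, dim=0)
    # Subtract the margin from the target class cosine only.
    onehot = F.one_hot(labels, num_classes=cos.size(1)).to(cos.dtype)
    logits = s * (cos - m * onehot)
    return F.cross_entropy(logits, labels)
```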
Trending Discussions on MobileFaceNet
QUESTION
I currently need to use a pretrained model on a specific CUDA device. The pretrained model is defined as below:
...
ANSWER
Answered 2021-Apr-28 at 13:00

You should get the neural network out of DataParallel first. Assuming your DataParallel is named model, you could do:
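A minimal sketch of what the answer describes; the wrapped network and the device index here are illustrative stand-ins, not the asker's actual model:

```python
import torch
import torch.nn as nn

# Illustrative setup: a network wrapped in DataParallel, as in the question.
model = nn.DataParallel(nn.Linear(10, 2))

# DataParallel keeps the wrapped network in its .module attribute.
net = model.module

# Move the unwrapped network to one specific CUDA device
# (falls back to CPU here so the sketch runs anywhere).
device = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cpu")
net = net.to(device)

# Inputs must be moved to the same device before inference:
# output = net(x.to(device))
```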
QUESTION
I am trying to quantize MobileFaceNet (code from sirius-ai) according to the suggestion, and I think I have hit the same issue as this one.

When I add tf.contrib.quantize.create_training_graph() to the training graph (train_nets.py ln.187, before train_op = train(...), or in train() in utils/common.py ln.38, before the gradients), it does not add quantize-aware ops to the graph to collect the dynamic-range max/min.

I assumed I would see some additional nodes in TensorBoard, but I did not, so I believe the quantize-aware ops were not added to the training graph. Tracing through TensorFlow, I found that _FindLayersToQuantize() returned nothing.

However, when I add tf.contrib.quantize.create_eval_graph() to refine the training graph, I can see some quantize-aware ops such as act_quant...

Since I did not add the ops to the training graph successfully, I have no weights to load in the eval graph, so I get an error message:
ANSWER
Answered 2020-Aug-10 at 19:10

Hi,
Unfortunately, the contrib/quantize tool is now deprecated. It won't be able to support newer models, and we are not working on it anymore.
If you are interested in QAT, I would recommend trying the new TF/Keras QAT API. We are actively developing that and providing support for it.
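The API the answer points to lives in the tensorflow_model_optimization package; a minimal sketch of its Keras QAT entry point, using an illustrative placeholder model rather than the MobileFaceNet graph from the question:

```python
import tensorflow as tf
import tensorflow_model_optimization as tfmot

# Illustrative stand-in model; in practice this would be the Keras
# version of the network you want to quantize.
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(8, 3, activation="relu", input_shape=(112, 112, 3)),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

# Wrap the model with fake-quant ops for quantization-aware training.
qat_model = tfmot.quantization.keras.quantize_model(model)

qat_model.compile(
    optimizer="adam",
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
# qat_model.fit(...), then convert with TFLiteConverter for a quantized model.
```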
QUESTION
I am trying to find a solution to run face recognition on an AI camera, and I found that MobileFaceNet (code from sirius-ai) is great as a light model!

I succeeded in converting to TFLite in F32 format with good accuracy. However, I failed when quantizing to uint8 with the following command:
...
ANSWER
Answered 2020-Jul-13 at 07:24

Using tflite_convert requires either --saved_model_dir or --keras_model_file to be defined. When using TF 2.x, you should use --enable_v1_converter if you want to convert to quantized tflite from a frozen graph.
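For reference, a sketch of the equivalent Python-API conversion path under TF 2.x, assuming a frozen graph that already carries fake-quant range information; the file name, input/output node names, and input shape are illustrative assumptions, not values from the sirius-ai graph:

```python
import tensorflow as tf

# The v1 converter handles frozen GraphDefs under TF 2.x.
converter = tf.compat.v1.lite.TFLiteConverter.from_frozen_graph(
    graph_def_file="mobilefacenet_frozen.pb",   # illustrative path
    input_arrays=["input"],                     # illustrative node names
    output_arrays=["embeddings"],
    input_shapes={"input": [1, 112, 112, 3]},
)

# Full uint8 quantization needs per-layer min/max (fake-quant) info in
# the graph; quantized_input_stats maps input name -> (mean, std_dev)
# used to map real-valued inputs onto uint8.
converter.inference_type = tf.uint8
converter.quantized_input_stats = {"input": (127.5, 127.5)}

tflite_model = converter.convert()
with open("mobilefacenet_uint8.tflite", "wb") as f:
    f.write(tflite_model)
```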
EDIT:
What you are currently doing is called "dummy quantization", which can be used to test the inference timings of the quantized network. To properly quantize the network, min/max information of layers should be injected into it with fake quant nodes.
Please see this gist for example code on how to do it. This blog post also has some information on quantization aware training.
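To make the distinction concrete: with the v1 converter, "dummy quantization" corresponds to supplying one global fallback range instead of per-layer fake-quant data. A sketch continuing the converter above; the (0, 6) range is an arbitrary illustrative choice:

```python
# Dummy quantization: a single fallback (min, max) range is applied to
# every tensor that lacks fake-quant information. Useful only for
# measuring inference timings; accuracy will generally be poor.
converter.default_ranges_stats = (0, 6)  # arbitrary illustrative range

# Proper quantization instead requires fake-quant nodes recorded during
# quantization-aware training, so each layer has its own measured range.
tflite_model = converter.convert()
```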
Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.
Vulnerabilities
No vulnerabilities reported