caffe-c | c to use your caffemodel | Machine Learning library
kandi X-RAY | caffe-c Summary
c++ to use your caffemodel
Community Discussions
Trending Discussions on caffe-c
QUESTION
I'm trying to use Caffe in my C++ project which I compile with CMakeLists.txt, but it doesn't want to work. My only line in the code is
...ANSWER
Answered 2021-Jan-08 at 20:26
Variables like Caffe_DIR should usually not be set manually in CMake code. There are better alternatives, and setting these variables won't necessarily do what you want: it won't change where find_package finds its libraries.
The CaffeConfig.cmake
file is generated when building Caffe. You should never download another one, these files are compatible only with a specific build configuration.
The Caffe library supports being found through CMake, so a custom FindCaffe.cmake is unnecessary.
For find_package to work, either set the _ROOT variable (requires CMake 3.12 or newer) or append the install path to CMAKE_PREFIX_PATH. Here's a CMake example that uses the prefix path:
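The example itself was not preserved on this page. A minimal sketch of what such a CMakeLists.txt might look like; the install prefix /opt/caffe and the target name `main` are assumptions, not from the original answer:

```cmake
cmake_minimum_required(VERSION 3.10)
project(caffe_example CXX)

# Assumption: Caffe was built and installed to /opt/caffe; adjust to your prefix.
list(APPEND CMAKE_PREFIX_PATH "/opt/caffe")

# Picks up the CaffeConfig.cmake generated by the Caffe build.
find_package(Caffe REQUIRED)

add_executable(main main.cpp)
target_include_directories(main PRIVATE ${Caffe_INCLUDE_DIRS})
target_link_libraries(main ${Caffe_LIBRARIES})
```

Prepending the prefix path (or passing -DCMAKE_PREFIX_PATH=/opt/caffe on the command line) is what lets find_package locate the generated config file without setting Caffe_DIR by hand.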
QUESTION
I want to load a Neural Network that has been trained with caffe for image classification.
The NN contains a file mean.binaryproto
which has the means to be subtracted before inputting an image to be classified.
I am trying to understand what is contained in this file so I used Google Colab to see what is inside it.
The code to load it is the following:
...ANSWER
Answered 2018-Nov-07 at 21:25
However I was expecting a single value per channel; instead I found a 256x256 array: does it mean that they took a mean on each pixel of each channel?
Exactly. According to the shape of mean.binaryproto, this file is the average image of some dataset: it holds the mean of each pixel (feature) for each channel.
This should not be confused with the mean pixel, which, as you stated, is a single value for each channel.
For example, the mean pixel was adopted by Very Deep Convolutional Networks for Large-Scale Image Recognition. According to their paper:
The only pre-processing we do is subtracting the mean RGB value, computed on the training set, from each pixel
In other words, if you consider an RGB image to be 3 feature arrays of size N x N, the average image will be the mean of each feature and the mean pixel will be the mean of all features.
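The distinction can be sketched with plain NumPy; the 10-image 4x4 "dataset" below is a made-up stand-in for real data:

```python
import numpy as np

# Hypothetical "dataset": 10 RGB images of size 4x4, channel-first like Caffe.
rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(10, 3, 4, 4)).astype(np.float64)

# Average image: mean over the dataset axis only -> one full array per channel,
# i.e. the per-pixel mean that a 3x256x256 mean.binaryproto stores.
mean_image = images.mean(axis=0)            # shape (3, 4, 4)

# Mean pixel: additionally average over the spatial axes -> one value per
# channel, the scheme VGG describes ("subtracting the mean RGB value").
mean_pixel = images.mean(axis=(0, 2, 3))    # shape (3,)

# The mean pixel is just the spatial average of the average image.
assert np.allclose(mean_pixel, mean_image.mean(axis=(1, 2)))
```

Subtracting mean_image removes a per-location offset; subtracting mean_pixel removes only a per-channel offset.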
Another question is the following: I want to use such NN with OpenCV which instead of RGB uses BGR: How to know if the mean 3x256x256 uses RGB or BGR?
I doubt the binary file you are reading stores any information about its color format, but a practical way to find out is to plot the image using matplotlib and see if the colors make sense. For example, with face images: if the red and blue channels are swapped, the skin tone will look blueish.
You could also assume it is BGR since OpenCV uses this color format.
However, the correct way to find out how this mean.binaryproto was generated is by looking at the model's repository or by asking the owner of the model.
QUESTION
I installed Caffe-cpu on my Ubuntu 18.04 via the apt-get command, as it instructs on their official website:
...ANSWER
Answered 2019-Sep-18 at 19:38
sudo apt install libcaffe-cpu-dev
QUESTION
I'm getting a caffe import error even after installing it successfully using the command sudo apt install caffe-cpu. I was able to find the caffe files at /usr/lib/python3/dist-packages/caffe (the path was added to PYTHONPATH). All requirements mentioned in the requirements.txt file of the caffe directory were also installed.
I'm using Ubuntu 18.04 LTS, Python3.
Could anyone help me with this error?
...ANSWER
Answered 2019-Feb-10 at 10:00
Problem solved: the error came up because the caffe build wasn't done successfully. I recommend not going with the sudo apt install caffe-cpu command (which is mentioned in the official caffe installation guide for Ubuntu), because it will end up in the error above. It's better to install from source.
Let me give step by step guidance to install caffe successfully in Ubuntu 18.04 LTS:
1] sudo apt-get install -y --no-install-recommends libboost-all-dev
2] sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler
3] git clone https://github.com/BVLC/caffe
cd caffe
cp Makefile.config.example Makefile.config
4] sudo pip install scikit-image protobuf
cd python
for req in $(cat requirements.txt); do sudo pip install $req; done
5] Modify the Makefile.config file:
Uncomment the line CPU_ONLY := 1 and the line OPENCV_VERSION := 3.
6] Find the LIBRARIES line in the Makefile and change it as follows:
LIBRARIES += glog gflags protobuf boost_system boost_filesystem m hdf5_hl hdf5 \
opencv_core opencv_highgui opencv_imgproc opencv_imgcodecs
7] make all
Now you could get some error like this:
CXX src/caffe/net.cpp
src/caffe/net.cpp:8:18: fatal error: hdf5.h: No such file or directory
compilation terminated.
Makefile:575: recipe for target '.build_release/src/caffe/net.o' failed
make: *** [.build_release/src/caffe/net.o] Error 1
To solve this error follow step 8.
8] install libhdf5-dev
open Makefile.config, locate the line containing LIBRARY_DIRS and append /usr/lib/x86_64-linux-gnu/hdf5/serial
locate INCLUDE_DIRS
and append /usr/include/hdf5/serial/
(per this SO answer)
rerun make all
9] make test
10] make runtest
11] make pycaffe
Now you could get some error like this:
CXX/LD -o python/caffe/_caffe.so python/caffe/_caffe.cpp
python/caffe/_caffe.cpp:10:31: fatal error: numpy/arrayobject.h: No such file or directory
compilation terminated.
Makefile:501: recipe for target 'python/caffe/_caffe.so' failed
make: *** [python/caffe/_caffe.so] Error 1
To solve this error follow step 12.
12] Find the PYTHON_INCLUDE line in Makefile.config and make the following changes:
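The change being referred to here is typically pointing PYTHON_INCLUDE at numpy's headers so that numpy/arrayobject.h can be found. A sketch assuming system Python 2.7 with numpy installed under dist-packages; the exact paths vary by setup:

```makefile
# Makefile.config -- assumes system Python 2.7; adjust paths for your installation.
PYTHON_INCLUDE := /usr/include/python2.7 \
                  /usr/lib/python2.7/dist-packages/numpy/core/include
```

If numpy was installed with pip instead of apt, its include directory may live elsewhere; `python -c "import numpy; print(numpy.get_include())"` reports the right path.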
QUESTION
I'm installing caffe-cpu and anaconda on Ubuntu 18.04 LTS version.
Anyway, I success to install Anaconda on my system, but I'm getting in trouble to install caffe.
I found many pages, such as YouTube videos, but they weren't helpful, so I read the official installation manual page many times (I think this is the official page). On this page,
...ANSWER
Answered 2019-Feb-01 at 16:58
I was able to get it working by following these steps:
Get the caffe source from here (https://github.com/BVLC/caffe.git)
Install CUDA if you need GPU support (https://developer.nvidia.com/cuda-downloads?target_os=Linux&target_arch=x86_64&target_distro=Ubuntu&target_version=1804)
Install CUDNN if you need GPU support (https://developer.nvidia.com/rdp/cudnn-download)
Replace the existing Makefile.config with this one (https://gist.github.com/GPrathap/1f9d184c55779509860b8bf92cea416d). Here I have configured it for CUDA 9.2; if you have a different version, search for 9.2 and change it to the version you have installed. Also, please recheck all the paths declared in Makefile.config.
You may type make all, followed by make test, then make distribute to create the final lib and include directory of caffe, which can be found in caffe/distribute. If you are using a CMake project, add where to find caffe as below:
QUESTION
Ubuntu 18.04
Python 2.7
My issue is I am unable to import caffe module in python even though I have installed it. I believe it is a path / env variable issue.
...ANSWER
Answered 2019-Mar-11 at 23:47
When you install caffe on Ubuntu using sudo apt install caffe-cpu, it compiles the bindings for Python 3 only (_caffe.cpython-36m-x86_64-linux-gnu.so), located at /usr/lib/python3/dist-packages/caffe/. So the short answer is to use Python 3 instead.
The long answer is to compile caffe with python 2 bindings from source.
QUESTION
Mean average precision computed at k (for the top-k elements in the answer), according to wiki, ml metrics at kaggle, and this answer: Confusion about (Mean) Average Precision, should be computed as the mean of average precisions at k, where average precision at k is computed as:
AP@k = (1 / min(k, R)) * sum over i = 1..k of P(i) * rel(i)
where P(i) is the precision at cut-off i in the list, rel(i) is an indicator function equaling 1 if the item at rank i is a relevant document and zero otherwise, and R is the number of relevant documents.
The divisor min(k, R) is the maximum possible number of relevant entries in the answer.
Is this understanding correct?
Is MAP@k always less than MAP computed for all ranked list?
My concern is that this is not how MAP@k is computed in many works. Typically, the divisor is not min(k, number of relevant documents) but the number of relevant documents in the top-k. This approach gives a higher value of MAP@k.
"HashNet: Deep Learning to Hash by Continuation" (ICCV 2017)
Code: https://github.com/thuml/HashNet/blob/master/pytorch/src/test.py#L42-L51
...ANSWER
Answered 2019-Mar-03 at 12:08
You are completely right, and well done for finding this. Given the similarity of the code, my guess is there is one source bug, and then paper after paper copied the bad implementation without examining it closely.
The "akturtle" issue raiser is completely right too, I was going to give the same example. I'm not sure if "kunhe" understood the argument, of course recall matters when computing average precision.
Yes, the bug should inflate the numbers. I just hope that the ranking lists are long enough and that the methods are reasonable enough such that they achieve 100% recall in the ranked list, in which case the bug would not affect the results.
Unfortunately it's hard for reviewers to catch this, as one typically doesn't review the code of papers. It's worth contacting the authors to get them to update the code and their papers with correct numbers, or at least to stop repeating the mistake in future work. If you are planning to write a paper comparing different methods, you could point out the problem and report the correct numbers (as well as, potentially, the buggy ones, for apples-to-apples comparisons).
To answer your side-question:
Is MAP@k always less than MAP computed for all ranked list?
Not necessarily: MAP@k is essentially computing MAP while normalizing for the case where you can't do any better given just k retrievals. For example, consider a returned ranked list with relevances 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1, and assume there are 6 relevant documents in total. MAP should be slightly higher than 50% here, while MAP@3 = 100%, because you can't do any better than retrieving 1 1 1. But this is unrelated to the bug you discovered: with their bug, the computed MAP@k is guaranteed to be at least as large as the true MAP@k.
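The two divisors can be contrasted in a few lines of Python; ap_at_k and its arguments are hypothetical names for this sketch, not taken from any of the repositories discussed:

```python
def ap_at_k(relevances, k, n_relevant, buggy=False):
    """Average precision at k over a binary relevance list.

    buggy=False divides by min(k, n_relevant), the correct definition;
    buggy=True divides by the number of relevant items found in the top-k,
    reproducing the inflated variant described in the question.
    """
    hits = 0
    score = 0.0
    for i, rel in enumerate(relevances[:k], start=1):
        if rel:
            hits += 1
            score += hits / i  # precision at cut-off i, counted at relevant ranks
    divisor = hits if buggy else min(k, n_relevant)
    return score / divisor if divisor else 0.0

# The example from the answer: 6 relevant documents in total.
ranked = [1, 1, 1] + [0] * 14 + [1, 1, 1]
print(ap_at_k(ranked, 3, 6))               # 1.0 -- can't do better than 1 1 1
print(ap_at_k(ranked, 10, 6))              # 0.5 with the correct divisor
print(ap_at_k(ranked, 10, 6, buggy=True))  # 1.0 -- the bug inflates the score
```

At k = 10 only 3 of the 6 relevant documents can appear in the top-k positions actually retrieved here, so dividing by the retrieved-relevant count (3) instead of min(10, 6) = 6 doubles the reported score.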
QUESTION
I'm trying to run a Jupyter notebook that uses Caffe. Caffe is not included in datalab. I am trying to install that library from within the Jupyter notebook (as recommended in the datalab docs), but am running into problems.
I am new to datalab, and a novice with such things generally. Any advice would be very much appreciated.
The datalab documentation suggests 3 strategies for adding a python library that is not already included. I am concentrating on the first two of these strategies.
The platform for my datacloud instance is:
platform.platform() 'Linux-4.4.111+-x86_64-with-debian-stretch-sid'
Below I'll list various things I've tried and the error messages I got. For the first strategy, I tried these things in a cell of the same notebook.
(Attempt 1)
...ANSWER
Answered 2019-Jan-09 at 18:13
I tried on my end to install caffe-cpu, and it seems that the file /etc/apt/sources.list doesn't have the needed repositories to install it in the datalab instance. To work around this issue, I used the following commands in a created notebook:
QUESTION
I have trained a model with the caffe tools under bin, and now I am trying to do testing using a python script. I read in an image and preprocess it myself (as I did for my training dataset), and I load the pretrained weights into the net, but I almost always (99.99% of the time) receive the same result, 0, for every test image. I did consider that my model might be overfitting, but after training a few models I have come to realize the labels I get from predictions are most likely the cause. I have also increased dropout and taken random crops to overcome overfitting, and I have about 60K training samples. The dataset is also roughly balanced. I get between 77 and 87% accuracy during the evaluation step of training (depending on how I process data, what architecture I use, etc.).
Excuse my super hacky code, I have been distant to caffe testing for some time so I suspect the problem is how I pass the input data to the network, but I can't put my finger on it:
...ANSWER
Answered 2018-Oct-09 at 13:32
I eventually fixed this problem. I am not 100% sure what worked, but it was most likely changing the bias to 0 while learning.
QUESTION
I have trained a GoogLeNet on Caffe and now I want to do testing, so I use a deploy.prototxt and the pretrained weights and assign them to a Net. But I receive this error (interestingly, after a message that says the network is initialized):
...ANSWER
Answered 2018-Sep-29 at 22:55
If anyone has been wondering: it turns out I had trained the model with a different version of caffe and was trying to test with another. I have two versions installed on my computer, and it seems I was simply importing the older one (the one defined in LD_LIBRARY_PATH) during testing with the python script; for training I had directly referenced and used the caffe tools under build. The difference between the versions is not too dramatic, but it seems there was a mismatch while reading the prototxt.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported