learning-algorithm | Learning data structures and algorithms
kandi X-RAY | learning-algorithm Summary
Learning data structures and algorithms.
Community Discussions
Trending Discussions on learning-algorithm
QUESTION
I am running k-fold repeated training with the caret package and would like to calculate the confidence interval for my accuracy metrics. This tutorial prints a caret training object that shows accuracy/kappa metrics and the associated SD: https://machinelearningmastery.com/tune-machine-learning-algorithms-in-r/. However, when I do this, only the metric average values are listed.
...ANSWER
Answered 2021-Mar-01 at 07:44
It looks like it is stored in the results element of the resulting train object; for example, model$results also holds the SD columns (AccuracySD, KappaSD) alongside the means.
QUESTION
I am currently assigned to build a web scraper that pulls links. I can successfully pull this data:
...ANSWER
Answered 2020-Apr-05 at 04:40
You have to check if the link indeed has the "href" attribute:
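A minimal sketch, assuming the scraper uses requests and BeautifulSoup (the question's own code is truncated above); the URL is a placeholder:

```python
import requests
from bs4 import BeautifulSoup

soup = BeautifulSoup(requests.get("https://example.com").text, "html.parser")
for link in soup.find_all("a"):
    if link.has_attr("href"):   # skip <a> tags that have no href attribute
        print(link["href"])
```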
QUESTION
Suppose I have one or more tiles, each consisting of a single pattern (e.g. materials like wood, concrete, gravel...) that I would like to train my classifier on, and then I'll use the trained classifier to determine which class each pixel in another image belongs to.
Below are examples of two tiles I would like to train the classifier on:
And let's say I want to segment the image below to identify the pixels belonging to the door and those belonging to the wall. It's just an example; I know this image isn't made of exactly the same patterns as the tiles above:
For this specific problem, is it necessary to use convolutional neural networks? Or is there a way to achieve my goal with a shallow neural network or any other classifier, combined with texture features for example?
I've already implemented a classifier with scikit-learn which works on tile pixels individually (see code below, where training_data is a vector of singletons), but I want instead to train the classifier on texture patterns.
ANSWER
Answered 2019-Nov-02 at 19:13
You can use U-Net or SegNet for image segmentation. In essence, you add skip connections between encoder and decoder to your CNN to obtain dense, per-pixel predictions.
About U-Net:
Arxiv: U-Net: Convolutional Networks for Biomedical Image Segmentation
About SegNet:
Arxiv: SegNet: A Deep Convolutional Encoder-Decoder Architecture for Image Segmentation
Here is a simple code example for U-Net (the original answer targeted keras==1.1.0).
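The snippet itself is not reproduced in this excerpt, so below is a minimal U-Net-style sketch written against the modern tf.keras API rather than keras==1.1.0; the input shape, filter counts, and two-class output are illustrative assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def build_unet(input_shape=(128, 128, 3), n_classes=2):
    inputs = layers.Input(shape=input_shape)

    # Encoder: two downsampling blocks
    c1 = layers.Conv2D(16, 3, activation="relu", padding="same")(inputs)
    p1 = layers.MaxPooling2D()(c1)
    c2 = layers.Conv2D(32, 3, activation="relu", padding="same")(p1)
    p2 = layers.MaxPooling2D()(c2)

    # Bottleneck
    b = layers.Conv2D(64, 3, activation="relu", padding="same")(p2)

    # Decoder: upsample and concatenate the matching encoder features
    # (these skip connections are what make it a U-Net)
    u2 = layers.concatenate([layers.UpSampling2D()(b), c2])
    c3 = layers.Conv2D(32, 3, activation="relu", padding="same")(u2)
    u1 = layers.concatenate([layers.UpSampling2D()(c3), c1])
    c4 = layers.Conv2D(16, 3, activation="relu", padding="same")(u1)

    # A 1x1 convolution gives per-pixel class probabilities
    outputs = layers.Conv2D(n_classes, 1, activation="softmax")(c4)
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Training then amounts to feeding image/mask pairs to model.fit, where each mask labels every pixel with its texture class.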
QUESTION
While building a new neural network I seem unable to split the data. For some unknown reason it won't import train_test_split:
...ImportError: cannot import name 'cross_validation'
ANSWER
Answered 2018-Oct-11 at 09:23
The sklearn.cross_validation module was removed in scikit-learn 0.20. Its deprecation notice read:
Deprecated since version 0.18: This module will be removed in 0.20. Use sklearn.model_selection.cross_val_score instead.
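A minimal sketch of the replacement import for scikit-learn >= 0.20, with the iris data standing in for the question's (unshown) dataset:

```python
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
# what used to be sklearn.cross_validation.train_test_split:
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)
```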
QUESTION
What I need is to:
- Apply a logistic regression classifier
- Report the per-class ROC using the AUC.
- Use the estimated probabilities of the logistic regression to guide the construction of the ROC.
- 5-fold cross-validation for training your model.
For this, my approach was to use this really nice tutorial:
Starting from that idea and method, I simply changed how I obtain the raw data, which I am getting like this:
...ANSWER
Answered 2019-May-03 at 11:17
The iris dataset is usually ordered with respect to classes. Hence, when you split without shuffling, the test dataset might get only one class. One simple solution is to use the shuffle parameter:
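A minimal sketch of one way to put this together in scikit-learn; the iris data and the one-vs-rest per-class AUC are assumptions based on the question:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import StratifiedKFold, cross_val_predict
from sklearn.preprocessing import label_binarize

X, y = load_iris(return_X_y=True)

# shuffle=True keeps any single fold from holding only one class
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
probs = cross_val_predict(LogisticRegression(max_iter=1000), X, y,
                          cv=cv, method="predict_proba")

# per-class ROC AUC, guided by the estimated probabilities
y_bin = label_binarize(y, classes=[0, 1, 2])
for k in range(3):
    print(f"class {k}: AUC = {roc_auc_score(y_bin[:, k], probs[:, k]):.3f}")
```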
QUESTION
I tried to follow the example code at https://machinelearningmastery.com/tune-machine-learning-algorithms-in-r/ but my output does not show the accuracy and kappa SD. What am I missing? I am running R 3.5.2 on Windows 10 Pro.
My output was:
...ANSWER
Answered 2019-Jan-29 at 12:06
The tutorial does not specify how the output with SDs was obtained. It actually wasn't just rf_default. Instead,
QUESTION
I am looking into the time complexities of machine learning algorithms and I cannot find the time complexity of logistic regression for predicting a new input. I have read that for classification it is O(c*d), c being the number of classes and d the number of dimensions, and I know that for linear regression the search/prediction time complexity is O(d). Could you explain the search/prediction time complexity of logistic regression? Thank you in advance.
Example For The other Machine Learning Problems: https://www.thekerneltrip.com/machine/learning/computational-complexity-learning-algorithms/
...ANSWER
Answered 2019-Jan-17 at 16:19
Complexity of training for the whole dataset: O((f+1)csE)
- f - number of features (+1 because of bias). Multiplying each feature by its weight takes f operations (+1 for the bias). Another f+1 operations for summing all of them (obtaining the prediction). Using a gradient method to improve the weights counts for the same number of operations, so in total we get 4(f+1) (two for the forward pass, two for the backward pass), which is simply O(f+1).
- c - number of classes (possible outputs) in your logistic regression. For binary classification it is one, so this term cancels out. Each class has its own corresponding set of weights.
- s - number of samples in your dataset; this one is quite intuitive.
- E - number of epochs you are willing to run gradient descent for (whole passes through the dataset).
Note: this complexity can change with things like regularization (another c operations), but the idea behind it goes like this.
Complexity of prediction for one sample: O((f+1)c)
- f + 1 - you simply multiply each weight by the value of its feature, add the bias, and sum it all together at the end.
- c - you do it for every class; 1 for binary predictions.
Complexity of prediction for the whole dataset: O((f+1)cs)
- (f+1)c - see the complexity for one sample.
- s - number of samples.
For multiclass logistic regression the output activation is softmax, while linear regression, as the name suggests, has a linear activation (effectively no activation). This does not change the complexity in big-O notation, but it adds another c*f operations during training (not included above to avoid clutter), multiplied by 2 for backprop.
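As a quick illustration of the prediction cost, here is a toy numpy sketch; the function name and shapes are my own, not from the question:

```python
import numpy as np

def predict(X, W, b):
    """X: (s, f) samples, W: (f, c) weights, b: (c,) biases.

    The matrix product does about f multiply-adds per class per sample,
    plus the bias: O((f+1)c) per sample, O((f+1)cs) for the whole dataset.
    """
    logits = X @ W + b
    # softmax over the c classes (reduces to a sigmoid when c == 1)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

probs = predict(np.random.rand(10, 4), np.random.rand(4, 3), np.zeros(3))  # s=10, f=4, c=3
```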
QUESTION
1) According to http://docs.aws.amazon.com/machine-learning/latest/dg/learning-algorithm.html, Amazon ML uses SGD. However, I can't find how many hidden layers are used in the neural network.
2) Can someone confirm that SageMaker can do what Amazon ML does, i.e. that SageMaker is more powerful than Amazon ML?
...ANSWER
Answered 2017-Dec-06 at 15:55
I'm not sure about Amazon ML, but SageMaker uses the Docker containers listed here for the built-in training: https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html
So, in general, anything you can do with Amazon ML you should be able to do with SageMaker (although Amazon ML has a pretty sweet schema editor).
You can check out each of those containers to dive deep on how it all works.
You can find an exhaustive list of available algorithms in SageMaker here: https://docs.aws.amazon.com/sagemaker/latest/dg/algos.html
For now, as of December 2017, these algorithms are all available:
- Linear Learner
- Factorization Machines
- XGBoost Algorithm
- Image Classification Algorithm
- Amazon SageMaker Sequence2Sequence
- K-Means Algorithm
- Principal Component Analysis (PCA)
- Latent Dirichlet Allocation (LDA)
- Neural Topic Model (NTM)
The general SageMaker SDK interface to these algorithms looks something like this:
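The snippet itself is not included in this excerpt; below is a hedged sketch of the generic Estimator pattern using the current SageMaker Python SDK (the 2017-era SDK used slightly different parameter names such as train_instance_count). The image URI, role ARN, and bucket are placeholders:

```python
import sagemaker
from sagemaker.estimator import Estimator

session = sagemaker.Session()
estimator = Estimator(
    image_uri="<algorithm-container-registry-path>",  # e.g. a Linear Learner image
    role="<your-sagemaker-execution-role-arn>",
    instance_count=1,
    instance_type="ml.m5.xlarge",
    output_path="s3://<your-bucket>/output",
    sagemaker_session=session,
)
# Hyperparameters are algorithm-specific; this one belongs to Linear Learner.
estimator.set_hyperparameters(predictor_type="binary_classifier")
estimator.fit({"train": "s3://<your-bucket>/train"})  # channel name -> S3 input
```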
QUESTION
I'm learning random forests. For learning purposes I'm using the following link: random Forest. I'm trying to run the code given in this link using R 3.4.1. But while running the following code for missing-value treatment
...ANSWER
Answered 2017-Sep-14 at 15:06
The key mistake (among many mistakes) in that code was that there is no data parameter. The parameter name is obj. When I change that, the example code runs.
You also need to set on= or use setkey, given that the object is a data.table, or simply change it to a data.frame for the imputation step:
QUESTION
I want to extract both the memorability score and memorability heat maps using the available memnet caffemodel by Khosla et al. at link. Looking at the prototxt model, I understand that the final inner-product output should be the memorability score, but how should I obtain the memorability map for a given input image? Here are some examples.
Thanks in advance
...ANSWER
Answered 2017-Jan-23 at 17:36
As described in their paper [1], the CNN (MemNet) produces a single, real-valued output for the memorability. So the network they made publicly available calculates this single memorability score for a given input image, not a heatmap.
In section 5 of the paper, they describe how to use this trained CNN to predict a memorability heatmap:
To generate memorability maps, we simply scale up the image and apply MemNet to overlapping regions of the image. We do this for multiple scales of the image and average the resulting memorability maps.
Let's consider the two important steps here:
Problem 1: Make the CNN work with any input size.
To make the CNN work on images of arbitrary size, they use the method presented in [2]. While convolutional layers can be applied to images of arbitrary size, resulting in smaller or larger outputs, the inner-product layers have a fixed input and output size. To make an inner-product layer work with any input size, you apply it just like a convolutional kernel. For an FC layer with 4096 outputs, you interpret it as a 1x1 convolution with 4096 feature maps.
To do that in Caffe, you can directly follow the Net Surgery tutorial. You create a new .prototxt file in which you replace the InnerProduct layers with Convolution layers. Now, Caffe won't recognize the weights in the .caffemodel anymore, as the layer types no longer match. So you load the old net and its parameters into Python, load the new net, assign the old parameters to the new net, and save it as a new .caffemodel file.
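Here is a condensed sketch of that procedure, adapted from the Caffe Net Surgery tutorial; the file names and the fc/conv layer-name pairs are assumptions about the MemNet prototxt, not confirmed values:

```python
import caffe

# Original net (InnerProduct layers) and the rewritten fully convolutional net.
net = caffe.Net("memnet_deploy.prototxt", "memnet.caffemodel", caffe.TEST)
net_fc = caffe.Net("memnet_fullconv.prototxt", "memnet.caffemodel", caffe.TEST)

# Copy each InnerProduct layer's parameters into its Convolution twin.
for fc, conv in [("fc6", "fc6-conv"), ("fc7", "fc7-conv"), ("fc8", "fc8-conv")]:
    net_fc.params[conv][0].data.flat = net.params[fc][0].data.flat  # weights
    net_fc.params[conv][1].data[...] = net.params[fc][1].data       # biases

net_fc.save("memnet_fullconv.caffemodel")
```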
Now, we can run images of any dimensions (larger or equal than 227x227) through the network.
Problem 2: Generate the heat map
As explained in the paper [1], you apply the fully convolutional network from Problem 1 to the same image at different scales. MemNet is a re-trained AlexNet, so the default input size is 227x227. They mention that a 451x451 input gives an 8x8 output, which implies a stride of 28 for applying the layers. So a simple example could be:
- Scale 1: 227x227 → 1x1. (I guess they definitely use this scale.)
- Scale 2: 283x283 → 2x2. (Wild guess)
- Scale 3: 339x339 → 4x4. (Wild guess)
- Scale 4: 451x451 → 8x8. (This scale is mentioned in the paper.)
You then simply average these outputs to get your final 8x8 heatmap. To average the different-scale outputs, upsample the low-resolution ones to 8x8 and average them.
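A small numpy sketch of that averaging step, using placeholder per-scale outputs matching the guesses above:

```python
import numpy as np
from scipy.ndimage import zoom

# Placeholder per-scale outputs: 1x1, 2x2, 4x4 and 8x8 memorability maps.
maps = [np.random.rand(n, n) for n in (1, 2, 4, 8)]

target = 8
# Bilinearly upsample every map to the finest grid, then average them.
upsampled = [zoom(m, target / m.shape[0], order=1) for m in maps]
heatmap = np.mean(upsampled, axis=0)  # final 8x8 heatmap
```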
From the paper, I assume that they use very high-resolution scales, so their heatmap ends up about the same size as the original image. They write that it takes 1s on a "normal" GPU. This is quite a long time, which also suggests that they upsample the input images to quite high dimensions.
Bibliography:
[1]: A. Khosla, A. S. Raju, A. Torralba, and A. Oliva, "Understanding and Predicting Image Memorability at a Large Scale", in: ICCV, 2015. [PDF]
[2]: J. Long, E. Shelhamer, and T. Darrell, "Fully Convolutional Networks for Semantic Segmentation", in: CVPR, 2015. [PDF]
Community Discussions and Code Snippets contain sources from the Stack Exchange Network.