DeepLearning | For Deeplearning Study | Machine Learning library
kandi X-RAY | DeepLearning Summary
For Deeplearning Study
Community Discussions
Trending Discussions on DeepLearning
QUESTION
When I run the program below, it gives me an error. The problem seems to be in the loss function, but I can't find it. I have read the PyTorch documentation for nn.CrossEntropyLoss but still can't find the problem.
Image size is (1 x 256 x 256), Batch size is 1
I am new to PyTorch, thanks.
ANSWER
Answered 2021-Jun-05 at 03:06
Try
loss = compute_loss(y_hat, torch.tensor([0]))
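For context, a minimal sketch of the shape and dtype contract that makes this fix work (the sizes here are illustrative; nn.CrossEntropyLoss expects class indices, not one-hot vectors):

import torch
import torch.nn as nn

compute_loss = nn.CrossEntropyLoss()

y_hat = torch.randn(1, 10)        # logits: (batch, classes); batch size 1 as in the question
target = torch.tensor([0])        # targets: 1-D long tensor of class indices, shape (batch,)

loss = compute_loss(y_hat, target)
print(loss.item())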
QUESTION
I am trying to tune the hyperparameters in mlr using the tuneParams function. However, I can't make sense of the results it is giving me (or else I'm using it incorrectly).

For example, if I create some data with a binary response, create an mlr h2o classification model, and then check the accuracy and AUC, I will get some values. Then, if I use tuneParams on some parameters, find a better accuracy and AUC, and plug them into my model, the resulting accuracy and AUC (for the model) do not match those found by tuneParams.
Hopefully the code below will illustrate my issue:
ANSWER
Answered 2021-May-27 at 15:33
You're getting different results because you're evaluating the learner using different train and test data. If I use the same 3-fold CV, I get the same results:
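As a cross-language illustration of that principle (a scikit-learn sketch of my own, not the answer's mlr code): tuned scores are reproducible only when re-evaluation reuses the same fixed folds.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
folds = KFold(n_splits=3, shuffle=True, random_state=0)   # one fixed 3-fold split

# Tune C over the fixed folds.
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.1, 1.0, 10.0]}, cv=folds, scoring="accuracy")
search.fit(X, y)

# Re-evaluating the best C on the SAME folds reproduces the tuner's score;
# a fresh train/test split generally would not.
best = LogisticRegression(C=search.best_params_["C"], max_iter=1000)
rescore = cross_val_score(best, X, y, cv=folds, scoring="accuracy").mean()
print(search.best_score_, rescore)  # equal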
QUESTION
I'm having an issue building a docker image from a dockerfile that used to work:
(My dockerfile has more steps, but this is enough to reproduce)
ANSWER
Answered 2021-May-20 at 14:13
This is a known issue. Read this for more info.
You can first add the correct repository GPG key using the following command.
QUESTION
The following error(s) and solution concern deploying a stack through YAML in Portainer, but they can surely be applied to Docker more generally.
Environment:
ANSWER
Answered 2021-Apr-13 at 05:55
It seems that by default, the size of the shared memory is limited to 64 MB. The solution to this error, therefore, as shown in this issue, is to increase the size of the shared memory.

Hence, the first idea that comes to mind would be simply defining something like shm_size: 9gb in the YAML file of the stack. However, this might not work, as shown e.g. in this issue.

Therefore, in the end, I had to use the following workaround (also described here, but poorly documented):
QUESTION
I'm using MATLAB to predict a trend with a machine learning approach.
My data file is an .xlsx file containing a timeline in one column (various sampling timestamps, i.e. numbers that represent seconds), and in the other columns I have some integers representing my trend.
My .xlsx file is pretty much like this:
ANSWER
Answered 2021-Apr-03 at 20:46
I would distinguish the forecasting problem from the data sampling time problem. You are dealing substantially with missing data.
Forecasting problem: You may use any machine learning technique and simply ignore the missing data. If you are not familiar with machine learning, I would suggest using LASSO (least absolute shrinkage and selection operator), which has been demonstrated to have predictive power (see "Sparse Signals in the Cross-Section of Returns" by Alex Chinco, Adam D. Clark-Joseph, and Mao Ye).
Missing-data imputation problem: In the first place you should consider the reason why you have missing data. Sometimes it makes no sense to impute values, because the fact that a value is missing is itself important information and should not be overridden. Otherwise you have multiple options, other than linear interpolation, to estimate the missing values. For example, check the MATLAB function fillmissing.
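As a rough Python analog of that suggestion (my own sketch; the answer itself only mentions MATLAB's fillmissing), pandas offers comparable imputation options:

import numpy as np
import pandas as pd

# Hypothetical trend sampled at irregular timestamps, with gaps.
s = pd.Series([1.0, np.nan, 3.0, np.nan, np.nan, 6.0],
              index=[0, 1, 2, 4, 5, 8])  # index in seconds

print(s.interpolate(method="index"))   # linear in time, like fillmissing(...,'linear')
print(s.ffill())                       # carry the last observation forward instead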
QUESTION
I have just begun to learn PyTorch and created my first CNN. The dataset contains 3360 RGB images and I converted them to a [3360, 3, 224, 224] tensor. The data and labels are in a dataset (torch.utils.data.TensorDataset). Below is the training code.
ANSWER
Answered 2021-Apr-03 at 14:34
That error actually refers to the weights of the conv layer, which are float32 by default when the matrix multiplication is called. Your input is double (float64 in PyTorch), while the weights in conv are float. So the solution in your case is:
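A minimal sketch of this dtype mismatch and the two usual fixes (the layer and tensor here are illustrative, not the asker's model):

import torch
import torch.nn as nn

conv = nn.Conv2d(3, 16, kernel_size=3)                 # conv weights are float32 by default
x = torch.randn(1, 3, 224, 224, dtype=torch.float64)   # double input triggers the dtype error

out = conv(x.float())     # usual fix: cast the input down to float32
out = conv.double()(x)    # alternative: cast the layer up to float64 (slower)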
QUESTION
My docker run is failing because git complains that I didn't set a user config, which I never needed for my older images.
ANSWER
Answered 2021-Mar-11 at 11:15
I didn't find why the error occurred, but I found a solution to remove it. Instead of cloning master and then pulling the branch, I directly clone the branch I want to use.

The cloning line is now:
QUESTION
Can GCP VMs run while I am offline? I am using a GCP Deeplearning notebook VM with a GPU to train a neural network. When I close the Jupyter notebook tab, the code stops executing while the instance is still alive and I get billed. Is there a way to run the code while I am offline? I think this must be possible.
ANSWER
Answered 2021-Mar-07 at 00:24
Thanks to everybody who commented on this question.

You can run Python scripts in a GCP Deeplearning notebook VM in the background through nohup.
QUESTION
I am using the files from a video tutorial. At the beginning, it distributes the input image data by copying the files into various folders. The code works in the tutorial, but I wonder why I get the following error:

[Errno 22] Invalid argument: 'D:\Machine Learning\Deep Learning\SRU-deeplearning-workshop-master\catdogKaggle\train\cat.1.jpg'

Here is the code. First it creates the directories (catdogKaggle\train contains the input images):
ANSWER
Answered 2021-Mar-03 at 16:29
You are on Windows, which is why you need to escape the backslashes or use raw strings to store file paths, i.e.:
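For illustration, both fixes applied to the path from the error message, plus pathlib as a third option not mentioned in the answer:

from pathlib import Path

# Option 1: escape each backslash.
p1 = 'D:\\Machine Learning\\Deep Learning\\SRU-deeplearning-workshop-master\\catdogKaggle\\train\\cat.1.jpg'

# Option 2: a raw string leaves backslashes untouched.
p2 = r'D:\Machine Learning\Deep Learning\SRU-deeplearning-workshop-master\catdogKaggle\train\cat.1.jpg'

# Option 3 (extra): let pathlib handle separators portably.
p3 = Path('D:/Machine Learning/Deep Learning/SRU-deeplearning-workshop-master/catdogKaggle/train/cat.1.jpg')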
QUESTION
I am using the H2O R package.

My understanding is that this package requires you to have an internet connection as well as a connection to the h2o servers? If you use the h2o package to run machine learning models on your data, does h2o "see" your data? I turned off my wifi and tried running some machine learning models using h2o:
ANSWER
Answered 2021-Feb-21 at 09:35
From the documentation of h2o.init() (emphasis mine):

This method first checks if H2O is connectible. If it cannot connect and startH2O = TRUE with IP of localhost, it will attempt to start an instance of H2O with IP = localhost, port = 54321. Otherwise, it stops immediately with an error. When initializing H2O locally, this method searches for h2o.jar in the R library resources [...], and if the file does not exist, it will automatically attempt to download the correct version from Amazon S3. The user must have Internet access for this process to be successful. Once connected, the method checks to see if the local H2O R package version matches the version of H2O running on the server. If there is a mismatch and the user indicates she wishes to upgrade, it will remove the local H2O R package and download/install the H2O R package from the server.

So, h2o.init() with the default setting ip = "127.0.0.1", as here, connects the R session with the H2O instance (sometimes referred to as the "server") on your local machine. If all the necessary package files are in place and up to date, no internet connection is necessary; the package will attempt to connect to the internet only to download stuff in case something is not present or up to date. No data is uploaded anywhere.
Community Discussions, Code Snippets contain sources that include Stack Exchange Network
Vulnerabilities
No vulnerabilities reported