Deep-Learning-Papers-Reading-Roadmap | Deep Learning papers reading roadmap for anyone | Machine Learning library

by floodsung | Python | Version: Current | License: No License

kandi X-RAY | Deep-Learning-Papers-Reading-Roadmap Summary

Deep-Learning-Papers-Reading-Roadmap is a Python library typically used in Institutions, Learning, Education, Artificial Intelligence, Machine Learning, Deep Learning, Pytorch, Tensorflow applications. It has no bugs or reported vulnerabilities, a build file is available, and it has medium support. You can download it from GitHub.
If you are a newcomer to the Deep Learning area, the first question you may have is "Which paper should I start reading from?".

Support

• Deep-Learning-Papers-Reading-Roadmap has a medium active ecosystem.
• It has 34915 star(s) with 7201 fork(s). There are 2135 watchers for this library.
• It had no major release in the last 6 months.
• There are 49 open issues and 4 have been closed. On average issues are closed in 0 days. There are 43 open pull requests and 0 closed requests.
• It has a neutral sentiment in the developer community.
• The latest version of Deep-Learning-Papers-Reading-Roadmap is current.

Quality

• Deep-Learning-Papers-Reading-Roadmap has 0 bugs and 0 code smells.

Security

• Deep-Learning-Papers-Reading-Roadmap has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
• Deep-Learning-Papers-Reading-Roadmap code analysis shows 0 unresolved vulnerabilities.
• There are 0 security hotspots that need review.

License

• Deep-Learning-Papers-Reading-Roadmap does not have a standard license declared.
• Check the repository for any license declaration and review the terms closely.
• Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

• Deep-Learning-Papers-Reading-Roadmap releases are not available. You will need to build from source code and install.
• Build file is available. You can build the component from source.
Top functions reviewed by kandi - BETA
kandi has reviewed Deep-Learning-Papers-Reading-Roadmap and discovered the below as its top functions. This is intended to give you an instant insight into the functionality Deep-Learning-Papers-Reading-Roadmap implements, and to help you decide if it suits your requirements; a rough illustrative sketch of such a downloader appears after the list below.
• Download PDF.
• Clean PDF link.
• Shorten a title.
• Determine file extension.
• Clean text.
• Print title.
Get all kandi verified functions for this library.
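As a rough illustration of those functions (a hypothetical sketch under assumed names, not the repository's actual code), a download helper might look like this:

import os
import re
import requests  # assumed dependency, for illustration only


def shorten_title(title, max_len=50):
    """Trim a paper title so it can be used as a file name."""
    clean = re.sub(r'[^A-Za-z0-9 ]+', '', title).strip()
    return clean[:max_len]


def download_pdf(url, title, out_dir='pdfs'):
    """Fetch a PDF link and save it under a shortened title."""
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, shorten_title(title) + '.pdf')
    response = requests.get(url, timeout=30)
    response.raise_for_status()
    with open(path, 'wb') as f:
        f.write(response.content)
    return path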

                                                                                              Deep-Learning-Papers-Reading-Roadmap Key Features

From outline to detail
From old to state-of-the-art
From generic to specific areas
Focus on state-of-the-art

                                                                                              Deep-Learning-Papers-Reading-Roadmap Examples and Code Snippets

covfefe: A Deep Learning Wrapper for Deep Learning Frameworks - Overview
Python | Lines of Code: 47 | License: Permissive (MIT)
                                                                                              
# Simple model that demonstrates the simplified API (very similar interface to keras)
# but supports more frameworks as a backend and is very transparent.
# No allocations of additional and unnecessary memory,
# no unnecessarily complicated pre- and post-processing such as gradient clipping
# or unintentional internal learning rate decay.
# More importantly, it exposes the framework details (like lasagne) by allowing
# training, validation, update and testing functions
# as parameters to the main training and predict loops.
n_f = 32
ch = 1
row = col = 28
n_conv = 3
n_dense = 128
n_classes = 10

input = Input(input_shape=(ch, row, col), data_source='mnist.lmdb', batch_size=64)

# Note: there's no need to specify 1D, 2D, etc. in the layers as that'd be inferred
# from the input data shape that is specified in the input layer above
conv_1 = Convolution(ch, n_conv, n_conv, border_mode='same', activation='relu')(input)
conv_2 = Convolution(n_f, n_conv, n_conv, border_mode='same', activation='relu', subsample=(2, 2))(conv_1)
conv_3 = Convolution(n_f*2, n_conv, n_conv, border_mode='same', activation='relu', subsample=(2, 2))(conv_2)
conv_4 = Convolution(n_f*4, n_conv, n_conv, border_mode='same', activation='relu', subsample=(2, 2))(conv_3)
flat = Flatten()(conv_4)
d_1 = Dense(n_dense, activation='relu')(flat)
d_2 = Dense(n_dense/2, activation='relu')(d_1)
o_1 = Dense(n_classes, activation='softmax')(d_2)

model = Model(inputs=[input], outputs=[o_1])
model.compile(losses=['categorical_crossentropy'], optimizers=['SGD'], loss_weights=[1.0])
model.fit(X, Y, train_func='', val_func='')  # here the internal framework could be exposed
                                                                                              Deep-learning experiments
Java | Lines of Code: 46 | License: No License
                                                                                              
$ cd code/argumentation-convincingness-experiments-python
$ virtualenv env
New python executable in env/bin/python
Installing setuptools, pip...done.
$ source env/bin/activate
(env)user@x:~/acl2016-convincing-arguments/code/argumentation-convincingness-experiments-python$

$ python env/bin/pip install -r requirements.txt
Downloading/unpacking git+git://github.com/Theano/Theano.git@4e7f550 (from -r requirements.txt (line 4))
...
(lots of Fortran and C warnings because of SciPy and NumPy)
...

$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32,optimizer_including=cudnn \
    python bidirectional_lstm.py ../../data/UKPConvArg1Strict-CSV/

Using Theano backend.
Using gpu device 0: GRID K520 (CNMeM is disabled, CuDNN 4007)
/home/ubuntu/devel/acl2016-submission/env/local/lib/python2.7/site-packages/theano/tensor/signal/downsample.py:5: UserWarning: downsample module has been moved to the pool module.
  warnings.warn("downsample module has been moved to the pool module.")
Loading data...
Loaded 32 files
Fold name which-type-of-endeavor-is-better-a-personal-pursuit-or-advancing-the-common-good-_personal-pursuit.csv
11296 train sequences
354 test sequences
Pad sequences (samples x time)
X_train shape: (11296, 300)
X_test shape: (354, 300)
Build model...
Train...
Epoch 1/5
11296/11296 [==============================] - 142s - loss: 0.5745
Epoch 2/5
11296/11296 [==============================] - 142s - loss: 0.3129
Epoch 3/5
11296/11296 [==============================] - 142s - loss: 0.2240
Epoch 4/5
11296/11296 [==============================] - 142s - loss: 0.1708
Epoch 5/5
11296/11296 [==============================] - 142s - loss: 0.1386
Prediction
Test accuracy: 0.683615819209
Wrong predictions: ['arg33053_arg33125', 'arg33070_arg33121', 'arg33101_arg33115', ...
Fold name gay-marriage-right-or-wrong_allowing-gay-marriage-is-right.csv
11246 train sequences
404 test sequences
...

$ THEANO_FLAGS=mode=FAST_RUN,device=gpu,floatX=float32,optimizer_including=cudnn \
    python bidirectional_lstm_regression.py ../../data/UKPConvArg1-Ranking-CSV/
deep-learning - Usage
Python | Lines of Code: 18 | License: Permissive (MIT)
                                                                                              
# Start a TensorFlow session
sess = tf.Session()

# Initialize an unconfigured autoencoder with specified dimensions, etc.
sda = SDAutoencoder(dims=[784, 256, 64, 32],
                    activations=["sigmoid", "tanh", "sigmoid"],
                    sess=sess,
                    noise=0.1,
                    loss="rmse")

# Pretrain weights and biases of each layer in the network.
sda.pretrain_network(X_TRAIN_PATH)

# Read in test y-values to softmax classifier.
sda.finetune_parameters(X_TRAIN_PATH, Y_TRAIN_PATH, output_dim=10)

# Write to file the newly represented features.
sda.write_encoded_input("../data/transformed.csv", X_TEST_PATH)
                                                                                              Community Discussions

                                                                                              Trending Discussions on Machine Learning

• Using RNN Trained Model without pytorch installed
• Flux.jl : Customizing optimizer
• How can I check a confusion_matrix after fine-tuning with custom datasets?
• CUDA OOM - But the numbers don't add upp?
• How to compare baseline and GridSearchCV results fair?
• Getting Error 524 while running jupyter lab in google cloud platform
• TypeError: brain.NeuralNetwork is not a constructor
• Ordinal Encoding or One-Hot-Encoding
• How to increase dimension-vector size of BERT sentence-transformers embedding
• How to identify what features affect predictions result?

                                                                                              QUESTION

                                                                                              Using RNN Trained Model without pytorch installed
                                                                                              Asked 2022-Feb-28 at 20:17

                                                                                              I have trained an RNN model with pytorch. I need to use the model for prediction in an environment where I'm unable to install pytorch because of some strange dependency issue with glibc. However, I can install numpy and scipy and other libraries. So, I want to use the trained model, with the network definition, without pytorch.

I have the weights of the model, since I save its state dict and weights in the standard way, but I could also save them using just json/pickle files or similar.

                                                                                              I also have the network definition, which depends on pytorch in a number of ways. This is my RNN network definition.

                                                                                              import torch
                                                                                              import torch.nn as nn
                                                                                              import torch.nn.functional as F
                                                                                              import torch.optim as optim
                                                                                              import random
                                                                                              
                                                                                              torch.manual_seed(1)
                                                                                              random.seed(1)
                                                                                              device = torch.device('cpu')
                                                                                              
                                                                                              class RNN(nn.Module):
                                                                                                def __init__(self, input_size, hidden_size, output_size,num_layers, matching_in_out=False, batch_size=1):
                                                                                                  super(RNN, self).__init__()
                                                                                                  self.input_size = input_size
                                                                                                  self.hidden_size = hidden_size
                                                                                                  self.output_size = output_size
                                                                                                  self.num_layers = num_layers
                                                                                                  self.batch_size = batch_size
                                                                                                  self.matching_in_out = matching_in_out #length of input vector matches the length of output vector 
                                                                                                  self.lstm = nn.LSTM(input_size, hidden_size,num_layers)
                                                                                                  self.hidden2out = nn.Linear(hidden_size, output_size)
                                                                                                  self.hidden = self.init_hidden()
                                                                                                def forward(self, feature_list):
                                                                                                  feature_list=torch.tensor(feature_list)
                                                                                                  
                                                                                                  if self.matching_in_out:
                                                                                                    lstm_out, _ = self.lstm( feature_list.view(len( feature_list), 1, -1))
                                                                                                    output_space = self.hidden2out(lstm_out.view(len( feature_list), -1))
                                                                                                    output_scores = torch.sigmoid(output_space) #we'll need to check if we need this sigmoid
                                                                                                    return output_scores #output_scores
                                                                                                  else:
                                                                                                    for i in range(len(feature_list)):
                                                                                                      cur_ft_tensor=feature_list[i]#.view([1,1,self.input_size])
                                                                                                      cur_ft_tensor=cur_ft_tensor.view([1,1,self.input_size])
                                                                                                      lstm_out, self.hidden = self.lstm(cur_ft_tensor, self.hidden)
                                                                                                      outs=self.hidden2out(lstm_out)
                                                                                                    return outs
                                                                                                def init_hidden(self):
                                                                                                  #return torch.rand(self.num_layers, self.batch_size, self.hidden_size)
                                                                                                  return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device),
                                                                                                          torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device))
                                                                                              

I am aware of this question, but I'm willing to go as low-level as possible. I can work with numpy arrays instead of tensors, with reshape instead of view, and I don't need a device setting.

                                                                                              Based on the class definition above, what I can see here is that I only need the following components from torch to get an output from the forward function:

                                                                                              • nn.LSTM
                                                                                              • nn.Linear
                                                                                              • torch.sigmoid

I think I can easily implement the sigmoid function using numpy. However, can I have some implementation of nn.LSTM and nn.Linear that doesn't involve pytorch? Also, how would I load the weights from the state dict into the new class?

So, the question is: how can I "translate" this RNN definition into a class that doesn't need pytorch, and how do I use the state dict weights for it? Alternatively, is there a "light" version of pytorch that I can use just to run the model and yield a result?

                                                                                              EDIT

I think it might be useful to include the numpy/scipy equivalent for both nn.LSTM and nn.Linear. It would help us compare the numpy output to the torch output for the same code, and give us some modular code/functions to use. Specifically, a numpy equivalent for the following would be great:

                                                                                              rnn = nn.LSTM(10, 20, 2)
                                                                                              input = torch.randn(5, 3, 10)
                                                                                              h0 = torch.randn(2, 3, 20)
                                                                                              c0 = torch.randn(2, 3, 20)
                                                                                              output, (hn, cn) = rnn(input, (h0, c0))
                                                                                              

                                                                                              and also for linear:

                                                                                              m = nn.Linear(20, 30)
                                                                                              input = torch.randn(128, 20)
                                                                                              output = m(input)
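
For reference, here is a minimal numpy sketch (an illustrative sketch, not an authoritative implementation) of what these two modules compute, for a single LSTM layer step and a linear layer, assuming the weight arrays are pulled out of the saved state dict (e.g. 'lstm.weight_ih_l0', 'hidden2out.weight'); PyTorch stacks the LSTM gates in i, f, g, o order:

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell_step(x, h, c, w_ih, w_hh, b_ih, b_hh):
    # One time step of a single LSTM layer.
    # w_ih: (4*hidden, input), w_hh: (4*hidden, hidden), biases: (4*hidden,)
    gates = x @ w_ih.T + b_ih + h @ w_hh.T + b_hh
    i, f, g, o = np.split(gates, 4, axis=-1)
    c_new = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h_new = sigmoid(o) * np.tanh(c_new)
    return h_new, c_new

def linear(x, weight, bias):
    # numpy equivalent of nn.Linear: y = x @ W^T + b
    return x @ weight.T + bias

Looping this cell over the time steps, and feeding each layer's hidden states into the next layer's inputs, would reproduce a stacked nn.LSTM forward pass.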
                                                                                              

                                                                                              ANSWER

                                                                                              Answered 2022-Feb-17 at 10:47

You should try to export the model using torch.onnx. The torch.onnx documentation page gives you an example that you can start with.

                                                                                              An alternative is to use TorchScript, but that requires torch libraries.

Both of these can be run without Python. You can load a TorchScript model in a C++ application: https://pytorch.org/tutorials/advanced/cpp_export.html

ONNX is much more portable, and you can use it from languages such as C#, Java, or JavaScript with https://onnxruntime.ai/ (even in the browser).

                                                                                              A running example

Here I modify your example a little to get past the errors I found.

Notice that, because export works via tracing, any if/elif/else, for, or while will be unrolled.

                                                                                              import torch
                                                                                              import torch.nn as nn
                                                                                              import torch.nn.functional as F
                                                                                              import torch.optim as optim
                                                                                              import random
                                                                                              
                                                                                              torch.manual_seed(1)
                                                                                              random.seed(1)
                                                                                              device = torch.device('cpu')
                                                                                              
                                                                                              class RNN(nn.Module):
                                                                                                def __init__(self, input_size, hidden_size, output_size,num_layers, matching_in_out=False, batch_size=1):
                                                                                                  super(RNN, self).__init__()
                                                                                                  self.input_size = input_size
                                                                                                  self.hidden_size = hidden_size
                                                                                                  self.output_size = output_size
                                                                                                  self.num_layers = num_layers
                                                                                                  self.batch_size = batch_size
                                                                                                  self.matching_in_out = matching_in_out #length of input vector matches the length of output vector 
                                                                                                  self.lstm = nn.LSTM(input_size, hidden_size,num_layers)
                                                                                                  self.hidden2out = nn.Linear(hidden_size, output_size)
                                                                                                def forward(self, x, h0, c0):
                                                                                                  lstm_out, (hidden_a, hidden_b) = self.lstm(x, (h0, c0))
                                                                                                  outs=self.hidden2out(lstm_out)
                                                                                                  return outs, (hidden_a, hidden_b)
                                                                                                def init_hidden(self):
                                                                                                  #return torch.rand(self.num_layers, self.batch_size, self.hidden_size)
                                                                                                  return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach(),
                                                                                                          torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach())
                                                                                              
                                                                                              # convert the arguments passed during onnx.export call
                                                                                              class MWrapper(nn.Module):
                                                                                                  def __init__(self, model):
                                                                                                      super(MWrapper, self).__init__()
                                                                                                      self.model = model;
                                                                                                  def forward(self, kwargs):
                                                                                                      return self.model(**kwargs)
                                                                                              

                                                                                              Run an example

                                                                                              rnn = RNN(10, 10, 10, 3)
                                                                                              X = torch.randn(3,1,10)
                                                                                              h0,c0  = rnn.init_hidden()
                                                                                              print(rnn(X, h0, c0)[0])
                                                                                              

                                                                                              Use the same input to trace the model and export an onnx file

                                                                                              
                                                                                              torch.onnx.export(MWrapper(rnn), {'x':X,'h0':h0,'c0':c0}, 'rnn.onnx', 
                                                                                                                dynamic_axes={'x':{1:'N'},
                                                                                                                             'c0':{1: 'N'},
                                                                                                                             'h0':{1: 'N'}
                                                                                                                             },
                                                                                                                input_names=['x', 'h0', 'c0'],
                                                                                                                output_names=['y', 'hn', 'cn']
                                                                                                               )
                                                                                              

                                                                                              Notice that you can use symbolic values for the dimensions of some axes of some inputs. Unspecified dimensions will be fixed with the values from the traced inputs. By default LSTM uses dimension 1 as batch.

                                                                                              Next we load the ONNX model and pass the same inputs

                                                                                              import onnxruntime
                                                                                              ort_model = onnxruntime.InferenceSession('rnn.onnx')
                                                                                              print(ort_model.run(['y'], {'x':X.numpy(), 'c0':c0.numpy(), 'h0':h0.numpy()}))
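
As a sanity check (an optional addition, not part of the original answer), one could compare the ONNX Runtime output against the traced torch model's output:

import numpy as np

# the two outputs should agree up to small numerical differences
torch_out = rnn(X, h0, c0)[0].detach().numpy()
onnx_out = ort_model.run(['y'], {'x': X.numpy(), 'c0': c0.numpy(), 'h0': h0.numpy()})[0]
print(np.allclose(torch_out, onnx_out, atol=1e-5))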
                                                                                              

                                                                                              Source https://stackoverflow.com/questions/71146140

                                                                                              QUESTION

                                                                                              Flux.jl : Customizing optimizer
                                                                                              Asked 2022-Jan-25 at 07:58

I'm trying to implement a gradient-free optimizer function to train convolutional neural networks with Julia using Flux.jl. The reference paper is this: https://arxiv.org/abs/2005.05955. It proposes RSO, a gradient-free optimization algorithm that updates a single weight at a time on a sampling basis. The pseudocode of this algorithm is depicted in the picture below.

                                                                                              optimizer_pseudocode

I'm using the MNIST dataset.

                                                                                              function train(; kws...)
args = Args(; kws...) # collect options in a struct for convenience
                                                                                              
                                                                                              if CUDA.functional() && args.use_cuda
                                                                                                  @info "Training on CUDA GPU"
    CUDA.allowscalar(false)
                                                                                                  device = gpu
                                                                                              else
                                                                                                  @info "Training on CPU"
                                                                                                  device = cpu
                                                                                              end
                                                                                              
                                                                                              # Prepare datasets
                                                                                              x_train, x_test, y_train, y_test = getdata(args, device)
                                                                                              
                                                                                              # Create DataLoaders (mini-batch iterators)
                                                                                              train_loader = DataLoader((x_train, y_train), batchsize=args.batchsize, shuffle=true)
                                                                                              test_loader = DataLoader((x_test, y_test), batchsize=args.batchsize)
                                                                                              
                                                                                              # Construct model
                                                                                              model = build_model() |> device
                                                                                              ps = Flux.params(model) # model's trainable parameters
                                                                                              
                                                                                              best_param = ps
                                                                                              if args.optimiser == "SGD"
                                                                                                  # Regular training step with SGD
                                                                                              
                                                                                              elseif args.optimiser == "RSO"
                                                                                                  # Run RSO function and update ps
                                                                                                  best_param .= RSO(x_train, y_train, args.RSOupdate, model, args.batchsize, device)
                                                                                              end
                                                                                              

                                                                                              And the corresponding RSO function:

                                                                                              function RSO(X,L,C,model, batch_size, device)
                                                                                              """
                                                                                              model = convolutional model structure
                                                                                              X = Input data
                                                                                              L = labels
                                                                                              C = Number of rounds to update parameters
                                                                                              W = Weight set of layers
                                                                                              Wd = Weight tensors of layer d that generates an activation
                                                                                              wid = weight tensor that generates an activation aᵢ
                                                                                              wj = a weight in wid
                                                                                              """
                                                                                              
                                                                                              # Normalize input data to have zero mean and unit standard deviation
X .= (X .- mean(X)) ./ std(X)
                                                                                              train_loader = DataLoader((X, L), batchsize=batch_size, shuffle=true)
                                                                                              
                                                                                              #println("model = $(typeof(model))")
                                                                                              
                                                                                              std_prep = []
                                                                                              σ_d = Float64[]
                                                                                              D = 1
                                                                                              for layer in model
                                                                                                  D += 1
                                                                                                  Wd = Flux.params(layer)
                                                                                                  # Initialize the weights of the network with Gaussian distribution
                                                                                                  for id in Wd
                                                                                                      wj = convert(Array{Float32, 4}, rand(Normal(0, sqrt(2/length(id))), (3,3,4,4)))
                                                                                                      id = wj
                                                                                                      append!(std_prep, vec(wj))
                                                                                                  end
                                                                                                  # Compute std of all elements in the weight tensor Wd
                                                                                                  push!(σ_d, std(std_prep))
                                                                                              end
                                                                                              
                                                                                              W = Flux.params(model)
                                                                                              
                                                                                              # Weight update
                                                                                              for _ in 1:C
                                                                                                  d = D
                                                                                                  while d > 0
                                                                                                      for id in 1:length(W[d])
                                                                                                          # Randomly sample change in weights from Gaussian distribution
                                                                                                          for j in 1:length(w[d][id])
                                                                                                              # Randomly sample mini-batch
                                                                                                              (x, l) = train_loader[rand(1:length(train_loader))]
                                                                                                              
                                                                                                              # Sample a weight from normal distribution
                                                                                                              ΔWj[d][id][j] = rand(Normal(0, σ_d[d]), 1)
                                                                                              
                                                                                                              loss, acc = loss_and_accuracy(data_loader, model, device)
                                                                                                              W = argmin(F(x,l, W+ΔWj), F(x,l,W), F(x,l, W-ΔWj))
                                                                                                          end
                                                                                                      end
                                                                                                      d -= 1
                                                                                                  end
                                                                                              end
                                                                                              
                                                                                              return W
                                                                                              end
                                                                                              

The problem here is the second block of the RSO function. I'm trying to evaluate the loss with a change to a single weight in three scenarios, F(w, l, W+gW), F(w, l, W), and F(w, l, W-gW), and choose the weight set with the minimum loss. But how do I do that using Flux.jl? The loss function I'm trying to use is logitcrossentropy(ŷ, y, agg=sum). In order to generate ŷ, we should use model(W), but changing a single weight parameter in Zygote.Params() form is already challenging.

                                                                                              ANSWER

                                                                                              Answered 2022-Jan-14 at 23:47

Based on the paper you shared, it looks like you need to change the weight arrays for each output neuron of each layer. Unfortunately, this means that the implementation of your optimization routine is going to depend on the layer type, since an "output neuron" for a convolution layer is quite different from one for a fully-connected layer. In other words, just looping over Flux.params(model) is not going to be sufficient, since that is just a set of all the weight arrays in the model, and each weight array is treated differently depending on which layer it comes from.

                                                                                              Fortunately, Julia's multiple dispatch does make this easier to write if you use separate functions instead of a giant loop. I'll summarize the algorithm using the pseudo-code below:

                                                                                              for layer in model
                                                                                                for output_neuron in layer
                                                                                                  for weight_element in parameters(output_neuron)
                                                                                                    weight_element = sample(N(0, sqrt(2 / num_outputs(layer))))
                                                                                                  end
                                                                                                end
                                                                                                sigmas[layer] = stddev(parameters(layer))
                                                                                              end
                                                                                              
                                                                                              for c in 1 to C
                                                                                                for layer in reverse(model)
                                                                                                  for output_neuron in layer
                                                                                                    for weight_element in parameters(output_neuron)
                                                                                                      x, y = sample(batches)
                                                                                                      dw = N(0, sigmas[layer])
                                                                                                      # optimize weights
                                                                                                    end
                                                                                                  end
                                                                                                end
                                                                                              end
                                                                                              

                                                                                              It's the for output_neuron ... portions that we need to isolate into separate functions.

In the first block, we don't actually do anything different for each weight_element; they are all sampled from the same normal distribution. So we don't actually need to iterate over the output neurons, but we do need to know how many there are.

                                                                                              using Statistics: std
                                                                                              
                                                                                              # this function will set the weights according to the
                                                                                              # normal distribution and the number of output neurons
                                                                                              # it also returns the standard deviation of the weights
function sample_weight!(layer::Dense)
  sample = randn(eltype(layer.weight), size(layer.weight))
  num_outputs = size(layer.weight, 1)
  # notice the "." notation which is used to mutate the array
  # scale to a standard deviation of sqrt(2 / num_outputs), as in the pseudo-code above
  layer.weight .= sample .* sqrt(2 / num_outputs)

  return std(layer.weight)
end

function sample_weight!(layer::Conv)
  sample = randn(eltype(layer.weight), size(layer.weight))
  num_outputs = size(layer.weight, 4)
  # notice the "." notation which is used to mutate the array
  # scale to a standard deviation of sqrt(2 / num_outputs), as in the pseudo-code above
  layer.weight .= sample .* sqrt(2 / num_outputs)

  return std(layer.weight)
end
                                                                                              
sigmas = map(sample_weight!, model)
                                                                                              

Now, for the second block, we will do a similar trick by defining a different method for each layer type.

function optimize_layer!(loss, layer::Dense, data, sigma)
  for i in 1:size(layer.weight, 1)
    for j in 1:size(layer.weight, 2)
      wj = layer.weight[i, j]
      x, y = data[rand(1:length(data))]
      dw = randn() * sigma
      ws = [wj + dw, wj, wj - dw]
      # evaluate the loss for each candidate weight
      losses = zeros(Float32, length(ws))
      for (k, w) in enumerate(ws)
        layer.weight[i, j] = w
        losses[k] = loss(x, y)
      end
      # keep the candidate with the smallest loss
      layer.weight[i, j] = ws[argmin(losses)]
    end
  end
end
                                                                                              
function optimize_layer!(loss, layer::Conv, data, sigma)
  for i in 1:size(layer.weight, 4)
    # we use a view to reference the full kernel
    # for this output channel
    wid = view(layer.weight, :, :, :, i)

    # eachindex lets us treat wid like a vector
    for j in eachindex(wid)
      wj = wid[j]
      x, y = data[rand(1:length(data))]
      dw = randn() * sigma
      ws = [wj + dw, wj, wj - dw]
      # evaluate the loss for each candidate weight
      losses = zeros(Float32, length(ws))
      for (k, w) in enumerate(ws)
        wid[j] = w
        losses[k] = loss(x, y)
      end
      # keep the candidate with the smallest loss
      wid[j] = ws[argmin(losses)]
    end
  end
end
                                                                                              
for c in 1:C
  # iterate the layers from output to input, as in the pseudocode above
  for (layer, sigma) in reverse(collect(zip(model, sigmas)))
    optimize_layer!(layer, data, sigma) do x, y
      logitcrossentropy(model(x), y; agg = sum)
    end
  end
end
                                                                                              

Notice that nowhere did I use Flux.params, which does not help us here. Also, Flux.params would include both the weights and the biases, and the paper doesn't appear to bother with the biases at all. If you had an optimization method that treated every parameter generically regardless of layer type (i.e. like gradient descent), then you could use for p in Flux.params(model) ....

                                                                                              Source https://stackoverflow.com/questions/70641453

                                                                                              QUESTION

                                                                                              How can I check a confusion_matrix after fine-tuning with custom datasets?
                                                                                              Asked 2021-Nov-24 at 13:26

This question is the same as How can I check a confusion_matrix after fine-tuning with custom datasets? on Data Science Stack Exchange.

                                                                                              Background

                                                                                              I would like to check a confusion_matrix, including precision, recall, and f1-score like below after fine-tuning with custom datasets.

The fine-tuning process and the task are Sequence Classification with IMDb Reviews, following the Fine-tuning with custom datasets tutorial on Hugging Face.

After finishing fine-tuning with Trainer, how can I check a confusion_matrix in this case?

An example classification report, including precision, recall, and f1-score (image from the original site, shown just as example output):

                                                                                              predictions = np.argmax(trainer.test(test_x), axis=1)
                                                                                              
                                                                                              # Confusion matrix and classification report.
                                                                                              print(classification_report(test_y, predictions))
                                                                                              
                                                                                                          precision    recall  f1-score   support
                                                                                              
                                                                                                        0       0.75      0.79      0.77      1000
                                                                                                        1       0.81      0.87      0.84      1000
                                                                                                        2       0.63      0.61      0.62      1000
                                                                                                        3       0.55      0.47      0.50      1000
                                                                                                        4       0.66      0.66      0.66      1000
                                                                                                        5       0.62      0.64      0.63      1000
                                                                                                        6       0.74      0.83      0.78      1000
                                                                                                        7       0.80      0.74      0.77      1000
                                                                                                        8       0.85      0.81      0.83      1000
                                                                                                        9       0.79      0.80      0.80      1000
                                                                                              
                                                                                              avg / total       0.72      0.72      0.72     10000
                                                                                              
                                                                                              Code
                                                                                              from transformers import DistilBertForSequenceClassification, Trainer, TrainingArguments
                                                                                              
                                                                                              training_args = TrainingArguments(
                                                                                                  output_dir='./results',          # output directory
                                                                                                  num_train_epochs=3,              # total number of training epochs
                                                                                                  per_device_train_batch_size=16,  # batch size per device during training
                                                                                                  per_device_eval_batch_size=64,   # batch size for evaluation
                                                                                                  warmup_steps=500,                # number of warmup steps for learning rate scheduler
                                                                                                  weight_decay=0.01,               # strength of weight decay
                                                                                                  logging_dir='./logs',            # directory for storing logs
                                                                                                  logging_steps=10,
                                                                                              )
                                                                                              
                                                                                              model = DistilBertForSequenceClassification.from_pretrained("distilbert-base-uncased")
                                                                                              
                                                                                              trainer = Trainer(
                                                                                                  model=model,                         # the instantiated 🤗 Transformers model to be trained
                                                                                                  args=training_args,                  # training arguments, defined above
                                                                                                  train_dataset=train_dataset,         # training dataset
                                                                                                  eval_dataset=val_dataset             # evaluation dataset
                                                                                              )
                                                                                              
                                                                                              trainer.train()
                                                                                              
                                                                                              What I did so far

I prepared the dataset for Sequence Classification with IMDb Reviews, and I'm fine-tuning with Trainer.

                                                                                              from pathlib import Path
                                                                                              
                                                                                              def read_imdb_split(split_dir):
                                                                                                  split_dir = Path(split_dir)
                                                                                                  texts = []
                                                                                                  labels = []
                                                                                                  for label_dir in ["pos", "neg"]:
                                                                                                      for text_file in (split_dir/label_dir).iterdir():
                                                                                                          texts.append(text_file.read_text())
            labels.append(0 if label_dir == "neg" else 1)
                                                                                              
                                                                                                  return texts, labels
                                                                                              
                                                                                              train_texts, train_labels = read_imdb_split('aclImdb/train')
                                                                                              test_texts, test_labels = read_imdb_split('aclImdb/test')
                                                                                              
                                                                                              from sklearn.model_selection import train_test_split
                                                                                              train_texts, val_texts, train_labels, val_labels = train_test_split(train_texts, train_labels, test_size=.2)
                                                                                              
                                                                                              from transformers import DistilBertTokenizerFast
                                                                                              tokenizer = DistilBertTokenizerFast.from_pretrained('distilbert-base-uncased')
                                                                                              
                                                                                              train_encodings = tokenizer(train_texts, truncation=True, padding=True)
                                                                                              val_encodings = tokenizer(val_texts, truncation=True, padding=True)
                                                                                              test_encodings = tokenizer(test_texts, truncation=True, padding=True)
                                                                                              
                                                                                              import torch
                                                                                              
                                                                                              class IMDbDataset(torch.utils.data.Dataset):
                                                                                                  def __init__(self, encodings, labels):
                                                                                                      self.encodings = encodings
                                                                                                      self.labels = labels
                                                                                              
                                                                                                  def __getitem__(self, idx):
                                                                                                      item = {key: torch.tensor(val[idx]) for key, val in self.encodings.items()}
                                                                                                      item['labels'] = torch.tensor(self.labels[idx])
                                                                                                      return item
                                                                                              
                                                                                                  def __len__(self):
                                                                                                      return len(self.labels)
                                                                                              
                                                                                              train_dataset = IMDbDataset(train_encodings, train_labels)
                                                                                              val_dataset = IMDbDataset(val_encodings, val_labels)
                                                                                              test_dataset = IMDbDataset(test_encodings, test_labels)
                                                                                              

                                                                                              ANSWER

                                                                                              Answered 2021-Nov-24 at 13:26

What you could do in this situation is iterate over the validation set (or the test set, for that matter) and manually create lists of y_true and y_pred.

import torch
import torch.nn.functional as F
from sklearn import metrics

model.eval()
y_preds = []
y_trues = []
with torch.no_grad():
    for index, val_text in enumerate(val_texts):
        tokenized_val_text = tokenizer([val_text],
                                       truncation=True,
                                       padding=True,
                                       return_tensors='pt').to(model.device)
        # the model returns an output object; the raw scores are in .logits
        logits = model(**tokenized_val_text).logits
        prediction = F.softmax(logits, dim=1)
        y_pred = torch.argmax(prediction, dim=1).item()
        y_true = val_labels[index]
        y_preds.append(y_pred)
        y_trues.append(y_true)
                                                                                              

                                                                                              Finally,

confusion_matrix = metrics.confusion_matrix(y_trues, y_preds, labels=[0, 1])  # 0 = neg, 1 = pos
print(confusion_matrix)
                                                                                              

                                                                                              Observations:

1. The output of the model is the logits, not normalized probabilities.
2. As such, we apply softmax on dimension one to transform them into actual probabilities (e.g. 0.2 for class 0, 0.8 for class 1).
3. We apply .argmax() to get the index of the predicted class.
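
Alternatively, since the datasets are already wrapped for the Trainer, Trainer.predict can produce all the logits and label ids in one batched pass. A minimal sketch, assuming the trainer and test_dataset built in the question:

import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# trainer and test_dataset are the objects constructed in the question
output = trainer.predict(test_dataset)           # returns predictions (logits), label_ids, metrics
y_pred = np.argmax(output.predictions, axis=1)   # logits -> predicted class indices
y_true = output.label_ids

print(confusion_matrix(y_true, y_pred, labels=[0, 1]))
print(classification_report(y_true, y_pred, target_names=["neg", "pos"]))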

                                                                                              Source https://stackoverflow.com/questions/68691450

                                                                                              QUESTION

                                                                                              CUDA OOM - But the numbers don't add upp?
                                                                                              Asked 2021-Nov-23 at 06:13

                                                                                              I am trying to train a model using PyTorch. When beginning model training I get the following error message:

                                                                                              RuntimeError: CUDA out of memory. Tried to allocate 5.37 GiB (GPU 0; 7.79 GiB total capacity; 742.54 MiB already allocated; 5.13 GiB free; 792.00 MiB reserved in total by PyTorch)

I am wondering why this error is occurring. The way I see it, I have 7.79 GiB of total capacity. The numbers it states (742 MiB + 5.13 GiB + 792 MiB) do not add up to more than 7.79 GiB. When I check nvidia-smi I see these processes running:

                                                                                              |    0   N/A  N/A      1047      G   /usr/lib/xorg/Xorg                168MiB |
                                                                                              |    0   N/A  N/A      5521      G   /usr/lib/xorg/Xorg                363MiB |
                                                                                              |    0   N/A  N/A      5637      G   /usr/bin/gnome-shell              161MiB |
                                                                                              

                                                                                              I realize that summing all of these numbers might cut it close (168 + 363 + 161 + 742 + 792 + 5130 = 7356 MiB) but this is still less than the stated capacity of my GPU.

                                                                                              ANSWER

                                                                                              Answered 2021-Nov-23 at 06:13

                                                                                              This is more of a comment, but worth pointing out.

                                                                                              The reason in general is indeed what talonmies commented, but you are summing up the numbers incorrectly. Let's see what happens when tensors are moved to GPU (I tried this on my PC with RTX2060 with 5.8G usable GPU memory in total):

                                                                                              Let's run the following python commands interactively:

                                                                                              Python 3.8.10 (default, Sep 28 2021, 16:10:42) 
                                                                                              [GCC 9.3.0] on linux
                                                                                              Type "help", "copyright", "credits" or "license" for more information.
                                                                                              >>> import torch
                                                                                              >>> a = torch.zeros(1).cuda()
                                                                                              >>> b = torch.zeros(500000000).cuda()
                                                                                              >>> c = torch.zeros(500000000).cuda()
                                                                                              >>> d = torch.zeros(500000000).cuda()
                                                                                              

                                                                                              The following are the outputs of watch -n.1 nvidia-smi:

                                                                                              Right after torch import:

                                                                                              |    0   N/A  N/A      1121      G   /usr/lib/xorg/Xorg                  4MiB |
                                                                                              

                                                                                              Right after the creation of a:

                                                                                              |    0   N/A  N/A      1121      G   /usr/lib/xorg/Xorg                  4MiB |
                                                                                              |    0   N/A  N/A     14701      C   python                           1251MiB |
                                                                                              

As you can see, you need 1251 MiB just to get PyTorch to start using CUDA, even if you only need a single float.

                                                                                              Right after the creation of b:

                                                                                              |    0   N/A  N/A      1121      G   /usr/lib/xorg/Xorg                  4MiB |
                                                                                              |    0   N/A  N/A     14701      C   python                           3159MiB |
                                                                                              

b needs 500000000 * 4 bytes = 1907 MiB, which is the same as the increment in memory used by the python process.
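
For reference, the arithmetic behind that figure:

n_elements = 500_000_000
bytes_total = n_elements * 4      # torch.zeros defaults to float32: 4 bytes per element
print(bytes_total / 2**20)        # ≈ 1907.35 MiB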

                                                                                              Right after the creation of c:

                                                                                              |    0   N/A  N/A      1121      G   /usr/lib/xorg/Xorg                  4MiB |
                                                                                              |    0   N/A  N/A     14701      C   python                           5067MiB |
                                                                                              

                                                                                              No surprise here.

                                                                                              Right after the creation of d:

                                                                                              |    0   N/A  N/A      1121      G   /usr/lib/xorg/Xorg                  4MiB |
                                                                                              |    0   N/A  N/A     14701      C   python                           5067MiB |
                                                                                              

                                                                                              No further memory allocation, and the OOM error is thrown:

                                                                                              Traceback (most recent call last):
                                                                                                File "", line 1, in 
                                                                                              RuntimeError: CUDA out of memory. Tried to allocate 1.86 GiB (GPU 0; 5.80 GiB total capacity; 3.73 GiB already allocated; 858.81 MiB free; 3.73 GiB reserved in total by PyTorch)
                                                                                              

                                                                                              Obviously:

                                                                                              • The "already allocated" part is included in the "reserved in total by PyTorch" part. You can't sum them up, otherwise the sum exceeds the total available memory.
• The minimum memory required to get PyTorch running on GPU (1251 MiB) is not included in the "reserved in total" part.

                                                                                              So in your case, the sum should consist of:

                                                                                              • 792MB (reserved in total)
                                                                                              • 1251MB (minimum to get pytorch running on GPU, assuming this is the same for both of us)
                                                                                              • 5.13GB (free)
                                                                                              • 168+363+161=692MB (other processes)

They sum up to approximately 7988 MiB = 7.80 GiB, which is exactly your total GPU memory.
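
If you want to see these numbers from inside Python rather than from nvidia-smi, the caching allocator can be queried directly (a small sketch; the exact figures will differ per machine):

import torch

x = torch.zeros(500_000_000, device="cuda")  # ~1907 MiB of float32

# Bytes currently held by tensors vs. bytes reserved by the caching allocator.
# "reserved" already includes "allocated", which is why the two must not be summed.
print(f"allocated: {torch.cuda.memory_allocated() / 2**20:.0f} MiB")
print(f"reserved:  {torch.cuda.memory_reserved() / 2**20:.0f} MiB")

# A full breakdown, similar to the fields quoted in the OOM message:
print(torch.cuda.memory_summary())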

                                                                                              Source https://stackoverflow.com/questions/70074789

                                                                                              QUESTION

                                                                                              How to compare baseline and GridSearchCV results fair?
                                                                                              Asked 2021-Nov-04 at 21:17

I am a bit confused about comparing the best GridSearchCV model and a baseline.
For example, say we have a classification problem.
As a baseline, we'll fit a model with default settings (let it be logistic regression):

                                                                                              from sklearn.linear_model import LogisticRegression
                                                                                              from sklearn.metrics import accuracy_score
                                                                                              baseline = LogisticRegression()
                                                                                              baseline.fit(X_train, y_train)
                                                                                              pred = baseline.predict(X_train)
                                                                                              print(accuracy_score(y_train, pred))
                                                                                              

                                                                                              So, the baseline gives us accuracy using the whole train sample.
                                                                                              Next, GridSearchCV:

from sklearn.model_selection import cross_val_score, GridSearchCV, StratifiedKFold, train_test_split
                                                                                              X_val, X_test_val,y_val,y_test_val = train_test_split(X_train, y_train, test_size=0.3, random_state=42)
                                                                                              cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
                                                                                              parameters = [ ... ]
best_model = GridSearchCV(LogisticRegression(), parameters, scoring='accuracy', cv=cv)
                                                                                              best_model.fit(X_val, y_val)
                                                                                              print(best_model.best_score_)
                                                                                              

                                                                                              Here, we have accuracy based on validation sample.

                                                                                              My questions are:

                                                                                              1. Are those accuracy scores comparable? Generally, is it fair to compare GridSearchCV and model without any cross validation?
                                                                                              2. For the baseline, isn't it better to use Validation sample too (instead of the whole Train sample)?

                                                                                              ANSWER

                                                                                              Answered 2021-Nov-04 at 21:17

                                                                                              No, they aren't comparable.

Your baseline model used X_train to fit the model, and then you're using the fitted model to score the X_train sample. This is like cheating, because the model will already appear to perform well since you're evaluating it on data it has already seen.

                                                                                              The grid searched model is at a disadvantage because:

                                                                                              1. It's working with less data since you have split the X_train sample.
                                                                                              2. Compound that with the fact that it's getting trained with even less data due to the 5 folds (it's training with only 4/5 of X_val per fold).

                                                                                              So your score for the grid search is going to be worse than your baseline.

Now you might ask, "so what's the point of best_model.best_score_?" Well, that score is used to compare all the models tried while searching for the optimal hyperparameters in your search space, but it should in no way be used to compare against a model that was trained outside of the grid search context.

                                                                                              So how should one go about conducting a fair comparison?

                                                                                              1. Split your training data for both models.
                                                                                              X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
                                                                                              
2. Fit your models using X_train.
                                                                                              # fit baseline
                                                                                              baseline.fit(X_train, y_train)
                                                                                              
                                                                                              # fit using grid search
                                                                                              best_model.fit(X_train, y_train)
                                                                                              
3. Evaluate models against X_test.
                                                                                              # baseline
                                                                                              baseline_pred = baseline.predict(X_test)
                                                                                              print(accuracy_score(y_test,  baseline_pred))
                                                                                              
                                                                                              # grid search
                                                                                              grid_pred = best_model.predict(X_test)
                                                                                              print(accuracy_score(y_test, grid_pred))
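
Putting the pieces together, a minimal end-to-end sketch (the parameter grid values here are illustrative placeholders, not the ones from the question, and X, y are assumed to be the full feature matrix and labels):

from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import GridSearchCV, StratifiedKFold, train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

# baseline with default hyperparameters
baseline = LogisticRegression(max_iter=1000)
baseline.fit(X_train, y_train)

# grid search: the estimator and the parameter grid are separate arguments
cv = StratifiedKFold(n_splits=5, random_state=0, shuffle=True)
param_grid = {"C": [0.01, 0.1, 1, 10]}  # placeholder grid
best_model = GridSearchCV(LogisticRegression(max_iter=1000), param_grid,
                          scoring="accuracy", cv=cv)
best_model.fit(X_train, y_train)

# both models are scored on the same held-out test set
print("baseline:   ", accuracy_score(y_test, baseline.predict(X_test)))
print("grid search:", accuracy_score(y_test, best_model.predict(X_test)))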
                                                                                              

                                                                                              Source https://stackoverflow.com/questions/69844028

                                                                                              QUESTION

                                                                                              Getting Error 524 while running jupyter lab in google cloud platform
                                                                                              Asked 2021-Oct-15 at 02:14

                                                                                              I am not able to access jupyter lab created on google cloud

I created a notebook using Google AI Platform. I was able to start it and work, but suddenly it stopped and I am not able to start it now. I tried rebuilding and restarting JupyterLab, but to no avail. I have also checked my disk usage, which is only at 12%.

I tried the diagnostic tool, which produced a result (the screenshot is not reproduced here), but it didn't fix the issue.

                                                                                              Thanks in advance.

                                                                                              ANSWER

                                                                                              Answered 2021-Aug-20 at 14:00

                                                                                              QUESTION

                                                                                              TypeError: brain.NeuralNetwork is not a constructor
                                                                                              Asked 2021-Sep-29 at 22:47

                                                                                              I am new to Machine Learning.

Having followed the steps in this simple Machine Learning tutorial using the Brain.js library, it beats my understanding why I keep getting the error message below:

                                                                                              I have double-checked my code multiple times. This is particularly frustrating as this is the very first exercise!

                                                                                              Kindly point out what I am missing here!

                                                                                              Find below my code:

                                                                                              const brain = require('brain.js');
                                                                                              
                                                                                              var net = new brain.NeuralNetwork();
                                                                                              
                                                                                              net.train([
                                                                                                { input: [0, 0], output: [0] },
                                                                                                { input: [0, 1], output: [1] },
                                                                                                { input: [1, 0], output: [1] },
                                                                                                { input: [1, 1], output: [0] },
                                                                                              ]);
                                                                                              
                                                                                              var output = net.run([1, 0]); // [0.987]
                                                                                              
                                                                                              console.log(output);
                                                                                              

                                                                                              I am running Nodejs version v14.17.4

                                                                                              ANSWER

                                                                                              Answered 2021-Sep-29 at 22:47

Turns out it's just documented incorrectly.

                                                                                              In reality the export from brain.js is this:

                                                                                              {
                                                                                                brain: { ...brain class },
                                                                                                default: { ...brain class again }
                                                                                              }
                                                                                              

                                                                                              So in order to get it working properly, you should do

                                                                                              const brain = require('brain.js').brain // access to nested object
                                                                                              const net = new brain.NeuralNetwork()
                                                                                              

                                                                                              Source https://stackoverflow.com/questions/69348213

                                                                                              QUESTION

                                                                                              Ordinal Encoding or One-Hot-Encoding
                                                                                              Asked 2021-Sep-04 at 06:43

If we are not sure about the nature of categorical features, i.e. whether they are nominal or ordinal, which encoding should we use: Ordinal-Encoding or One-Hot-Encoding? Is there a clearly defined rule on this topic?

                                                                                              I see a lot of people using Ordinal-Encoding on Categorical Data that doesn't have a Direction. Suppose a frequency table:

                                                                                              some_data[some_col].value_counts()
                                                                                              [OUTPUT]
                                                                                              color_white    11413
                                                                                              color_green     4544
                                                                                              color_black     1419
                                                                                              color_orang        3
                                                                                              Name: shirt_colors, dtype: int64
                                                                                              

A lot of people prefer to do Ordinal-Encoding on this column, while I am hell-bent on going with One-Hot-Encoding. My view is that Ordinal Encoding will allot these colors some ordered numbers, which would imply a ranking. And there is no ranking in the first place. In other words, my model should not be thinking of color_white as 4 and color_orang as 0 or 1 or 2. Keep in mind that there is no hint of any ranking or order in the Data Description either.

                                                                                              I have the following understanding of this topic:

Values that have neither a direction nor a magnitude are Nominal Variables. For example, fruit_list = ['apple', 'orange', 'banana']. Unless there is a specific context, this set would be considered nominal. For such variables, we should perform either get_dummies or one-hot encoding.

Ordinal Variables, on the other hand, have a direction. For example, shirt_sizes_list = ['large', 'medium', 'small']. If the same fruit list has a context behind it, like price or nutritional value, i.e. something that could give the fruits in fruit_list some ranking or order, we'd call it an Ordinal Variable. For Ordinal Variables, we perform Ordinal-Encoding.

Is my understanding correct? Kindly provide your feedback; this topic has turned into a nightmare. Thank you!

                                                                                              ANSWER

                                                                                              Answered 2021-Sep-04 at 06:43

You're right. The one thing to consider when choosing between OrdinalEncoder and OneHotEncoder is whether the order of the data matters.

                                                                                              Most ML algorithms will assume that two nearby values are more similar than two distant values. This may be fine in some cases e.g., for ordered categories such as:

                                                                                              • quality = ["bad", "average", "good", "excellent"] or
                                                                                              • shirt_size = ["large", "medium", "small"]

                                                                                              but it is obviously not the case for the:

                                                                                              • color = ["white","orange","black","green"]

column (except for cases where you need to consider a spectrum, say from white to black; note that in that case the white category should be encoded as 0 and black as the highest number among your categories), or when, for example, categories 0 and 4 are more similar to each other than categories 0 and 1. To fix this issue, a common solution is to create one binary attribute per category (One-Hot encoding).
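
As a small illustration of the difference (a sketch using scikit-learn's encoders; the color values are the ones from the frequency table above):

import numpy as np
from sklearn.preprocessing import OneHotEncoder, OrdinalEncoder

colors = np.array([["color_white"], ["color_green"], ["color_black"], ["color_orang"]])

# OrdinalEncoder assigns arbitrary integers (alphabetical by default),
# which a downstream model may read as a ranking that does not exist.
print(OrdinalEncoder().fit_transform(colors).ravel())      # [3. 1. 0. 2.]

# OneHotEncoder creates one binary column per category, so no order is implied.
print(OneHotEncoder().fit_transform(colors).toarray())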

                                                                                              Source https://stackoverflow.com/questions/69052776

                                                                                              QUESTION

                                                                                              How to increase dimension-vector size of BERT sentence-transformers embedding
                                                                                              Asked 2021-Aug-15 at 13:35

I am using sentence-transformers for semantic search, but sometimes it does not understand the contextual meaning and returns wrong results, e.g. BERT problem with context/semantic search in Italian language.

By default the embedding vector of a sentence has 768 columns, so how do I increase that dimension so that it can understand the contextual meaning in more depth?

                                                                                              code:

                                                                                              # Load the BERT Model
                                                                                              from sentence_transformers import SentenceTransformer
                                                                                              model = SentenceTransformer('bert-base-nli-mean-tokens')
                                                                                              
                                                                                              # Setup a Corpus
                                                                                              # A corpus is a list with documents split by sentences.
                                                                                              
                                                                                              sentences = ['Absence of sanity', 
                                                                                                           'Lack of saneness',
                                                                                                           'A man is eating food.',
                                                                                                           'A man is eating a piece of bread.',
                                                                                                           'The girl is carrying a baby.',
                                                                                                           'A man is riding a horse.',
                                                                                                           'A woman is playing violin.',
                                                                                                           'Two men pushed carts through the woods.',
                                                                                                           'A man is riding a white horse on an enclosed ground.',
                                                                                                           'A monkey is playing drums.',
                                                                                                           'A cheetah is running behind its prey.']
                                                                                              
# Each sentence is encoded as a 1-D vector with 768 dimensions
sentence_embeddings = model.encode(sentences)  ### how can the vector dimension be increased?
                                                                                              
                                                                                              print('Sample BERT embedding vector - length', len(sentence_embeddings[0]))
                                                                                              
                                                                                              print('Sample BERT embedding vector - note includes negative values', sentence_embeddings[0])
                                                                                              

                                                                                              ANSWER

                                                                                              Answered 2021-Aug-10 at 07:39

                                                                                              Increasing the dimension of a trained model is not possible (without many difficulties and re-training the model). The model you are using was pre-trained with dimension 768, i.e., all weight matrices of the model have a corresponding number of trained parameters. Increasing the dimensionality would mean adding parameters which however need to be learned.

                                                                                              Also, the dimension of the model does not reflect the amount of semantic or context information in the sentence representation. The choice of the model dimension reflects more a trade-off between model capacity, the amount of training data, and reasonable inference speed.

                                                                                              If the model that you are using does not provide representation that is semantically rich enough, you might want to search for better models, such as RoBERTa or T5.
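As a small sketch of that direction, assuming the sentence-transformers library; the model name 'all-mpnet-base-v2' is just one example of a stronger pre-trained model, not a recommendation from the answer:

# Sketch: rather than enlarging a fixed-size model, load a different
# pre-trained SentenceTransformer and check its output dimensionality.
# 'all-mpnet-base-v2' is an example model name used here as an assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-mpnet-base-v2')        # produces 768-dimensional embeddings
embeddings = model.encode(['Absence of sanity', 'Lack of saneness'])

print(embeddings.shape)                                 # (2, 768)
print(util.cos_sim(embeddings[0], embeddings[1]))       # cosine similarity of the two sentences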

                                                                                              Source https://stackoverflow.com/questions/68686272

                                                                                              QUESTION

                                                                                              How to identify what features affect predictions result?
                                                                                              Asked 2021-Aug-11 at 15:55

I have a table with features that were used to build a model predicting whether a user will buy new insurance or not. The same table contains the probability of belonging to class 1 (will buy) and class 0 (will not buy) as predicted by this model. I don't know what kind of algorithm was used to build the model; I only have its predicted probabilities.

                                                                                              Question: how to identify what features affect these prediction results? Do I need to build correlation matrix or conduct any tests?

                                                                                              Table example:

+---------+-----+-----------+---------+--------+-----------+--------+---------+-------------+-------------+
| user_id | age | car_price | car_age | income | education | gender | crashes | probability | true_labels |
+---------+-----+-----------+---------+--------+-----------+--------+---------+-------------+-------------+
| 1       | 29  | 15600     | 3       | 20000  | 3         | 1      | 1       | 0.23        | 0           |
| 2       | 41  | 43000     | 1       | 65000  | 2         | 0      | 1       | 0.1         | 0           |
| 3       | 39  | 23500     | 5       | 43000  | 3         | 1      | 0       | 0.46        | 1           |
| 4       | 19  | 12200     | 3       | 13000  | 1         | 1      | 0       | 0.34        | 1           |
| 5       | 68  | 21900     | 2       | 31300  | 3         | 0      | 1       | 0.85        | 1           |
+---------+-----+-----------+---------+--------+-----------+--------+---------+-------------+-------------+
                                                                                              

                                                                                              ANSWER

                                                                                              Answered 2021-Aug-11 at 15:55

You could build a surrogate model on this data.

Use the features you have as x and true_labels as y, then fit a classifier.

From that model you can extract feature importances. If you want to go the extra mile, you can also bootstrap the data (refit on resampled subsets) so that the importance estimates are more stable statistically. A sketch of this approach is shown below.
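A minimal sketch with scikit-learn, assuming a random forest as the surrogate (not the original, unknown model) and a hypothetical CSV file with the columns from the table above:

# Sketch: fit a surrogate classifier on the features and true labels,
# then inspect which features drive its predictions.
# The file name and the RandomForest choice are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

df = pd.read_csv("users.csv")                  # hypothetical file with the table's columns
features = ["age", "car_price", "car_age", "income",
            "education", "gender", "crashes"]
X, y = df[features], df["true_labels"]

clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)

# Built-in impurity-based importances
print(dict(zip(features, clf.feature_importances_.round(3))))

# Permutation importances are usually more reliable, especially with correlated features
perm = permutation_importance(clf, X, y, n_repeats=20, random_state=0)
print(dict(zip(features, perm.importances_mean.round(3))))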

                                                                                              Source https://stackoverflow.com/questions/68744565

                                                                                              Community Discussions, Code Snippets contain sources that include Stack Exchange Network

                                                                                              Vulnerabilities

                                                                                              No vulnerabilities reported

                                                                                              Install Deep-Learning-Papers-Reading-Roadmap

                                                                                              You can download it from GitHub.
                                                                                              You can use Deep-Learning-Papers-Reading-Roadmap like any standard Python library. You will need to make sure that you have a development environment consisting of a Python distribution including header files, a compiler, pip, and git installed. Make sure that your pip, setuptools, and wheel are up to date. When using pip it is generally recommended to install packages in a virtual environment to avoid changes to the system.

                                                                                              Support

For any new features, suggestions, or bugs, create an issue on GitHub. If you have any questions, check for existing answers and ask new questions on the Stack Overflow community page.
                                                                                              Find more information at:
                                                                                              Find, review, and download reusable Libraries, Code Snippets, Cloud APIs from over 650 million Knowledge Items
                                                                                              Find more libraries
Explore Kits - Develop, implement, customize Projects, Custom Functions and Applications with kandi kits
                                                                                              Save this library and start creating your kit
                                                                                              CLONE
                                                                                            • HTTPS

                                                                                              https://github.com/floodsung/Deep-Learning-Papers-Reading-Roadmap.git

                                                                                            • CLI

                                                                                              gh repo clone floodsung/Deep-Learning-Papers-Reading-Roadmap

                                                                                            • sshUrl

                                                                                              git@github.com:floodsung/Deep-Learning-Papers-Reading-Roadmap.git


                                                                                              Consider Popular Machine Learning Libraries

                                                                                              tensorflow

                                                                                              by tensorflow

                                                                                              youtube-dl

                                                                                              by ytdl-org

                                                                                              models

                                                                                              by tensorflow

                                                                                              pytorch

                                                                                              by pytorch

                                                                                              keras

                                                                                              by keras-team

                                                                                              Try Top Libraries by floodsung

LearningToCompare_FSL

by floodsung (Python)

DRL-FlappyBird

by floodsung (Python)

DDPG

by floodsung (Python)

wechat_jump_end_to_end

by floodsung (Python)

DQN-Atari-Tensorflow

by floodsung (Python)

                                                                                              Compare Machine Learning Libraries with Highest Support

                                                                                              youtube-dl

                                                                                              by ytdl-org

                                                                                              scikit-learn

                                                                                              by scikit-learn

                                                                                              models

                                                                                              by tensorflow

                                                                                              tensorflow

                                                                                              by tensorflow

                                                                                              keras

                                                                                              by keras-team
