
purdue-fastr | FastR implements the R Language | Machine Learning library

by allr | Java | Version: Current | License: Non-SPDX


kandi X-RAY | purdue-fastr Summary

purdue-fastr is a Java library typically used in Artificial Intelligence, Machine Learning, and Deep Learning applications. purdue-fastr has no known bugs and no reported vulnerabilities, has a build file available, and has low support. However, purdue-fastr has a Non-SPDX license. You can download it from GitHub.
FastR implements the R Language. Currently, FastR can run the R implementation of the Language Shootout Benchmarks and the Benchmark 25 suite.

Support

  • purdue-fastr has a low-activity ecosystem.
  • It has 270 stars and 39 forks. There are 34 watchers for this library.
  • It had no major release in the last 12 months.
  • There are 2 open issues and 2 closed issues. On average, issues are closed in 3 days. There are no pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of purdue-fastr is current.

Quality

  • purdue-fastr has 0 bugs and 0 code smells.

Security

  • purdue-fastr has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • purdue-fastr code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • purdue-fastr has a Non-SPDX License.
  • A Non-SPDX license may be an open-source license that is simply not SPDX-compliant, or a non-open-source license; review it closely before use.

Reuse

  • purdue-fastr releases are not available. You will need to build from source code and install.
  • Build file is available. You can build the component from source.
  • Installation instructions are available. Examples and code snippets are not available.
Top functions reviewed by kandi - BETA
Top functions reviewed by kandi - BETA

kandi has reviewed purdue-fastr and identified the functions below as its top functions. This is intended to give you an instant insight into the functionality purdue-fastr implements and to help you decide whether it suits your requirements.

  • Finds the tokens of an expression.
  • Formats (sprintf) a number of arguments.
  • Combines parameter names.
  • Computes a coefine of a given ROCNode.
  • Initializes primitive types.
  • Assigns dots arguments to a single frame.
  • Converts an array of arguments to an RAny object.
  • Computes and returns the position of the argument expressions.
  • Initializes the arguments for the argument.


                      Community Discussions

                      Trending Discussions on Machine Learning
                      • Using RNN Trained Model without pytorch installed
                      • Flux.jl : Customizing optimizer
                      • How can I check a confusion_matrix after fine-tuning with custom datasets?
                      • CUDA OOM - But the numbers don't add upp?
                      • How to compare baseline and GridSearchCV results fair?
                      • Getting Error 524 while running jupyter lab in google cloud platform
                      • TypeError: brain.NeuralNetwork is not a constructor
                      • Ordinal Encoding or One-Hot-Encoding
                      • How to increase dimension-vector size of BERT sentence-transformers embedding
                      • How to identify what features affect predictions result?

                      QUESTION

                      Using RNN Trained Model without pytorch installed

                      Asked 2022-Feb-28 at 20:17

                      I have trained an RNN model with pytorch. I need to use the model for prediction in an environment where I'm unable to install pytorch because of some strange dependency issue with glibc. However, I can install numpy and scipy and other libraries. So, I want to use the trained model, with the network definition, without pytorch.

                      I have the weights of the model as I save the model with its state dict and weights in the standard way, but I can also save it using just json/pickle files or similar.

                      I also have the network definition, which depends on pytorch in a number of ways. This is my RNN network definition.

                      import torch
                      import torch.nn as nn
                      import torch.nn.functional as F
                      import torch.optim as optim
                      import random
                      
                      torch.manual_seed(1)
                      random.seed(1)
                      device = torch.device('cpu')
                      
                      class RNN(nn.Module):
                        def __init__(self, input_size, hidden_size, output_size,num_layers, matching_in_out=False, batch_size=1):
                          super(RNN, self).__init__()
                          self.input_size = input_size
                          self.hidden_size = hidden_size
                          self.output_size = output_size
                          self.num_layers = num_layers
                          self.batch_size = batch_size
                          self.matching_in_out = matching_in_out #length of input vector matches the length of output vector 
                          self.lstm = nn.LSTM(input_size, hidden_size,num_layers)
                          self.hidden2out = nn.Linear(hidden_size, output_size)
                          self.hidden = self.init_hidden()
                        def forward(self, feature_list):
                          feature_list=torch.tensor(feature_list)
                          
                          if self.matching_in_out:
                            lstm_out, _ = self.lstm( feature_list.view(len( feature_list), 1, -1))
                            output_space = self.hidden2out(lstm_out.view(len( feature_list), -1))
                            output_scores = torch.sigmoid(output_space) #we'll need to check if we need this sigmoid
                            return output_scores #output_scores
                          else:
                            for i in range(len(feature_list)):
                              cur_ft_tensor=feature_list[i]#.view([1,1,self.input_size])
                              cur_ft_tensor=cur_ft_tensor.view([1,1,self.input_size])
                              lstm_out, self.hidden = self.lstm(cur_ft_tensor, self.hidden)
                              outs=self.hidden2out(lstm_out)
                            return outs
                        def init_hidden(self):
                          #return torch.rand(self.num_layers, self.batch_size, self.hidden_size)
                          return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device),
                                  torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device))
                      

I am aware of this question, but I'm willing to go as low-level as possible. I can work with numpy arrays instead of tensors, and reshape instead of view, and I don't need a device setting.

                      Based on the class definition above, what I can see here is that I only need the following components from torch to get an output from the forward function:

                      • nn.LSTM
                      • nn.Linear
                      • torch.sigmoid

                      I think I can easily implement the sigmoid function using numpy. However, can I have some implementation for the nn.LSTM and nn.Linear using something not involving pytorch? Also, how will I use the weights from the state dict into the new class?

                      So, the question is, how can I "translate" this RNN definition into a class that doesn't need pytorch, and how to use the state dict weights for it? Alternatively, is there a "light" version of pytorch, that I can use just to run the model and yield a result?

                      EDIT

I think it might be useful to include the numpy/scipy equivalent for both nn.LSTM and nn.Linear. It would help us compare the numpy output to the torch output for the same code, and give us some modular code/functions to use. Specifically, a numpy equivalent of the following would be great:

                      rnn = nn.LSTM(10, 20, 2)
                      input = torch.randn(5, 3, 10)
                      h0 = torch.randn(2, 3, 20)
                      c0 = torch.randn(2, 3, 20)
                      output, (hn, cn) = rnn(input, (h0, c0))
                      

                      and also for linear:

                      m = nn.Linear(20, 30)
                      input = torch.randn(128, 20)
                      output = m(input)
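
For reference, both layers can be written with plain NumPy. The sketch below shows a minimal single-layer LSTM forward pass plus the affine map behind nn.Linear, following PyTorch's gate ordering (i, f, g, o) and weight shapes. The random weights here are only placeholders; in practice you would load the arrays exported from the trained model's state dict (keys such as lstm.weight_ih_l0 and hidden2out.weight). The function names lstm_forward and linear are illustrative, not part of any library.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x, h0, c0, W_ih, W_hh, b_ih, b_hh):
    """Single-layer LSTM forward pass mirroring torch.nn.LSTM conventions.

    x: (seq_len, batch, input_size); h0, c0: (batch, hidden_size).
    W_ih: (4*hidden, input), W_hh: (4*hidden, hidden), biases: (4*hidden,),
    with the gates stacked in PyTorch's order: i, f, g, o.
    """
    H = h0.shape[-1]
    h, c = h0, c0
    outputs = []
    for t in range(x.shape[0]):
        gates = x[t] @ W_ih.T + b_ih + h @ W_hh.T + b_hh
        i = sigmoid(gates[:, 0 * H:1 * H])   # input gate
        f = sigmoid(gates[:, 1 * H:2 * H])   # forget gate
        g = np.tanh(gates[:, 2 * H:3 * H])   # cell candidate
        o = sigmoid(gates[:, 3 * H:4 * H])   # output gate
        c = f * c + i * g                    # new cell state
        h = o * np.tanh(c)                   # new hidden state
        outputs.append(h)
    return np.stack(outputs), (h, c)

def linear(x, weight, bias):
    # torch.nn.Linear is just an affine map: y = x @ W^T + b
    return x @ weight.T + bias

# Placeholder random weights; in practice load the numpy arrays exported
# from the trained model's state dict (e.g. lstm.weight_ih_l0).
rng = np.random.default_rng(0)
I, H = 10, 20
W_ih = 0.1 * rng.standard_normal((4 * H, I))
W_hh = 0.1 * rng.standard_normal((4 * H, H))
b_ih = np.zeros(4 * H)
b_hh = np.zeros(4 * H)

x = rng.standard_normal((5, 3, I))           # seq_len=5, batch=3
h0 = np.zeros((3, H))
c0 = np.zeros((3, H))
out, (hn, cn) = lstm_forward(x, h0, c0, W_ih, W_hh, b_ih, b_hh)
print(out.shape)                             # (5, 3, 20)

m_w = rng.standard_normal((30, 20))          # like nn.Linear(20, 30)
m_b = rng.standard_normal(30)
y = linear(rng.standard_normal((128, 20)), m_w, m_b)
print(y.shape)                               # (128, 30)
```

A multi-layer LSTM can be sketched by feeding each layer's output sequence into the next layer's weights; PyTorch stores those under weight_ih_l1, weight_hh_l1, and so on.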
                      

                      ANSWER

                      Answered 2022-Feb-17 at 10:47

You should try to export the model using torch.onnx. The torch.onnx documentation page gives you an example that you can start with.

                      An alternative is to use TorchScript, but that requires torch libraries.

Both of these can be run without Python. You can load a TorchScript model in a C++ application: https://pytorch.org/tutorials/advanced/cpp_export.html

ONNX is much more portable, and you can use it from languages such as C#, Java, or JavaScript via https://onnxruntime.ai/ (even in the browser).

                      A running example

Here I modify your example a little to fix the errors I found.

Notice that with tracing, any if/elif/else, for, or while construct will be unrolled.

                      import torch
                      import torch.nn as nn
                      import torch.nn.functional as F
                      import torch.optim as optim
                      import random
                      
                      torch.manual_seed(1)
                      random.seed(1)
                      device = torch.device('cpu')
                      
                      class RNN(nn.Module):
                        def __init__(self, input_size, hidden_size, output_size,num_layers, matching_in_out=False, batch_size=1):
                          super(RNN, self).__init__()
                          self.input_size = input_size
                          self.hidden_size = hidden_size
                          self.output_size = output_size
                          self.num_layers = num_layers
                          self.batch_size = batch_size
                          self.matching_in_out = matching_in_out #length of input vector matches the length of output vector 
                          self.lstm = nn.LSTM(input_size, hidden_size,num_layers)
                          self.hidden2out = nn.Linear(hidden_size, output_size)
                        def forward(self, x, h0, c0):
                          lstm_out, (hidden_a, hidden_b) = self.lstm(x, (h0, c0))
                          outs=self.hidden2out(lstm_out)
                          return outs, (hidden_a, hidden_b)
                        def init_hidden(self):
                          #return torch.rand(self.num_layers, self.batch_size, self.hidden_size)
                          return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach(),
                                  torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach())
                      
                      # convert the arguments passed during onnx.export call
                      class MWrapper(nn.Module):
                          def __init__(self, model):
                              super(MWrapper, self).__init__()
                              self.model = model;
                          def forward(self, kwargs):
                              return self.model(**kwargs)
                      

                      Run an example

                      rnn = RNN(10, 10, 10, 3)
                      X = torch.randn(3,1,10)
                      h0,c0  = rnn.init_hidden()
                      print(rnn(X, h0, c0)[0])
                      

                      Use the same input to trace the model and export an onnx file

                      
                      torch.onnx.export(MWrapper(rnn), {'x':X,'h0':h0,'c0':c0}, 'rnn.onnx', 
                                        dynamic_axes={'x':{1:'N'},
                                                     'c0':{1: 'N'},
                                                     'h0':{1: 'N'}
                                                     },
                                        input_names=['x', 'h0', 'c0'],
                                        output_names=['y', 'hn', 'cn']
                                       )
                      

                      Notice that you can use symbolic values for the dimensions of some axes of some inputs. Unspecified dimensions will be fixed with the values from the traced inputs. By default LSTM uses dimension 1 as batch.

                      Next we load the ONNX model and pass the same inputs

                      import onnxruntime
                      ort_model = onnxruntime.InferenceSession('rnn.onnx')
                      print(ort_model.run(['y'], {'x':X.numpy(), 'c0':c0.numpy(), 'h0':h0.numpy()}))
                      

                      Source https://stackoverflow.com/questions/71146140

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

                      Vulnerabilities

                      No vulnerabilities reported

                      Install purdue-fastr

  • download the latest code: wget https://github.com/allr/fastr/archive/master.zip
  • unzip it: unzip master.zip
  • build: cd fastr-master ; ant
  • run the console: ./r.sh
  • run the binarytrees benchmark for size 5: ./r.sh --args 5 -f test/r/shootout/binarytrees/binarytrees.r

To run the benchmarks from the Benchmark 25 suite, and for the best performance of all benchmarks, build the native glue code that links FastR to the GNU-R math library, the system math library, and OpenBLAS. The build scripts are tested on Ubuntu 13.10; any platform supported by GNU-R and Java could be supported by FastR.

  • install Oracle JDK8 (for best performance); if you must use JDK7, customize native/netlib-java/build.sh
  • set JAVA_HOME and PATH accordingly
  • follow the steps in Quick Start
  • install the Ubuntu packages r-base, r-mathlib, and libopenblas-base
  • build the glue code for the system libraries and GNU-R: cd native ; ./build.sh
  • build the glue code for native BLAS and LAPACK: cd netlib-java ; ./build.sh
  • check that the glue code can be loaded: cd ../.. ; ./nr.sh should print: Using LAPACK: org.netlib.lapack.NativeLAPACK, Using BLAS: org.netlib.blas.NativeBLAS, Using GNUR: yes, Using System libraries (C/M): yes, Using MKL: not available
  • run the matfunc-1 benchmark: ./nr.sh -f test/r/benchmark25/perfres/b25-matfunc-1.r

To confirm that the OpenBLAS library is used, run the matcal-4 benchmark under the system profiler: perf record ./nr.sh -f test/r/benchmark25/perfres/b25-matcal-4.r. Check with perf report that DGEMM from OpenBLAS is used, e.g. dgemm_kernel_SANDYBRIDGE from libopenblas.so.0. Also expect to see the random number generator, e.g. qnorm5 from libRmath.so.1.0.0.

                      Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask them on the community page at Stack Overflow.

                      • © 2022 Open Weaver Inc.