
dANN-core | Artificial Intelligence and Artificial Genetics library | Machine Learning library

by Syncleus | Java | Version: Current | License: No License

kandi X-RAY | dANN-core Summary

dANN-core is a Java library typically used in Artificial Intelligence and Machine Learning applications. dANN-core has no reported bugs and no reported vulnerabilities, a build file is available, and it has low support. You can download it from GitHub.

dANN is an Artificial Intelligence and Artificial Genetics library targeted at employing conventional algorithms as well as acting as a platform for research and development of novel algorithms. As new algorithms are developed and proven effective, they are integrated into the core library. It is written in Java and actively developed by a small team. Our intentions are twofold: first, to provide a powerful interface for programs to incorporate conventional machine learning technology into their code; second, to act as a testing ground for research and development of new AI concepts. We provide new AI technology we have developed alongside the latest algorithms already on the market. In the spirit of modular programming, the library also provides access to the primitive components, giving you greater control when implementing your own unique algorithms. You can either let our library do all the work, or you can override any step along the way. dANN 2.x was completely rewritten for dANN 3.x, so the latest version of dANN varies significantly from this version. For more information, check out the main dANN site.

Support

  • dANN-core has a low active ecosystem.
  • It has 11 stars, 3 forks, and 2 watchers.
  • It had no major release in the last 12 months.
  • dANN-core has no issues reported. There are no pull requests.
  • It has a neutral sentiment in the developer community.
  • The latest version of dANN-core is current.

Quality

  • dANN-core has 0 bugs and 0 code smells.

Security

  • dANN-core has no vulnerabilities reported, and its dependent libraries have no vulnerabilities reported.
  • dANN-core code analysis shows 0 unresolved vulnerabilities.
  • There are 0 security hotspots that need review.

License

  • dANN-core does not have a standard license declared.
  • Check the repository for any license declaration and review the terms closely.
  • Without a license, all rights are reserved, and you cannot use the library in your applications.

Reuse

  • dANN-core releases are not available. You will need to build from source code and install.
  • Build file is available. You can build the component from source.
  • Installation instructions, examples and code snippets are available.
Top functions reviewed by kandi - BETA

kandi has reviewed dANN-core and lists the functions below as its top functions. This is intended to give you an instant insight into the functionality dANN-core implements, and to help you decide whether it suits your requirements.

  • Returns the current positions of the molecule.
  • Increments the state by the given state.
  • Computes the rank of the graph.
  • Aligns the coordinates.
  • Traverses a single edge.
  • Computes the determinant.
  • Computes the conditional probability for this network.
  • Converts a set of Strings to a list of Strings.
  • Generates the next random generation.
  • Calculates the geometric mean of the complex numbers.

Get all kandi verified functions for this library.

dANN-core Key Features

• Graph Theory
  • Search / Path Finding: A*, Dijkstra, Bellman-Ford, Johnson's, Floyd-Warshall
  • Optimization: Hill Climbing, Local Search
  • Graph Drawing: Hyperassociative Map, 3D Hyperassociative Map Visualization
  • Cycle Detection: Colored Depth-First Search, Exhaustive Depth-First Search
  • Minimal Spanning Tree Detection (MST): Kruskal, Prim
  • Topological Sort Algorithm
• Evolutionary Algorithms: Genetic Algorithms, Genetic Wavelets
• Naive Classifiers: Naive Bayes Classifier, Naive Fisher Classifier
• Data Processing
  • Signal Processing: Cooley-Tukey Fast Fourier Transform
  • Language Processing: Word Parsing, Word Stemming (Porter Stemming Algorithm)
  • Data Interrelational Graph
• Graphical Models
  • Markov Random Fields: Dynamic Markov Random Fields
  • Bayesian Networks: Dynamic Bayesian Networks, Dynamic Graphical Models
  • Hidden Markov Models: Baum-Welch Algorithm, Layered Hidden Markov Models, Hierarchical Hidden Markov Models
• Artificial Neural Networks: Activation Function Collection, Backpropagation Networks, Feedforward Networks, Self-Organizing Maps, Realtime Neural Networks, Spiking Neural Networks (Izhikevich Algorithm), 3D Network Visualization
• Mathematics
  • Statistics: Markov Chains, Markov Chain Monte Carlo (Parameter Estimation)
  • Counting: Combinations, Permutations (Lexicographic, Johnson-Trotter Algorithm)
  • Complex Numbers, N-Dimensional Vectors
  • Greatest Common Denominator: Binary Algorithm, Euclidean Algorithm, Extended Euclidean Algorithm
  • Linear Algebra: Cholesky Decomposition, Hessenberg Decomposition, Eigenvalue Decomposition, LU Decomposition, QR Decomposition, Singular Value Decomposition

dANN-core Examples and Code Snippets

Maven Dependency

<dependency>
    <groupId>com.syncleus.dann</groupId>
    <artifactId>dann-core</artifactId>
    <version>2.1</version>
</dependency>

Getting Started

TrainableLanguageNaiveClassifier<Integer> classifier =
     new SimpleLanguageNaiveClassifier<Integer>();

Obtaining the Source

git clone http://gerrit.syncleus.com/dANN-core


Community Discussions

Trending Discussions on Machine Learning
• Using RNN Trained Model without pytorch installed
• Flux.jl : Customizing optimizer
• How can I check a confusion_matrix after fine-tuning with custom datasets?
• CUDA OOM - But the numbers don't add upp?
• How to compare baseline and GridSearchCV results fair?
• Getting Error 524 while running jupyter lab in google cloud platform
• TypeError: brain.NeuralNetwork is not a constructor
• Ordinal Encoding or One-Hot-Encoding
• How to increase dimension-vector size of BERT sentence-transformers embedding
• How to identify what features affect predictions result?

QUESTION

Using RNN Trained Model without pytorch installed

Asked 2022-Feb-28 at 20:17

I have trained an RNN model with pytorch. I need to use the model for prediction in an environment where I'm unable to install pytorch because of some strange dependency issue with glibc. However, I can install numpy and scipy and other libraries. So, I want to use the trained model, with the network definition, without pytorch.

I have the weights of the model as I save the model with its state dict and weights in the standard way, but I can also save it using just json/pickle files or similar.

I also have the network definition, which depends on pytorch in a number of ways. This is my RNN network definition.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import random

torch.manual_seed(1)
random.seed(1)
device = torch.device('cpu')

class RNN(nn.Module):
  def __init__(self, input_size, hidden_size, output_size, num_layers, matching_in_out=False, batch_size=1):
    super(RNN, self).__init__()
    self.input_size = input_size
    self.hidden_size = hidden_size
    self.output_size = output_size
    self.num_layers = num_layers
    self.batch_size = batch_size
    self.matching_in_out = matching_in_out  # length of input vector matches the length of output vector
    self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
    self.hidden2out = nn.Linear(hidden_size, output_size)
    self.hidden = self.init_hidden()
  def forward(self, feature_list):
    feature_list = torch.tensor(feature_list)
    if self.matching_in_out:
      lstm_out, _ = self.lstm(feature_list.view(len(feature_list), 1, -1))
      output_space = self.hidden2out(lstm_out.view(len(feature_list), -1))
      output_scores = torch.sigmoid(output_space)  # we'll need to check if we need this sigmoid
      return output_scores
    else:
      for i in range(len(feature_list)):
        cur_ft_tensor = feature_list[i]
        cur_ft_tensor = cur_ft_tensor.view([1, 1, self.input_size])
        lstm_out, self.hidden = self.lstm(cur_ft_tensor, self.hidden)
        outs = self.hidden2out(lstm_out)
      return outs
  def init_hidden(self):
    return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device),
            torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device))

I am aware of this question, but I'm willing to go as low level as possible. I can work with numpy arrays instead of tensors, and reshape instead of view, and I don't need a device setting.

Based on the class definition above, what I can see here is that I only need the following components from torch to get an output from the forward function:

• nn.LSTM
• nn.Linear
• torch.sigmoid

I think I can easily implement the sigmoid function using numpy. However, can I have some implementation for nn.LSTM and nn.Linear using something not involving pytorch? Also, how will I use the weights from the state dict in the new class?

So, the question is, how can I "translate" this RNN definition into a class that doesn't need pytorch, and how to use the state dict weights for it? Alternatively, is there a "light" version of pytorch that I can use just to run the model and yield a result?

EDIT

I think it might be useful to include the numpy/scipy equivalent for both nn.LSTM and nn.Linear. It would help us compare the numpy output to the torch output for the same code, and give us some modular code/functions to use. Specifically, a numpy equivalent for the following would be great:

rnn = nn.LSTM(10, 20, 2)
input = torch.randn(5, 3, 10)
h0 = torch.randn(2, 3, 20)
c0 = torch.randn(2, 3, 20)
output, (hn, cn) = rnn(input, (h0, c0))
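For reference, a single-layer forward pass of nn.LSTM can be sketched in plain numpy directly from the state-dict weights. This is an illustrative sketch, not part of the answer below: `lstm_forward` is a name introduced here, the dict keys mirror PyTorch's state-dict keys for layer 0 (`weight_ih_l0`, `weight_hh_l0`, `bias_ih_l0`, `bias_hh_l0`), and PyTorch packs the four gates in the order input, forget, cell, output. The two-layer `nn.LSTM(10, 20, 2)` above would be handled by applying this per layer, feeding each layer's output sequence into the next.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_forward(x, h0, c0, weights):
    """Single-layer LSTM forward pass mirroring torch.nn.LSTM.

    x: (seq_len, batch, input_size); h0, c0: (batch, hidden_size).
    weights: dict with PyTorch state-dict keys 'weight_ih_l0' (4H, I),
    'weight_hh_l0' (4H, H), 'bias_ih_l0' and 'bias_hh_l0' (4H,).
    PyTorch packs the gates in the order i, f, g, o.
    """
    W_ih, W_hh = weights["weight_ih_l0"], weights["weight_hh_l0"]
    b = weights["bias_ih_l0"] + weights["bias_hh_l0"]
    H = W_hh.shape[1]
    h, c = h0, c0
    outputs = []
    for t in range(x.shape[0]):
        gates = x[t] @ W_ih.T + h @ W_hh.T + b   # (batch, 4H)
        i = sigmoid(gates[:, 0*H:1*H])           # input gate
        f = sigmoid(gates[:, 1*H:2*H])           # forget gate
        g = np.tanh(gates[:, 2*H:3*H])           # candidate cell state
        o = sigmoid(gates[:, 3*H:4*H])           # output gate
        c = f * c + i * g                        # new cell state
        h = o * np.tanh(c)                       # new hidden state
        outputs.append(h)
    return np.stack(outputs), (h, c)
```

The state-dict arrays can be obtained once with `model.state_dict()` and saved via pickle/json, so the environment running this sketch never needs torch.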

and also for linear:

m = nn.Linear(20, 30)
input = torch.randn(128, 20)
output = m(input)
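The linear case is simpler, since nn.Linear is just an affine map y = x W^T + b. A numpy stand-in (a sketch; `linear_forward` is a name introduced here, and the array shapes follow the layer's 'weight' and 'bias' state-dict entries) could look like:

```python
import numpy as np

def linear_forward(x, weight, bias):
    """Equivalent of torch.nn.Linear: y = x @ W.T + b.

    weight: (out_features, in_features), as stored under the layer's
    'weight' state-dict key; bias: (out_features,).
    """
    return x @ weight.T + bias
```

For the `nn.Linear(20, 30)` example above, `weight` has shape (30, 20), `bias` has shape (30,), and an input of shape (128, 20) yields an output of shape (128, 30).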

ANSWER

Answered 2022-Feb-17 at 10:47

You should try to export the model using torch.onnx. The page gives you an example that you can start with.

An alternative is to use TorchScript, but that requires the torch libraries.

Both of these can be run without Python. You can load a TorchScript model in a C++ application: https://pytorch.org/tutorials/advanced/cpp_export.html

ONNX is much more portable, and you can use it from languages such as C#, Java, or JavaScript via https://onnxruntime.ai/ (even in the browser).

A running example

Below is your example, modified slightly to get past the errors I found.

Notice that with tracing, any if/elif/else, for, or while constructs will be unrolled.

import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
import random

torch.manual_seed(1)
random.seed(1)
device = torch.device('cpu')

class RNN(nn.Module):
  def __init__(self, input_size, hidden_size, output_size, num_layers, matching_in_out=False, batch_size=1):
    super(RNN, self).__init__()
    self.input_size = input_size
    self.hidden_size = hidden_size
    self.output_size = output_size
    self.num_layers = num_layers
    self.batch_size = batch_size
    self.matching_in_out = matching_in_out  # length of input vector matches the length of output vector
    self.lstm = nn.LSTM(input_size, hidden_size, num_layers)
    self.hidden2out = nn.Linear(hidden_size, output_size)
  def forward(self, x, h0, c0):
    lstm_out, (hidden_a, hidden_b) = self.lstm(x, (h0, c0))
    outs = self.hidden2out(lstm_out)
    return outs, (hidden_a, hidden_b)
  def init_hidden(self):
    return (torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach(),
            torch.rand(self.num_layers, self.batch_size, self.hidden_size).to(device).detach())

# convert the arguments passed during the onnx.export call
class MWrapper(nn.Module):
    def __init__(self, model):
        super(MWrapper, self).__init__()
        self.model = model
    def forward(self, kwargs):
        return self.model(**kwargs)

Run an example

rnn = RNN(10, 10, 10, 3)
X = torch.randn(3, 1, 10)
h0, c0 = rnn.init_hidden()
print(rnn(X, h0, c0)[0])

Use the same inputs to trace the model and export an ONNX file

torch.onnx.export(MWrapper(rnn), {'x': X, 'h0': h0, 'c0': c0}, 'rnn.onnx',
                  dynamic_axes={'x': {1: 'N'},
                                'c0': {1: 'N'},
                                'h0': {1: 'N'}},
                  input_names=['x', 'h0', 'c0'],
                  output_names=['y', 'hn', 'cn'])

Notice that you can use symbolic values for the dimensions of some axes of some inputs. Unspecified dimensions will be fixed with the values from the traced inputs. By default, LSTM uses dimension 1 as the batch dimension.

Next we load the ONNX model and pass it the same inputs:

import onnxruntime
ort_model = onnxruntime.InferenceSession('rnn.onnx')
print(ort_model.run(['y'], {'x': X.numpy(), 'c0': c0.numpy(), 'h0': h0.numpy()}))

Source: https://stackoverflow.com/questions/71146140

Community Discussions and Code Snippets contain sources that include the Stack Exchange Network.

Vulnerabilities

No vulnerabilities reported

Install dANN-core

Several excellent examples are listed on the dANN main site. dANN can do many things, so no single example could demonstrate the full power of the library; instead, we will focus on a simple naive classifier example. Naive classifiers are powerful yet simple tools used to classify data; they are the most common tool used in spam filters, for example. The following example shows how to use a simple naive classifier, though it could easily be modified to work with dANN's Bayes and Fisher classifier implementations.

The first step is to create a new classifier we can work with. This creates a classifier where items are classified into categories represented by Integer types. Another classifier to consider is StemmingLanguageNaiveClassifier. It is used in exactly the same way, except that it applies the Porter Stemming Algorithm to each word, causing words like "running" and "run" to be treated as the same feature. If you want to use this classifier instead, you could do the following.

Next the real magic happens: we train the classifier. In this example there are only two categories we train for, 1 and 2. You'll notice we intentionally trained the classifier with several obvious patterns: "money" appears most often in category 1, while "nonsense" and "space" prefer category 2. This shows up when we ask for some classifications in the next step.

As you can see, this simple class will classify the features (words) of a phrase into categories it has previously learned. Not only can it classify the features within an item, but also the items themselves.
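dANN's Java training and classification calls are not reproduced verbatim on this page, but the naive-classifier idea it describes can be sketched independently. The following Python sketch is illustrative only: `NaiveWordClassifier` and its methods are hypothetical names, not dANN's API. It mirrors the money/nonsense/space example above by scoring each word against per-category relative frequencies.

```python
from collections import Counter, defaultdict

class NaiveWordClassifier:
    """Toy naive word classifier: picks the category in which a word
    has the highest relative frequency. Illustrative only; dANN's
    SimpleLanguageNaiveClassifier exposes a richer, typed Java API."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # category -> word counts
        self.category_totals = Counter()         # category -> total words seen

    def train(self, phrase, category):
        # Count each word of the training phrase under the given category.
        for word in phrase.lower().split():
            self.word_counts[category][word] += 1
            self.category_totals[category] += 1

    def classify_word(self, word):
        # Return the category where the word's relative frequency is highest.
        best, best_score = None, 0.0
        for category, counts in self.word_counts.items():
            score = counts[word.lower()] / self.category_totals[category]
            if score > best_score:
                best, best_score = category, score
        return best

# Train with obvious patterns, as in the walkthrough above.
classifier = NaiveWordClassifier()
classifier.train("money money prize", 1)
classifier.train("nonsense space stars", 2)
```

After this training, "money" classifies into category 1 and "space" into category 2, matching the behavior the walkthrough describes.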

Support

For any new features, suggestions, and bugs, create an issue on GitHub. If you have any questions, check and ask on the Stack Overflow community page.

© 2022 Open Weaver Inc.