SVM Substitute Algos
by akshara Updated: Jun 13, 2022
Supervised learning uses a training set to teach models to yield the desired output. The training dataset pairs inputs with correct outputs, which allows the model to learn over time. The algorithm measures its accuracy through a loss function and adjusts until the error has been sufficiently minimized. The aim of a supervised learning algorithm is to find a mapping function from the input variable (x) to the output variable (y).
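For instance, here is a minimal sketch of that loop using scikit-learn, which is assumed here purely for illustration and is not one of the libraries covered by this kit:

# Hedged sketch: learn a mapping from inputs (x) to outputs (y) and measure the loss.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

x, y = make_classification(n_samples=200, n_features=4, random_state=0)  # toy inputs and correct outputs
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

model = LogisticRegression().fit(x_train, y_train)            # fit the mapping x -> y
print(log_loss(y_test, model.predict_proba(x_test)))          # the loss function reports the remaining error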
Support Vector Machine
A Support Vector Machine (SVM) separates classes by finding the hyperplane with the largest margin between them. Libraries that can be used for SVM are:
misvm by garydoranjr
Multiple-Instance Support Vector Machines
Python | Stars: 198 | Version: Current | License: Permissive (BSD-3-Clause)
jlibsvm by davidsoergel
Efficient training of Support Vector Machines in Java
Java | Stars: 121 | Version: Current | License: Others (Non-SPDX)
cnn-svm by AFAgarap
An Architecture Combining Convolutional Neural Network (CNN) and Linear Support Vector Machine (SVM) for Image Classification
Python | Stars: 284 | Version: v0.1.0-alpha | License: Permissive (Apache-2.0)
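For orientation, here is a minimal SVM classification sketch using scikit-learn's SVC; it is an assumed, generic illustration rather than the API of misvm, jlibsvm, or cnn-svm listed above.

# Hedged sketch with scikit-learn's SVC (not one of the libraries listed above).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

# An RBF-kernel SVM; C trades off margin width against misclassified training points.
clf = SVC(kernel="rbf", C=1.0).fit(x_train, y_train)
print(clf.score(x_test, y_test))  # mean accuracy on the held-out split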
Other classification techniques are:
Libraries are listed for each classification technique.
Naive Bayes
Naive Bayes is a probabilistic classifier: it predicts the class of an object from the probability of that object belonging to each class, assuming the features are independent of one another. It is mainly used in text classification, which typically involves a high-dimensional training dataset.
Java-Naive-Bayes-Classifier by ptnplanet
A Java classifier based on the naive Bayes approach, complete with Maven support and a runnable example.
Java | Stars: 292 | Version: 1.0.7 | License: No License
nbayes by oasic
A robust, full-featured Ruby implementation of Naive Bayes
Ruby | Stars: 147 | Version: Current | License: Permissive (MIT)
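As a rough illustration of the text-classification use case, the sketch below uses scikit-learn's MultinomialNB with bag-of-words features; the toy data and the library choice are assumptions, not part of the projects listed above.

# Hedged sketch: Naive Bayes on a tiny, made-up text dataset.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = ["free prize money", "meeting at noon", "win money now", "lunch meeting today"]
labels = ["spam", "ham", "spam", "ham"]

vec = CountVectorizer()                  # bag-of-words gives the high-dimensional input
x = vec.fit_transform(texts)
clf = MultinomialNB().fit(x, labels)
print(clf.predict(vec.transform(["win a free prize"])))  # most likely class, probably 'spam'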
K-Nearest Neighbor
The K-NN algorithm assumes that similar data points lie close together: it assigns new data to the category it is most similar to among the available categories. K-NN stores all of the available data and classifies a new data point based on its similarity to the stored points.
libnabo by ethz-asl
A fast K Nearest Neighbor library for low-dimensional spaces
C++ | Stars: 383 | Version: 1.0.7 | License: Permissive (BSD-3-Clause)
spark-knn by saurfang
k-Nearest Neighbors algorithm on Spark
Scala | Stars: 208 | Version: Current | License: Permissive (Apache-2.0)
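A minimal K-NN sketch using scikit-learn's KNeighborsClassifier (an assumed stand-in, not the API of libnabo or spark-knn above):

# Hedged sketch: a new point takes the majority class of its k nearest stored neighbors.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5).fit(x_train, y_train)  # "fit" mostly stores the training data
print(knn.score(x_test, y_test))  # accuracy from distance-based similarity to stored points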
Random Forest
Random Forest is an ensemble classifier that trains a number of decision trees on various subsets of the given dataset and combines their predictions, typically by majority vote, to improve predictive accuracy.
thundergbm by Xtra-Computing
ThunderGBM: Fast GBDTs and Random Forests on GPUs
C++ | Stars: 663 | Version: 0.3.2 | License: Permissive (Apache-2.0)
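A minimal Random Forest sketch using scikit-learn's RandomForestClassifier (assumed for illustration; ThunderGBM above has its own API):

# Hedged sketch: many decision trees on random subsets, combined by majority vote.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(x_train, y_train)
print(rf.score(x_test, y_test))  # ensemble accuracy on the held-out split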
Decision Tree
Decision Tree is a tree-structured classifier, where internal nodes represent the features of a dataset, branches represent the decision rules and each leaf node represents the outcome.
catboost by catboost
A fast, scalable, high-performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks in Python, R, Java, and C++. Supports computation on CPU and GPU.
Python | Stars: 7169 | Version: v1.2 | License: Permissive (Apache-2.0)
dtreeviz by parrt
A Python library for decision tree visualization and model interpretation.
Jupyter Notebook | Stars: 2537 | Version: 2.2.1 | License: Permissive (MIT)
CloudForest by ryanbressler
Ensembles of decision trees in Go/Golang.
Go | Stars: 724 | Version: Current | License: Others (Non-SPDX)
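A minimal decision tree sketch using scikit-learn's DecisionTreeClassifier (assumed for illustration; libraries such as dtreeviz above focus on visualizing fitted trees like this one):

# Hedged sketch: internal nodes test features, branches encode rules, leaves hold the outcomes.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

x, y = load_iris(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(x_train, y_train)
print(export_text(tree))           # the learned feature tests and leaf outcomes
print(tree.score(x_test, y_test))  # accuracy on the held-out split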