Supervised learning uses a labeled training set to teach a model to produce the desired output. The training dataset contains inputs paired with correct outputs, which allow the model to learn over time: the algorithm measures its error with a loss function and adjusts its parameters until that error is sufficiently small. The aim of a supervised learning algorithm is to find a mapping function from the input variable (x) to the output variable (y).
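The loop described above (predict, measure loss, adjust) can be sketched in a few lines of plain Python. This is an illustrative toy, not any particular library's API: it fits a single weight w so that y = w * x by gradient descent on mean squared error, using made-up data.

```python
# Minimal sketch of supervised learning: inputs xs, correct outputs ys
# (synthetic data where y = 3 * x), and a loss-driven update of w.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 6.0, 9.0, 12.0]

w = 0.0    # model parameter, initially wrong
lr = 0.01  # learning rate (illustrative choice)

for _ in range(500):
    # gradient of the mean squared error loss with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad  # adjust until the error is sufficiently minimized

print(round(w, 3))  # converges toward 3.0, the true mapping x -> y
```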
Support Vector Machine

Support Vector Machine (SVM) is a supervised learning algorithm that finds the hyperplane that best separates the classes in the feature space, maximizing the margin between the nearest points of each class (the support vectors).
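As a minimal sketch, assuming scikit-learn is installed, its `SVC` estimator trains an SVM classifier; the two clusters of 2-D points below are synthetic:

```python
# Hedged example: SVC is scikit-learn's SVM classifier; data is made up.
from sklearn.svm import SVC

# toy 2-D points: class 0 near the origin, class 1 far from it
X = [[0, 0], [1, 1], [0, 1], [5, 5], [6, 5], [5, 6]]
y = [0, 0, 0, 1, 1, 1]

clf = SVC(kernel="linear")  # linear kernel: the hyperplane is a straight line
clf.fit(X, y)

print(clf.predict([[0.5, 0.5], [5.5, 5.5]]))  # one point from each cluster
```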
Other Classification Techniques are:
Naive Bayes is a probabilistic classifier: it predicts the class of an object from the probabilities of its features. It is mainly used in text classification, where the training data is high-dimensional.
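A small text-classification sketch, assuming scikit-learn is installed: `CountVectorizer` turns documents into high-dimensional word-count features and `MultinomialNB` is the Naive Bayes variant commonly used for such counts. The spam/ham documents are invented for illustration.

```python
# Hedged example with synthetic documents; labels are illustrative.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

docs = ["free prize money now", "win money free",
        "meeting at noon", "lunch meeting tomorrow"]
labels = ["spam", "spam", "ham", "ham"]

vec = CountVectorizer()
X = vec.fit_transform(docs)  # high-dimensional bag-of-words features

clf = MultinomialNB()
clf.fit(X, labels)

print(clf.predict(vec.transform(["free money prize"])))  # word probabilities favor spam
```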
The K-Nearest Neighbors (K-NN) algorithm assumes that new data is similar to existing data and puts a new point into the category it most resembles. K-NN simply stores all the available data and classifies a new data point based on its similarity to the stored points.
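A minimal sketch with scikit-learn's `KNeighborsClassifier` (assumed installed), using invented one-dimensional data: fitting only stores the training points, and a query is labeled by majority vote among its k nearest stored neighbors.

```python
# Hedged example: synthetic 1-D data split into "low" and "high" groups.
from sklearn.neighbors import KNeighborsClassifier

X = [[1], [2], [3], [10], [11], [12]]
y = ["low", "low", "low", "high", "high", "high"]

knn = KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)  # K-NN just stores the training data

# each query is assigned the majority label of its 3 nearest neighbors
print(knn.predict([[2.5], [11.5]]))
```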
Random Forest is a classifier that builds a number of decision trees on various subsets of the given dataset and combines their predictions (by majority vote for classification, averaging for regression) to improve predictive accuracy.
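A brief sketch with scikit-learn's `RandomForestClassifier` (assumed installed), on made-up data: each tree trains on a bootstrap subset of the dataset, and predictions are combined across trees as described above.

```python
# Hedged example: synthetic 2-D data; n_estimators and random_state are
# illustrative choices, not recommendations.
from sklearn.ensemble import RandomForestClassifier

X = [[0, 0], [1, 0], [0, 1], [10, 10], [11, 10], [10, 11]]
y = [0, 0, 0, 1, 1, 1]

# 50 trees, each fit on a bootstrap sample; classification combines
# the trees' votes
rf = RandomForestClassifier(n_estimators=50, random_state=0)
rf.fit(X, y)

print(rf.predict([[0.5, 0.5], [10.5, 10.5]]))
```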
Decision Tree is a tree-structured classifier in which internal nodes represent features of the dataset, branches represent decision rules, and each leaf node represents an outcome.
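The node/branch/leaf structure can be seen directly with scikit-learn's `DecisionTreeClassifier` and `export_text` (library assumed installed; data is synthetic):

```python
# Hedged example: 1-D toy data, so the learned tree is a single split.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[0], [1], [2], [8], [9], [10]]
y = [0, 0, 0, 1, 1, 1]

tree = DecisionTreeClassifier()
tree.fit(X, y)

# export_text shows the internal-node test, the two branches, and the
# class stored at each leaf
print(export_text(tree))
print(tree.predict([[1.5], [9.5]]))
```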