Illustrated Guide to the 10 Most Common Machine Learning Algorithms

In machine learning there is a theorem known as “no free lunch.” In short, it states that no single algorithm works best for every problem, and this is especially relevant to supervised learning.
For example, you cannot say that neural networks are always better than decision trees, or vice versa. The performance of a model is influenced by many factors, such as the size and structure of the dataset.
Therefore, you should try many different algorithms for your problem, using a held-out test set to evaluate performance and select the winner.
Of course, the algorithms you try must be appropriate for your problem, which is where choosing the right machine learning task comes in. As an analogy, if you need to clean your house, you might use a vacuum cleaner, a broom, or a mop, but you would not pull out a shovel and start digging.
For newcomers eager to understand the basics of machine learning, here is a list of the ten machine learning algorithms most commonly used by data scientists, with a brief introduction to the characteristics of each. Let’s take a look!
01 Linear Regression
Linear regression is perhaps one of the most well-known and easiest to understand algorithms in statistics and machine learning.
Predictive modeling is primarily concerned with minimizing a model’s error, or in other words making the most accurate predictions possible at the expense of interpretability. To that end, we borrow, reuse, and steal algorithms from many different fields, including statistics.
Linear regression is represented by an equation that describes the linear relationship between input variables (x) and output variables (y) by finding specific weights (B) for the input variables.

[Figure: Linear Regression]
For example: y = B0 + B1 * x
Given input x, we will predict y. The goal of the linear regression learning algorithm is to find the values of coefficients B0 and B1.
Different techniques can be used to learn the linear regression model from data, such as linear algebra solutions for ordinary least squares and gradient descent optimization.
Linear regression has been around for over 200 years and has been studied extensively. Some good rules of thumb when using this technique: remove variables that are very similar (correlated), and remove noise from your data if possible. It is a quick, simple technique and a good first algorithm to try.
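As a minimal sketch of what learning the coefficients means, here is ordinary least squares for a single input variable, written with NumPy on made-up toy data:

    import numpy as np

    # Toy data: y is roughly 1 + 2*x plus a little noise (made-up values).
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([3.1, 4.9, 7.2, 9.0, 10.8])

    # Ordinary least squares for y = B0 + B1 * x:
    # B1 = sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
    b1 = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
    b0 = y.mean() - b1 * x.mean()

    print(f"y = {b0:.2f} + {b1:.2f} * x")        # roughly y = 1 + 2 * x
    print("prediction at x = 6:", b0 + b1 * 6)

These closed-form estimates are the “linear algebra solution” mentioned above; gradient descent would instead arrive at B0 and B1 iteratively.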
02 Logistic Regression
Logistic regression is another technique borrowed from statistics in machine learning. It is a specialized method for binary classification problems (problems with two class values).
Logistic regression is similar to linear regression because both aim to find the weight values for each input variable. Unlike linear regression, however, the predicted output values are transformed using a nonlinear function called the logistic function.
The logistic function looks like a big S and can convert any value into a range between 0 and 1. This is useful because we can apply a rule to the output of the logistic function to snap values to 0 and 1 (for example, if the output is less than 0.5, predict class 0; otherwise predict class 1) and predict a class value.

[Figure: Logistic Regression]
Because of the way the model learns, the predictions made by logistic regression can also be used as the probability that a given instance belongs to class 0 or class 1. This can be very useful for problems where you need to give more rationale for a prediction.
Like linear regression, logistic regression works better when you remove attributes that are unrelated to the output variable as well as attributes that are very similar (correlated) to each other. It is a fast model to learn and effective on binary classification problems.
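A minimal sketch of the prediction step, with made-up coefficients standing in for values learned from data (the learning step itself, e.g. gradient descent, is omitted):

    import math

    def logistic(z):
        # The S-shaped logistic (sigmoid) function: maps any value into (0, 1).
        return 1.0 / (1.0 + math.exp(-z))

    # Made-up, "already learned" coefficients for a single input x.
    b0, b1 = -4.0, 1.5

    def predict(x, threshold=0.5):
        p = logistic(b0 + b1 * x)      # estimated probability of class 1
        return (0 if p < threshold else 1), p

    label, prob = predict(3.0)
    print(label, round(prob, 3))       # -> 1 0.622

Note how the class probability is available before the threshold is applied, which is what makes the rationale-giving use described above possible.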
03 Linear Discriminant Analysis
Traditional logistic regression is limited to binary classification problems. If you have more than two classes, then Linear Discriminant Analysis (LDA) is the preferred linear classification technique.
LDA is represented very simply. It consists of statistical properties of your data, calculated for each class. For a single input variable, this includes the mean value for each class and the variance calculated across all classes.

[Figure: Linear Discriminant Analysis]
LDA makes predictions by computing a discriminant value for each class and predicting the class with the largest value. The technique assumes that the data has a Gaussian distribution (bell curve), so it is a good idea to remove outliers from your data beforehand. It is a simple yet powerful method for classification predictive modeling problems.
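For a single input variable the discriminant can be computed directly. Below is a minimal sketch on made-up toy data, using the standard equal-variance Gaussian form of the discriminant:

    import math

    # Toy single-input data for two classes (made-up values).
    x0 = [1.0, 1.5, 2.0, 2.5]   # class 0
    x1 = [4.0, 4.5, 5.0, 5.5]   # class 1

    n0, n1 = len(x0), len(x1)
    mu0, mu1 = sum(x0) / n0, sum(x1) / n1
    # LDA assumes Gaussian data with a variance shared (pooled) across classes.
    var = (sum((v - mu0) ** 2 for v in x0)
           + sum((v - mu1) ** 2 for v in x1)) / (n0 + n1 - 2)

    def discriminant(x, mu, prior):
        # delta(x) = x * mu / var - mu^2 / (2 * var) + ln(prior)
        return x * mu / var - mu ** 2 / (2 * var) + math.log(prior)

    x_new = 3.4
    d0 = discriminant(x_new, mu0, n0 / (n0 + n1))
    d1 = discriminant(x_new, mu1, n1 / (n0 + n1))
    print("predicted class:", 0 if d0 > d1 else 1)   # -> 1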
04 Classification and Regression Trees
Decision trees are an important algorithm in machine learning.
The decision tree model can be represented as a binary tree. Yes, that’s right, the binary tree from algorithms and data structures, nothing special. Each node represents a single input variable (x) and a split point on that variable (assuming the variable is numeric).

[Figure: Decision Tree]
The leaf nodes of the tree contain the output variable (y) used for making predictions. Predictions are made by traversing the tree, stopping when a leaf node is reached, and outputting the class value of that leaf node.
Decision trees are fast to learn and quick to predict. They often predict accurately for many problems, and you don’t need to do any special preparation for the data.
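A minimal sketch of prediction by traversal, using a hand-built toy tree (how the split points are learned from data is omitted here):

    # Each internal node tests one numeric input variable against a split point;
    # each leaf holds the output class value.
    tree = {
        "var": "height", "split": 180.0,
        "left": {"leaf": 0},
        "right": {
            "var": "weight", "split": 80.0,
            "left": {"leaf": 0},
            "right": {"leaf": 1},
        },
    }

    def predict(node, row):
        # Walk down the binary tree until a leaf is reached, then output its value.
        while "leaf" not in node:
            node = node["left"] if row[node["var"]] < node["split"] else node["right"]
        return node["leaf"]

    print(predict(tree, {"height": 185, "weight": 90}))   # -> 1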
05 Naive Bayes
Naive Bayes is a simple yet extremely powerful predictive modeling algorithm.
The model consists of two types of probabilities that can be directly calculated from your training data: 1) the probability of each class; 2) the conditional probability of each x value given the class. Once calculated, the probability model can be used to make predictions on new data using Bayes’ theorem. When your data is numerical, it is usually assumed to be Gaussian distributed (bell curve) so that these probabilities can be easily estimated.

[Figure: Bayes’ Theorem]
Naive Bayes is called “naive” because it assumes that each input variable is independent. This is a strong assumption and is unrealistic for real data, but the technique is still very effective for a wide range of complex problems.
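A minimal Gaussian Naive Bayes sketch on made-up toy data, computing both kinds of probability directly from the training set:

    import math

    # Toy training data: one numeric input per instance, grouped by class.
    data = {0: [1.0, 1.2, 0.9, 1.1], 1: [3.0, 3.2, 2.8, 3.1]}
    n_total = sum(len(xs) for xs in data.values())

    def gaussian_pdf(x, mu, var):
        return math.exp(-((x - mu) ** 2) / (2 * var)) / math.sqrt(2 * math.pi * var)

    def predict(x):
        best_label, best_score = None, 0.0
        for label, xs in data.items():
            mu = sum(xs) / len(xs)
            var = sum((v - mu) ** 2 for v in xs) / (len(xs) - 1)
            prior = len(xs) / n_total               # 1) probability of the class
            likelihood = gaussian_pdf(x, mu, var)   # 2) P(x | class), via Gaussian
            score = prior * likelihood              # Bayes' theorem, unnormalized
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    print(predict(2.9))   # -> 1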
06 K-Nearest Neighbors
The KNN algorithm is very simple and very effective. The KNN model represents the entire training dataset. Isn’t that simple?
To predict a new data point, KNN searches the entire training set for the K most similar instances (the neighbors) and summarizes the output variables of those K instances. For regression problems, the prediction might be the mean of the neighbors’ output values; for classification problems, it might be the mode (most common) class value.
The key to success lies in how to determine the similarity between data instances. If your attributes are all on the same scale, the simplest way is to use Euclidean distance, which can be directly calculated based on the distance between each input variable.

[Figure: K-Nearest Neighbors]
KNN may require a lot of memory or space to store all the data, but calculations (or learning) are only performed when predictions are needed. You can also update and manage your training set at any time to maintain prediction accuracy.
The idea of distance or closeness can break down in very high dimensions (many input variables), which can hurt the algorithm’s performance. This is called the curse of dimensionality. It also suggests that you should only use the input variables most relevant to predicting the output variable.
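A minimal KNN sketch on made-up toy data, using Euclidean distance and the mode of the neighbors’ classes:

    import math
    from collections import Counter

    # Toy training set: (features, class label).
    train = [((1.0, 1.0), 0), ((1.2, 0.8), 0), ((4.0, 4.2), 1), ((4.5, 4.0), 1)]

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def knn_predict(point, k=3):
        # Sort the whole training set by distance and vote among the k nearest.
        neighbors = sorted(train, key=lambda item: euclidean(item[0], point))[:k]
        return Counter(label for _, label in neighbors).most_common(1)[0][0]

    print(knn_predict((3.8, 4.1)))   # -> 1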
07 Learning Vector Quantization
The downside of K-Nearest Neighbors is that you need to keep the entire training dataset around. Learning Vector Quantization (LVQ) is an artificial neural network algorithm that lets you choose how many training instances to hold onto, and then learns exactly what those instances should look like.

[Figure: Learning Vector Quantization]
LVQ is represented by a collection of codebook vectors. They are selected randomly at the start and then adapted over a number of iterations to best summarize the training dataset. After learning, the codebook vectors are used to make predictions just like K-Nearest Neighbors: the most similar neighbor (best matching unit) is found by computing the distance between each codebook vector and the new data instance, and the class value of that best matching unit (or the real value, in the case of regression) is returned as the prediction. Best results are achieved if you rescale your data to the same range (e.g., between 0 and 1).
If you find that KNN gives good results on your dataset, try using LVQ to reduce the memory requirements of storing the entire training dataset.
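A minimal LVQ sketch on made-up toy data; note that with so few codebook vectors and random seeding, results can vary from run to run:

    import math
    import random

    def euclidean(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    # Toy training set: (features, class label).
    train = [((1.0, 1.0), 0), ((1.2, 0.8), 0), ((4.0, 4.2), 1), ((4.5, 4.0), 1)]

    def train_lvq(train, n_codebooks=2, lrate=0.3, epochs=30):
        # Seed the codebook vectors with randomly chosen training instances.
        codebooks = [[list(f), y] for f, y in random.sample(train, n_codebooks)]
        for epoch in range(epochs):
            rate = lrate * (1.0 - epoch / epochs)   # decaying learning rate
            for features, label in train:
                # Best matching unit (BMU): the closest codebook vector.
                bmu = min(codebooks, key=lambda c: euclidean(c[0], features))
                for i, x in enumerate(features):
                    step = rate * (x - bmu[0][i])
                    # Pull the BMU toward same-class instances, push away otherwise.
                    bmu[0][i] += step if bmu[1] == label else -step
        return codebooks

    codebooks = train_lvq(train)
    # Predict exactly like 1-nearest-neighbor, but against the small codebook.
    new_point = (4.2, 3.9)
    print(min(codebooks, key=lambda c: euclidean(c[0], new_point))[1])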
08 Support Vector Machine
Support Vector Machines (SVM) may be one of the most popular and discussed machine learning algorithms.
A hyperplane is a line that separates the input variable space. In SVM, a hyperplane is selected to separate points in the input variable space by their classes (class 0 or class 1). In two-dimensional space, it can be viewed as a line that can completely separate all input points. The SVM learning algorithm aims to find the coefficients that allow the hyperplane to best separate the classes.

[Figure: Support Vector Machine]
The distance between the hyperplane and the nearest data points is called the margin, and the hyperplane with the maximum margin is the best choice. At the same time, only those data points that are close to the hyperplane are relevant to its definition and the construction of the classifier; these points are called support vectors, which support or define the hyperplane. In practice, we use optimization algorithms to find the coefficient values that maximize the margin.
SVM may be one of the most powerful off-the-shelf classifiers, worth trying on your dataset.
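As a rough sketch of the optimization step (real SVM libraries use quadratic-programming solvers; this simple sub-gradient descent on the hinge loss stands in for them), on made-up 2-D data:

    import numpy as np

    # Toy data; SVM labels are conventionally -1 and +1.
    X = np.array([[1.0, 1.0], [1.5, 0.5], [4.0, 4.0], [4.5, 3.5]])
    y = np.array([-1, -1, 1, 1])

    w, b = np.zeros(2), 0.0
    lam, lrate = 0.01, 0.05   # regularization strength and learning rate
    for _ in range(2000):
        for xi, yi in zip(X, y):
            if yi * (np.dot(w, xi) + b) < 1:   # inside the margin: hinge active
                w += lrate * (yi * xi - 2 * lam * w)
                b += lrate * yi
            else:                              # correct and outside the margin
                w -= lrate * (2 * lam * w)

    print("prediction:", int(np.sign(np.dot(w, [4.2, 3.8]) + b)))   # -> 1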
09 Bagging and Random Forest
Random Forest is one of the most popular and powerful machine learning algorithms. It is an ensemble machine learning algorithm known as Bootstrap Aggregation or Bagging.
Bootstrap is a powerful statistical method for estimating a quantity, such as a mean, from a data sample. You draw many samples of your data (with replacement), compute the mean of each sample, and then average all of those means to get a better estimate of the true mean.
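A minimal sketch of the bootstrap idea on a made-up sample:

    import random

    data = [2.3, 3.1, 2.8, 3.5, 2.9, 3.2]   # made-up sample

    # Draw many bootstrap samples (sampling with replacement), take each
    # sample's mean, then average those means.
    means = []
    for _ in range(1000):
        sample = [random.choice(data) for _ in data]
        means.append(sum(sample) / len(sample))

    print(sum(means) / len(means))   # close to the plain sample mean, ~2.97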
In bagging, the same approach is used, but for estimating entire statistical models, most commonly decision trees. Multiple samples of your training data are taken, and a model is built for each sample. When you need to make a prediction for new data, each model makes a prediction, and the predictions are averaged to give a better estimate of the true output value.

[Figure: Random Forest]
Random Forest is a tweak on this approach. Instead of selecting the optimal split point when building each decision tree, it introduces randomness and settles for suboptimal splits.
The models created for each sample of the data are therefore more different from one another than they otherwise would be, yet still accurate in their own ways. Combining their predictions gives a better estimate of the true underlying output value.
If you get good results with a high-variance algorithm (like decision trees), bagging that algorithm will often give you even better results.
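If scikit-learn is available (an assumption; this article does not depend on it), a random forest takes only a few lines. The data here is made up:

    from sklearn.ensemble import RandomForestClassifier

    X = [[1.0, 1.0], [1.2, 0.8], [4.0, 4.2], [4.5, 4.0]]
    y = [0, 0, 1, 1]

    # 100 trees, each fit on a bootstrap sample of the data; every split
    # considers only a random subset of the features.
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X, y)
    print(model.predict([[4.2, 3.9]]))   # -> [1]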
10 Boosting and AdaBoost
Boosting is an ensemble technique that creates a strong classifier from several weak classifiers. It first builds a model from the training data and then creates a second model to try to correct the errors of the first model. Models are continuously added until the training set is perfectly predicted or the maximum number has been reached.
AdaBoost was the first truly successful boosting algorithm developed for binary classification, and it is the best starting point for understanding boosting. The best-known algorithms built on top of AdaBoost today are stochastic gradient boosting machines.

[Figure: AdaBoost]
AdaBoost is often used with short decision trees. After the first tree is created, its performance on each training instance determines how much attention the next tree pays to that instance: hard-to-predict instances are given more weight, and easy-to-predict instances less. Models are created sequentially, each one updating the instance weights that shape the learning of the next tree in the sequence. Once all the trees are built, predictions for new data are made by every tree, with each tree’s vote weighted by how accurate it was on the training data.
Because the algorithm places great emphasis on error correction, it is crucial to have clean data without outliers.
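A matching scikit-learn usage sketch (again assuming scikit-learn is installed; by default AdaBoostClassifier boosts depth-1 decision trees, the short “stumps” described above):

    from sklearn.ensemble import AdaBoostClassifier

    X = [[1.0], [2.0], [3.0], [6.0], [7.0], [8.0]]
    y = [0, 0, 0, 1, 1, 1]

    model = AdaBoostClassifier(n_estimators=50)
    model.fit(X, y)
    print(model.predict([[5.5]]))   # -> [1]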
A typical question beginners ask when facing this variety of machine learning algorithms is, “Which algorithm should I use?” The answer depends on many factors, including the size and nature of your data, the available computation time, and what you want to do with the results.
Even an experienced data scientist cannot know which algorithm will perform best without trying different ones. While there are many other machine learning algorithms, these are the most popular ones. If you are new to machine learning, this is a great starting point.
—THE END—
Editor: Gemini
Source: https://medium.com/