Addressing Overfitting
How to Mitigate Overfitting with Regularization
Addressing the problem of overfitting — Part 2
Today, we’re continuing from Part 1 of the “Addressing the problem of overfitting” article series. Regularization is another useful technique for mitigating overfitting in machine learning models. Here, the emphasis will be on the intuition behind regularization rather than its mathematical formulation, so that you get a clear idea of the effect of applying regularization to machine learning models.
In general, the term “regularization” means limiting or controlling. In the context of machine learning, regularization deals with model complexity: it limits the model’s complexity, or constrains its learning process during the training phase. Generally, we prefer simple and accurate models because complex models are more likely to overfit. By limiting model complexity, regularization tries to keep models as simple as possible while they still make accurate predictions.
There are two ways to apply regularization to machine learning models:
- By adding another term to the loss function that we’re trying to minimize. Now, the objective function consists of two parts: loss function and…
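The first approach can be sketched in a few lines. Since the article is cut off before naming the specific penalty term, the sketch below assumes an L2 (squared-weight) penalty, one common choice; the function names and the toy data are illustrative, not from the article:

```python
import numpy as np

def mse_loss(w, X, y):
    """Plain mean-squared-error loss for a linear model."""
    return np.mean((X @ w - y) ** 2)

def regularized_objective(w, X, y, lam=0.1):
    """Objective = loss + penalty term.

    The added term lam * ||w||^2 (an assumed L2 penalty) grows with the
    size of the weights, so minimizing the objective discourages large
    weights and thereby limits model complexity.
    """
    return mse_loss(w, X, y) + lam * np.sum(w ** 2)

# Toy data: y roughly equals 2*x, with a little noise
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 1))
y = 2 * X[:, 0] + 0.1 * rng.normal(size=20)

w = np.array([2.0])
plain = mse_loss(w, X, y)
regularized = regularized_objective(w, X, y, lam=0.1)
# For nonzero weights the regularized objective exceeds the plain loss
# by exactly lam * sum(w**2) = 0.1 * 4.0 = 0.4 here.
```

The hyperparameter `lam` controls the strength of the penalty: a larger value pushes the weights closer to zero (a simpler model), while `lam = 0` recovers the unregularized loss.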