TDS Archive


Addressing Overfitting

How to Mitigate Overfitting with Regularization

Addressing the problem of overfitting — Part 2

Rukshan Pramoditha
Published in TDS Archive
4 min read · Sep 24, 2021


Today, we’re continuing from Part 1 of the “Addressing the problem of overfitting” article series. Regularization is another useful technique for mitigating overfitting in machine learning models. Here, the emphasis will be on the intuition behind regularization rather than its mathematical formulation, so that you get a clear idea of the effect of applying regularization to machine learning models.

In general, the term “regularization” means limiting or controlling. In the context of machine learning, regularization deals with model complexity: it limits the complexity of the model, or constrains its learning process during the training phase. We generally prefer simple, accurate models because complex models are more likely to overfit. By limiting model complexity, regularization keeps models as simple as possible while they still make accurate predictions.
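To make the “limiting complexity” idea concrete, here is a minimal sketch of L2 (ridge) regularization using only NumPy and its closed-form solution; the data, seed, and alpha values are illustrative assumptions, not from the article:

```python
import numpy as np

# Hypothetical noisy linear data, purely for illustration
rng = np.random.default_rng(42)
X = rng.normal(size=(30, 5))
y = X @ np.array([3.0, -2.0, 0.0, 0.0, 0.0]) + rng.normal(scale=0.5, size=30)

def ridge_weights(X, y, alpha):
    """Closed-form ridge regression: minimizes ||Xw - y||^2 + alpha * ||w||^2."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)

# A larger regularization strength (alpha) shrinks the weights toward zero,
# i.e. it forces a simpler model.
for alpha in (0.01, 1.0, 100.0):
    w = ridge_weights(X, y, alpha)
    print(f"alpha={alpha:>6}: ||w|| = {np.linalg.norm(w):.3f}")
```

Running this shows the weight norm shrinking as alpha grows, which is exactly the “keep the model simple” effect described above.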

There are two ways to apply regularization to machine learning models:

  • By adding another term to the loss function that we’re trying to minimize. The objective function then consists of two parts: the loss function and…
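The bullet above (truncated by the paywall) describes adding a penalty term to the loss. As a hedged sketch of what such an objective looks like, here is a mean-squared-error loss with an L2 penalty; the data and function names are my own illustrative assumptions:

```python
import numpy as np

# Hypothetical data purely for illustration
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
w_true = np.array([1.5, -2.0, 0.5])
y = X @ w_true + rng.normal(scale=0.1, size=20)

def regularized_objective(w, X, y, alpha):
    """Objective = loss term (MSE) + regularization term (L2 penalty)."""
    loss = np.mean((X @ w - y) ** 2)   # how well the model fits the data
    penalty = alpha * np.sum(w ** 2)   # discourages large weights
    return loss + penalty

# With alpha = 0 the objective reduces to the plain loss;
# a larger alpha penalizes complex (large-weight) models more strongly.
print(regularized_objective(w_true, X, y, alpha=0.0))
print(regularized_objective(w_true, X, y, alpha=0.1))
```

Minimizing this combined objective trades off fit against weight size, which is the first of the two regularization approaches the article lists.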

Published in TDS Archive

An archive of data science, data analytics, data engineering, machine learning, and artificial intelligence writing from the former Towards Data Science Medium publication.


Written by Rukshan Pramoditha

3,000,000+ Views | BSc in Stats (University of Colombo, Sri Lanka) | Top 50 Data Science, AI/ML Technical Writer on Medium
