Regularization#

Part 1#

Recap on Bias-Variance Trade-Off#

Bias-Variance Trade-Off#

\[MSE(\hat{y})=Bias^{2}+Variance+Noise\]

→ to minimize the cost we need to find a good balance between the Bias and Variance terms of a model
→ we can influence bias and variance by changing the complexity of our model

Note: Noise is the irreducible error of a model. We cannot influence it.
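
To make the decomposition concrete, here is a minimal simulation sketch (the data generating process, a sine curve plus Gaussian noise, is an assumption): we refit a model of fixed complexity on many freshly drawn training sets and estimate the Bias² and Variance terms at a single point.

import numpy as np

rng = np.random.default_rng(42)
f = np.sin                 # assumed true function of the data generating process
noise_std = 0.3            # noise = irreducible error
x_test = 1.5               # point at which we decompose the error

preds = []
for _ in range(500):       # many independently drawn training sets
    x_train = rng.uniform(0, 5, size=30)
    y_train = f(x_train) + rng.normal(0, noise_std, size=30)
    coefs = np.polyfit(x_train, y_train, deg=2)   # model of fixed complexity
    preds.append(np.polyval(coefs, x_test))

preds = np.array(preds)
bias_sq = (preds.mean() - f(x_test)) ** 2         # (E[y_hat] - f(x))^2
variance = preds.var()                            # Var[y_hat]
print(f"bias^2={bias_sq:.4f}  variance={variance:.4f}  noise={noise_std**2:.4f}")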

Example: Underfitting vs. Overfitting#

  • We have data, but we don’t know the underlying Data Generating Process.

  • So we want to model it.

  • Do you see a pattern/trend?

../_images/c509a2c907aa24cffb14ad7c9646730dd9870586908e4f93a6c7ca9696f099dd.png

Example: Underfitting vs. Overfitting#

  • Which model seems best?

  • Which model seems to underfit the data?

  • Which model might overfit the data?

  • How to evaluate if a model is underfitting/overfitting?

[Figure missing: "3 suggested models" showing the training data together with model 1, model 2, and model 3]

Example: Underfitting vs. Overfitting#

How to evaluate if a model is underfitting/overfitting?

  • we need a cost function

  • we need test data

  • we should do error analysis

../_images/9adc643c04394ab4e11f9e38da33c63bdcc65483230cfcf4c88215afc9c8c252.png
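
As a minimal sketch of this evaluation loop (the synthetic data and the use of scikit-learn are assumptions), we can compare the training and test cost of models of different complexity:

import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(0, 5, size=(60, 1))                       # assumed toy data
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=60)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (1, 3, 15):                                 # too simple, reasonable, too complex
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree={degree:>2}  train MSE={train_mse:.3f}  test MSE={test_mse:.3f}")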


Example: Underfitting vs. Overfitting#

How to figure out if your model is overfitting?

  • error on training data is low

  • error on test data is high

→ model memorizes the noise in the data

../_images/9adc643c04394ab4e11f9e38da33c63bdcc65483230cfcf4c88215afc9c8c252.png


Part 2#

A Visual Approach#

Prevent Overfitting#

If we see overfitting of our model, we could gather more data.

Prevent Overfitting#

If we see overfitting of our model, we could reduce its complexity.

HOW?

Note: Every model type has a way to reduce model complexity.
We have just learnt linear regression, which is why we will concentrate on regularizing those models for now.

../_images/9adc643c04394ab4e11f9e38da33c63bdcc65483230cfcf4c88215afc9c8c252.png

Prevent Overfitting#

If we see overfitting of our model, we could reduce its complexity.

HOW?

  • reduce the number of features

../_images/4501b4338152c6da5964d95dd7e2969a71f40dcb954087cf832a384281b9b6d5.png

Prevent Overfitting#

If we see overfitting of our model, we could reduce its complexity.

HOW?

  • reduce the number of features

  • make the model less sensitive to the training data by reducing the influence of individual features
    → smaller coefficients

../_images/cfff38ada55638e143d69eb88a31fa90b539d5331b90c4af07b378621ef14799.png

Prevent Overfitting with Regularization#

Both can be achieved by regularizing a model (see the sketch below):

  • reduce the number of features

  • make the model less sensitive to the data by reducing the influence of features (smaller coefficients)
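
A small sketch of the second point (synthetic data assumed), comparing coefficient sizes of an unregularized linear model with regularized ones:

import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(1)
X = rng.uniform(0, 5, size=(40, 1))                       # assumed toy data
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=40)
X_poly = PolynomialFeatures(degree=10, include_bias=False).fit_transform(X)
X_poly = StandardScaler().fit_transform(X_poly)           # put features on one scale

for name, model in [("LinearRegression", LinearRegression()),
                    ("Ridge", Ridge(alpha=1.0)),
                    ("Lasso", Lasso(alpha=0.1, max_iter=50_000))]:
    model.fit(X_poly, y)
    print(f"{name:>16}: max |coef| = {np.abs(model.coef_).max():8.2f}, "
          f"coefs set to 0 = {np.sum(model.coef_ == 0)}")

The unregularized fit typically shows very large, mutually cancelling coefficients on the correlated polynomial features, Ridge shrinks them, and Lasso additionally sets some of them exactly to zero.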

Part 3#

Regularization#

Regularization#

Regularization conceptually adds a constraint that prevents coefficients from getting too large, at a small cost in accuracy on the training data, with the aim of obtaining models that generalize better to new data.

Note: Even linear models can benefit from regularization, because they have a tendency to follow outliers in the training data.

Add Constraint#

We can add this constraint directly to our loss function as a penalty term, whose strength is controlled by alpha (sometimes written as lambda)

\[\begin{split}\begin{align} \text{Ridge (L2): } J(b)&=\frac{1}{n}\sum{\big(y-(b_{0}+b_{1}x_{1}+b_{2}x_{2})\big)}^{2} + \alpha\,(b_{1}^2+b_{2}^2) \\ \text{Lasso (L1): } J(b)&=\frac{1}{n}\sum{\big(y-(b_{0}+b_{1}x_{1}+b_{2}x_{2})\big)}^{2} + \alpha\,(|b_{1}|+|b_{2}|) \end{align}\end{split}\]

Alpha is a hyperparameter. Before training the model we need to set it.


What happens if we set alpha to 0?
What happens if we set alpha to a very high value?
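
To see both extremes in action, here is a hedged sketch (synthetic data assumed; a tiny alpha is used instead of exactly 0 to avoid numerical issues with Ridge): a near-zero alpha reproduces plain least squares, while a huge alpha shrinks all coefficients towards zero and the model underfits.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

rng = np.random.default_rng(2)
X = rng.uniform(0, 5, size=(60, 1))                       # assumed toy data
y = np.sin(X[:, 0]) + rng.normal(0, 0.3, size=60)
X_poly = StandardScaler().fit_transform(
    PolynomialFeatures(degree=8, include_bias=False).fit_transform(X))
X_train, X_test, y_train, y_test = train_test_split(X_poly, y, random_state=0)

for alpha in (1e-10, 1.0, 1e6):       # ~0: plain least squares, huge: heavy shrinkage
    model = Ridge(alpha=alpha).fit(X_train, y_train)
    print(f"alpha={alpha:>8}  sum |coef|={np.abs(model.coef_).sum():8.2f}  "
          f"test MSE={mean_squared_error(y_test, model.predict(X_test)):.3f}")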

Sklearn code for regularization#

Check the scikit-learn documentation.

from sklearn.linear_model import Ridge

ridge_mod = Ridge(alpha=1.0)   # adjust the regularization strength alpha
ridge_mod.fit(X, y)            # X, y: training features and target
ridge_mod.predict(X)

Some different alpha values#

We have to test several values for alpha and check which one gives the best results on unseen data (one way to do this is shown in the sketch below).

../_images/1a06f88f453f29ee43ad03622d96f21392c1503461df4898bfef83467e0e6bf8.png
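
Instead of hand-picking alpha, we can let cross-validation choose it. A minimal sketch using scikit-learn's RidgeCV (the synthetic data and the alpha grid are assumptions):

import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(4)
X = rng.normal(size=(100, 5))                             # assumed toy data
y = X @ np.array([1.0, 0.5, 0.0, 0.0, -2.0]) + rng.normal(0, 0.3, size=100)

# try a grid of alphas; 5-fold cross-validation scores each one on held-out folds
ridge_cv = RidgeCV(alphas=np.logspace(-4, 4, 50), cv=5)
ridge_cv.fit(X, y)
print("best alpha:", ridge_cv.alpha_)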

Ridge Regression#

  • Also called L2 Regularization / l2 norm

  • the regularization term forces the parameter estimates to be as small as possible (also known as weight decay)

\[J(b)=\frac{1}{n} \sum{(y-\hat{y}(b))^2}+\alpha\sum{b_{i}^2}\]

Lasso Regression#

Least Absolute Shrinkage and Selection Operator

  • Also called L1 Regularization / l1 norm

  • Tends to eliminate weights entirely, i.e. it automatically performs feature selection (see the sketch below)

\[J(b)=\frac{1}{n} \sum{(y-\hat{y}(b))^2}+\alpha\sum{|b_{i}|}\]
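
A short sketch of this selection effect (the synthetic data, in which only two of ten features actually matter, is an assumption):

import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 10))                            # 10 candidate features
y = 3 * X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.5, size=200)   # only 2 are relevant

lasso = Lasso(alpha=0.1).fit(X, y)
print("coefficients:      ", np.round(lasso.coef_, 2))
print("selected features: ", np.flatnonzero(lasso.coef_ != 0))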

Ridge vs Lasso#

../_images/28474ab67de107a1f495744e275de6e26586cff86abd9271f9c6d5cb8d2e1730.png

Elastic Net - Mixing Lasso and Ridge#

  • Regularization term is weighted average of Ridge and Lasso Regularization term

  • When r = 0 it is equivalent to Ridge Regression; when r = 1 it is equivalent to Lasso Regression

  • Preferable to Lasso when features are highly correlated or to Ridge for high-dimensional data (more features than observations)

\[J(b)=\frac{1}{n}\sum{(y-\hat{y}(b))^2}+\alpha\,r \sum{|b_{i}|}+\alpha\,(1-r) \sum{b_{i}^2}\]
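
A minimal scikit-learn sketch (synthetic data assumed); note that in ElasticNet the mixing weight r is exposed as the l1_ratio parameter:

import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(5)
X = rng.normal(size=(100, 5))                             # assumed toy data
y = X @ np.array([1.0, 0.5, 0.0, 0.0, -2.0]) + rng.normal(0, 0.3, size=100)

# l1_ratio plays the role of r: 1.0 -> pure Lasso penalty, 0.0 -> pure Ridge penalty
enet = ElasticNet(alpha=0.1, l1_ratio=0.5)
enet.fit(X, y)
print("coefficients:", np.round(enet.coef_, 2))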

References#