3 Amazing Benefits of Activation Functions in Neural Networks
Neural Networks and Deep Learning Course: Part 18

In Part 1, I mentioned that the activation function is the non-linear component inside a perceptron. There are two types of functions inside a perceptron: linear and non-linear.
The linear function computes the weighted sum of the inputs, and the activation function is then applied to that weighted sum.
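For concreteness, here is a minimal NumPy sketch of that computation. The input values, weights, bias term, and the choice of sigmoid as the activation are made up purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def perceptron_forward(x, w, b):
    z = np.dot(w, x) + b   # linear part: weighted sum of inputs (plus a bias term)
    return sigmoid(z)      # non-linear part: activation applied to that sum

x = np.array([0.5, -1.2, 3.0])   # example inputs
w = np.array([0.4, 0.7, -0.2])   # example weights
b = 0.1                          # example bias

print(perceptron_forward(x, w, b))
```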
In Part 5, I discussed 11 different types of activation functions with visual representations and their specific uses.
Usually, we don’t need an activation function in the input layer because the input layer only holds the input data; no computation is performed there.
Using an activation function in the hidden layer(s) and the output layer of a neural network is essential: the hidden layers need a non-linear activation, and the output layer’s activation is chosen to match the task (for example, sigmoid for binary classification or a linear/identity output for regression).
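As a quick illustration of where activations typically sit, here is a minimal Keras sketch. The layer sizes, the 10-feature input, the ReLU hidden activations, and the sigmoid output (assuming binary classification) are arbitrary choices for demonstration only.

```python
import tensorflow as tf

# Input layer: only holds the data, so it has no activation function.
# Hidden layers: a non-linear activation (ReLU here) is applied.
# Output layer: activation chosen to match the task
# (sigmoid here, assuming binary classification).
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(10,)),
    tf.keras.layers.Dense(16, activation='relu'),
    tf.keras.layers.Dense(8, activation='relu'),
    tf.keras.layers.Dense(1, activation='sigmoid'),
])

model.summary()
```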
You may already know the main purpose of using activation functions in neural networks. In this article, I will explain 3 amazing benefits of using them.
Benefit 1: To introduce non-linearity to neural networks
This is the main benefit of activation functions in neural networks. The activation function in the hidden layer(s) enables…
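To see why non-linearity matters, here is a minimal NumPy sketch (with made-up shapes and random values, purely for illustration): without an activation function, two stacked layers collapse into a single linear layer, whereas inserting a ReLU between them breaks that collapse.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(5,))          # a single 5-dimensional input (arbitrary)

# Two "hidden layers" with no activation function
W1, b1 = rng.normal(size=(8, 5)), rng.normal(size=(8,))
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=(3,))

two_linear_layers = W2 @ (W1 @ x + b1) + b2

# The same mapping expressed as a single linear layer
W_single = W2 @ W1
b_single = W2 @ b1 + b2
one_linear_layer = W_single @ x + b_single

print(np.allclose(two_linear_layers, one_linear_layer))  # True: extra depth added nothing

# Adding a non-linear activation (ReLU) between the layers breaks this collapse
relu = lambda z: np.maximum(z, 0)
with_activation = W2 @ relu(W1 @ x + b1) + b2
print(np.allclose(with_activation, one_linear_layer))    # False in general
```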