Data Science 365

Bring data into actionable insights.


3 Amazing Benefits of Activation Functions in Neural Networks

Neural Networks and Deep Learning Course: Part 18

Rukshan Pramoditha · Published in Data Science 365 · 3 min read · Jun 14, 2022

Image by dexmac from Pixabay

In Part 1, I mentioned that the activation function is the non-linear component inside a perceptron. There are two types of functions inside a perceptron: linear and non-linear.

The linear function calculates the weighted sum of the inputs, and the activation function is then applied to that weighted sum.
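As a minimal sketch of these two steps, here is a single perceptron in NumPy with a sigmoid activation (the function, weights, and inputs are illustrative, not from the article):

```python
import numpy as np

def sigmoid(z):
    """A common non-linear activation function."""
    return 1.0 / (1.0 + np.exp(-z))

def perceptron(x, w, b, activation):
    z = np.dot(w, x) + b   # linear part: weighted sum of inputs plus bias
    return activation(z)   # non-linear part: activation applied to that sum

x = np.array([1.0, 2.0])      # inputs
w = np.array([0.5, -0.25])    # weights
b = 0.1                       # bias
out = perceptron(x, w, b, sigmoid)  # z = 0.1, so out = sigmoid(0.1) ≈ 0.525
```

Any of the activation functions from Part 5 could be swapped in for `sigmoid` here.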

In Part 5, I discussed 11 different types of activation functions with visual representations and their specific uses.

Usually, we don’t need an activation function in the input layer because the input layer just holds the input data and no calculation is performed there.

Using an activation function inside the hidden layer(s) and the output layer in a neural network is mandatory.

You may already know the main purpose of using activation functions in neural networks. In this article, I will explain three amazing benefits of activation functions in neural networks.

Benefit 1: To introduce non-linearity to neural networks

This is the main benefit of activation functions in neural networks. The activation function in the hidden layer(s) enables…
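To illustrate why this matters, here is a small NumPy sketch (with hypothetical weight matrices) showing that stacking linear layers without an activation collapses into a single linear layer, whereas inserting a non-linearity such as ReLU breaks that collapse:

```python
import numpy as np

x = np.array([1.0, 2.0])
W1 = np.array([[1.0, -1.0],
               [-1.0, 1.0]])   # "hidden layer" weights
W2 = np.array([[1.0, 1.0]])    # "output layer" weights

# Two stacked linear layers with no activation...
deep_linear = W2 @ (W1 @ x)       # -> [0.]
# ...are equivalent to one linear layer with weights W2 @ W1:
single_linear = (W2 @ W1) @ x     # -> [0.], identical

# With a non-linear activation (ReLU) between the layers,
# the network is no longer equivalent to a single linear map:
relu = lambda z: np.maximum(z, 0.0)
deep_nonlinear = W2 @ relu(W1 @ x)  # -> [1.], different
```

Without the activation, no amount of depth adds expressive power; with it, the network can represent non-linear functions of its inputs.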



Written by Rukshan Pramoditha

3,000,000+ Views | BSc in Stats (University of Colombo, Sri Lanka) | Top 50 Data Science, AI/ML Technical Writer on Medium
