# The Intuition behind the Universal Approximation Theorem for Neural Networks

## Can neural networks approximate any non-linear function?

In this article, I will provide an intuitive explanation of the Universal Approximation Theorem for neural networks.

Unlike simple linear models, neural networks can handle complex non-linear problems.

## The Universal Approximation Theorem

The Universal Approximation Theorem states that a neural network with at least one hidden layer, a sufficient number of neurons in that layer, and a non-linear activation function can approximate any continuous function on a bounded domain to an arbitrary level of accuracy.

In other words, a sufficiently wide neural network can fit any continuous function to an arbitrary level of accuracy. That’s why neural networks are called **universal approximators.**
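To build intuition for this, here is a minimal NumPy sketch (my own illustrative construction, not part of the formal theorem) that hand-crafts a one-hidden-layer ReLU network to approximate sin(x). Each hidden neuron contributes a "kink" at one knot, so the network reproduces the piecewise-linear interpolant of the target; adding more neurons shrinks the error, which is the essence of the theorem.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Target: a continuous non-linear function on [0, 2*pi]
f = np.sin
knots = np.linspace(0, 2 * np.pi, 41)  # one hidden neuron per interior knot
h = knots[1] - knots[0]

# Slope of the piecewise-linear interpolant on each interval between knots
slopes = np.diff(f(knots)) / h
# Output-layer weight of each ReLU = the *change* in slope at its knot
weights = np.diff(slopes, prepend=0.0)
bias = f(knots[0])

def network(x):
    # One hidden layer of ReLUs, one linear output neuron
    hidden = relu(x[:, None] - knots[:-1][None, :])
    return bias + hidden @ weights

xs = np.linspace(0, 2 * np.pi, 1000)
err = np.max(np.abs(network(xs) - f(xs)))  # worst-case approximation error
```

With 40 hidden neurons the worst-case error is already below 0.01; doubling the number of knots roughly quarters it, showing how accuracy can be pushed arbitrarily low by widening the layer.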

## Assumptions

The Universal Approximation Theorem assumes that:

- There is enough data to reasonably train the network. Generally, neural networks perform well with large amounts of data.