Perceptron: the main component of neural networks

By Roberto Lopez, Artelnics.

One of the hottest topics in artificial intelligence is neural networks. Neural networks are computational models based on the structure of the brain. They are information processing structures whose most significant property is their ability to learn from data. These techniques have achieved great success in domains ranging from marketing to engineering.

There are many different types of neural networks, of which the multilayer perceptron is the most important. The characteristic neuron model in the multilayer perceptron is the so-called perceptron. In this article we explain the mathematics behind this neuron model.

Perceptron elements

As we have said, the neuron is the main component of a neural network, and the perceptron is the most widely used neuron model. The following figure is a graphical representation of a perceptron.

Neuron model

In the above neuron we can see the following elements:

  • The inputs (x1,...,xn).
  • The bias b and the synaptic weights (w1,...,wn).
  • The combination function, c(·).
  • The activation function, a(·).
  • The output y.

As an example, consider the neuron in the next figure, with three inputs. It transforms the inputs x=(x1, x2, x3) into a single output y.

Neuron example

In the above neuron we can see the following elements:

  • The inputs (x1, x2, x3).
  • The neuron parameters, which are the bias b = -0.5 and the synaptic weights w = (1.0, -0.75, 0.25).
  • The combination function, c(·), which merges the inputs with the bias and the synaptic weights.
  • The activation function, which here is the hyperbolic tangent, tanh(·), and takes that combination value to produce the output of the neuron.
  • The output y.
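
To make these elements concrete, here is a minimal Python sketch of such a neuron. The class name Perceptron and its method names are illustrative, not taken from any particular library.

    import math

    class Perceptron:
        # A neuron with a bias, a vector of synaptic weights,
        # and a hyperbolic tangent activation function.

        def __init__(self, bias, weights):
            self.bias = bias          # b, a real number
            self.weights = weights    # (w1, ..., wn)

        def combination(self, inputs):
            # c = b + w1*x1 + ... + wn*xn
            return self.bias + sum(w * x for w, x in zip(self.weights, inputs))

        def activation(self, combination):
            # Hyperbolic tangent, as in the example above.
            return math.tanh(combination)

        def output(self, inputs):
            # Compose the combination and activation functions.
            return self.activation(self.combination(inputs))

    neuron = Perceptron(bias=-0.5, weights=(1.0, -0.75, 0.25))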

Neuron parameters

The parameters of the neuron consist of a bias and a set of synaptic weights.

  • The bias b is a real number.
  • The synaptic weights w = (w1,...,wn) form a vector whose size equals the number of inputs.
Therefore, the total number of parameters in this neuron model is 1+n, where n is the number of inputs to the neuron.

Consider the perceptron of the example above. That neuron model has a bias and 3 synaptic weights:

  • The bias is b = -0.5.
  • The synaptic weight vector is w=(1.0,-0.75,0.25).
The number of parameters in this neuron is 1+3=4.
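
A quick way to check this count in Python (the variable names are just for illustration):

    b = -0.5                    # the bias
    w = (1.0, -0.75, 0.25)      # the synaptic weights

    n_parameters = 1 + len(w)   # one bias plus n synaptic weights
    print(n_parameters)         # 4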

Combination function

The combination function takes the input vector x and produces a combination value, or net input, c. In the perceptron, the combination is computed as the bias plus the linear combination of the synaptic weights and the inputs:

c = b + ∑ wi·xi,     i = 1,...,n.

Note that the bias increases or reduces the net input to the activation function, depending on whether it is positive or negative, respectively. The bias is sometimes represented as a synaptic weight connected to an input fixed to +1.

Consider the neuron of our example. The combination value of this perceptron for an input vector x = (-0.8,0.2,-0.4) is

c = -0.5 + (1.0·-0.8) + (-0.75·0.2) + (0.25·-0.4)
= -1.55
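
This calculation is easy to reproduce in Python. The sketch below also checks the remark above that the bias can be written as a synaptic weight on an input fixed to +1 (all names are illustrative):

    b = -0.5
    w = (1.0, -0.75, 0.25)
    x = (-0.8, 0.2, -0.4)

    # c = b + w1*x1 + w2*x2 + w3*x3
    c = b + sum(wi * xi for wi, xi in zip(w, x))
    print(c)  # ≈ -1.55

    # Equivalent form: the bias as a weight on a constant +1 input.
    c_augmented = sum(wi * xi for wi, xi in zip((b,) + w, (1.0,) + x))
    print(c_augmented)  # ≈ -1.55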

Activation function

The activation function defines the output of the neuron in terms of its combination value. In practice, we can consider many useful activation functions. Three of the most common are the logistic, the hyperbolic tangent, and the linear functions. Activation functions which are not differentiable, such as the threshold function, are not considered here.

The logistic function has a sigmoid shape. It is a monotonically increasing function which exhibits a good balance between linear and non-linear behavior. It is defined by

a = 1/(1+exp(-c))

The logistic function is represented in the next figure.

Logistic activation function

As we can see, the range of the logistic function is (0,1). This is a good property for classification applications, because the outputs can then be interpreted in terms of probabilities.

The hyperbolic tangent is another sigmoid function widely used in the field of neural networks. It is very similar to the logistic function; the main difference is that the range of the hyperbolic tangent is (-1,1). The hyperbolic tangent is defined by

a = tanh(c)

The hyperbolic tangent is represented in the next figure.

Hyperbolic tangent activation function

The hyperbolic tangent function is widely used in approximation applications.

For the linear activation function we have

a = c

Thus, the output of a neuron with a linear activation function is equal to its combination value. The linear activation function is plotted in the following figure.

Linear activation function

The linear activation function is also widely used in approximation applications.

In our example, the combination value is c = -1.55. As the chosen activation function is the hyperbolic tangent, the activation of this neuron is

a = tanh(-1.55)
= -0.91
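
The three activation functions above can be defined and compared in a few lines of Python (a minimal sketch; the function names are illustrative):

    import math

    def logistic(c):
        # Sigmoid with range (0, 1).
        return 1.0 / (1.0 + math.exp(-c))

    def hyperbolic_tangent(c):
        # Sigmoid with range (-1, 1).
        return math.tanh(c)

    def linear(c):
        # The output equals the combination value.
        return c

    c = -1.55
    print(hyperbolic_tangent(c))  # ≈ -0.91
    print(logistic(c))            # ≈ 0.18
    print(linear(c))              # -1.55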

Output function

The output calculation is the most important function in the perceptron. Given a set of input signals to the neuron, it computes the output signal from them. The output function is the composition of the combination and the activation functions. The next figure is an activity diagram of how information is propagated through the perceptron.

Propagation

Therefore, the final expression of the output from a neuron as a function of the input to it is

y = a(b + w·x)

Consider the perceptron of our example. If we apply an input x = (-0.8,0.2,-0.4), the output y will be the following

y = tanh(-0.5 + (1.0·-0.8) + (-0.75·0.2) + (0.25·-0.4))
= tanh(-1.55)
= -0.91

As we can see, the output function is just the composition of the combination and the activation functions.
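
The whole forward pass of our example neuron therefore fits in a few lines of Python (again a sketch with illustrative names):

    import math

    def perceptron_output(b, w, x):
        # y = a(b + w·x), with a(·) = tanh(·) in our example
        c = b + sum(wi * xi for wi, xi in zip(w, x))
        return math.tanh(c)

    y = perceptron_output(-0.5, (1.0, -0.75, 0.25), (-0.8, 0.2, -0.4))
    print(y)  # ≈ -0.91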

Conclusions

An artificial neuron is a mathematical model of the behavior of a single neuron in a biological nervous system.

A single neuron can solve some very simple learning tasks, but the power of neural networks comes when many of them are connected in a network architecture. The architecture of an artificial neural network refers to the number of neurons and the connections between them. The following figure shows a feed-forward network architecture of neurons.

Deep Neural Network

In this post we have seen how the perceptron works, but there are other neuron models with different characteristics, used for different purposes. Some of them are the scaling neuron, the principal components neuron, the unscaling neuron, and the probabilistic neuron. In the picture above, scaling neurons are depicted in yellow and unscaling neurons in red.