
Iris flowers classification

By Roberto Lopez, Artelnics.

This is perhaps the best known data set to be found in the classification literature. The aim is to classify iris flowers among three species (setosa, versicolor or virginica) from measurements of length and width of sepals and petals.

The iris data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. The central goal here is to design a model which makes good classifications for new data, in other words, one which exhibits good generalization. The next figure is a picture of an iris flower of the versicolor species.

Iris versicolor.

Contents:

  1. Data set
  2. Neural network
  3. Loss index
  4. Training strategy
  5. Testing analysis
  6. Model deployment

1. Data set

The first step is to prepare the data set, which is the source of information for the classification problem.

The file irisflowers.csv contains the data for this example in comma-separated values (CSV) format. A sample of the contents of that file is listed below.

Iris flowers dataset.

Once the data file is ready, we import it into Neural Designer using the "Import data file" wizard.

The next figure shows the data set page in Neural Designer.

Data set page.

It contains four sections:

  1. Data file.
  2. Variables.
  3. Instances.
  4. Missing values.

Neural Designer shows a preview of the data file and reports that it has 5 columns and 150 rows.

The variables are:

  1. sepal_length: Sepal length, in centimeters, used as input.
  2. sepal_width: Sepal width, in centimeters, used as input.
  3. petal_length: Petal length, in centimeters, used as input.
  4. petal_width: Petal width, in centimeters, used as input.
  5. setosa: Iris setosa, true or false, used as target.
  6. versicolour: Iris versicolour, true or false, used as target.
  7. virginica: Iris virginica, true or false, used as target.

Here the nominal variable Class has been encoded as follows:
  • Setosa: 1 0 0.
  • Versicolor: 0 1 0.
  • Virginica: 0 0 1.
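
In code, this one-of-N encoding simply maps each class label to a binary vector. A minimal sketch, assuming the label strings used in the UCI data file:

    # One-of-N (one-hot) encoding of the nominal variable Class.
    one_hot = {
        "Iris-setosa":     [1, 0, 0],
        "Iris-versicolor": [0, 1, 0],
        "Iris-virginica":  [0, 0, 1],
    }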

The instances are divided into training, selection and testing subsets. They represent 60% (90), 20% (30) and 20% (30) of the original instances, respectively, and have been split at random.
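
The same random split can be sketched with numpy; the sizes 90, 30 and 30 follow from the 60%/20%/20% ratios over the 150 instances.

    import numpy as np

    rng = np.random.default_rng(0)

    # Shuffle the 150 instance indices and cut them into 90, 30 and 30.
    indices = rng.permutation(150)
    training_indices = indices[:90]
    selection_indices = indices[90:120]
    testing_indices = indices[120:]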

The "Report data set" task transfers to Neural Viewer the information contained in the Data set page of Neural Editor.

Report data set task.

The "Calculate data statistics" task draws a table with the minimums, maximums, means and standard deviations of all variables in the data set. The next figure shows the data statistics.

Data statistics.
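
These statistics can also be computed with pandas, assuming the data file is loaded into a data frame with the column names listed above:

    import pandas as pd

    data = pd.read_csv("irisflowers.csv")

    # Minimum, maximum, mean and standard deviation of every variable.
    print(data.describe().loc[["min", "max", "mean", "std"]])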

Finally, the "Calculate data histograms" task draws a histogram for each variable to see how they are distributed. The user must specify here the number of bins for all the histograms. The next figure is the histogram for the first attribute.

Histogram for the variable Sepal length.
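
A similar histogram can be drawn with matplotlib, reusing the data frame from the previous sketch; the choice of 10 bins is only an example, since the number of bins is whatever the user specifies.

    import matplotlib.pyplot as plt

    # Histogram of the first attribute with a user-chosen number of bins.
    plt.hist(data["sepal_length"], bins=10)
    plt.xlabel("sepal_length (cm)")
    plt.ylabel("frequency")
    plt.show()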

2. Neural network

The second step is to choose a network architecture to represent the classification function. For classification problems, it is composed of:

  • Inputs.
  • Scaling layer.
  • Neural network.
  • Probabilistic layer.
  • Outputs.

Note that on the neural network page, all settings for this example are left at their default values.

The inputs section contains information about the input variables in the neural network.

  1. Sepal length, in centimeters.
  2. Sepal width, in centimeters.
  3. Petal length, in centimeters.
  4. Petal width, in centimeters.

The scaling layer section contains information about the method for scaling the input variables and the statistics used by that method. In this example, we will use the minimum and maximum method for scaling the inputs. The mean and standard deviation method would also be appropriate here.
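
The minimum and maximum method maps each input to the range [-1, 1], which is also how the scaling appears in the exported expression at the end of this example. A minimal sketch:

    def scale_minimum_maximum(x, minimum, maximum):
        # Map x from [minimum, maximum] to [-1, 1].
        return 2 * (x - minimum) / (maximum - minimum) - 1

    # Example: sepal length ranges from 4.3 to 7.9 centimeters in this data set;
    # 5.1 is an arbitrary input value used only for illustration.
    scaled_sepal_length = scale_minimum_maximum(5.1, 4.3, 7.9)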

Since the number of input variables is only 4, we won't apply principal component analysis in this application.

The neural network must have four inputs, since there are four input variables; and three output neurons, since there are three target variables. We use one hidden layer of size five. This neural network can be denoted as 4:5:3. All the activation functions have been set to logistic. All the parameters are initialized at random with a normal distribution of mean 0 and standard deviation 1.

The probabilistic layer allows the outputs to be interpreted as probabilities, i.e., all outputs are between 0 and 1 and their sum is 1. The probabilistic method to be used is the softmax.

The outputs from this neural network are:

  1. iris_setosa, probability.
  2. iris_versicolour, probability.
  3. iris_virginica, probability.

The next figure is a graphical representation of the neural network for the iris flowers classification example, taken from the "Report neural network" task.

Neural network graph.

This neural network defines a function of the form

    [setosa, versicolor, virginica] = function(sepal_length, sepal_width, petal_length, petal_width)

The function above is parameterized by all the biases and synaptic weights in the neural network.
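
The following numpy sketch shows the shape of that function for the 4:5:3 architecture described above, with logistic activations, a softmax probabilistic layer, and parameters drawn at random from a normal distribution with mean 0 and standard deviation 1. The trained parameter values are not reproduced here; the sketch only illustrates the structure of the function.

    import numpy as np

    rng = np.random.default_rng(0)

    # Parameters of a 4:5:3 network, initialized at random from N(0, 1).
    weights_1 = rng.normal(0, 1, (4, 5))   # inputs -> hidden layer
    biases_1 = rng.normal(0, 1, 5)
    weights_2 = rng.normal(0, 1, (5, 3))   # hidden layer -> outputs
    biases_2 = rng.normal(0, 1, 3)

    def logistic(x):
        return 1 / (1 + np.exp(-x))

    def softmax(x):
        e = np.exp(x - np.max(x))
        return e / e.sum()

    def classify(scaled_inputs):
        # scaled_inputs: the four scaled measurements of one flower.
        hidden = logistic(scaled_inputs @ weights_1 + biases_1)
        outputs = logistic(hidden @ weights_2 + biases_2)
        return softmax(outputs)            # [setosa, versicolor, virginica] probabilities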

3. Loss index

The third step is to set the loss index, which is composed of:

  • Error term.
  • Regularization term.

The error term chosen for this application is the normalized squared error.

On the other hand, the regularization term is the neural parameters norm. The weight for this term is 0.001. Regularization has two effects here: (i) it keeps the model stable, without oscillations, and (ii) it avoids saturation of the logistic activation functions.

The learning problem can then be stated as finding a neural network which minimizes the loss index, i.e., a neural network that fits the data set (error term) and that does not oscillate (regularization term).
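
As a rough sketch, and assuming the usual definition of the normalized squared error (the sum of squared errors divided by a normalization coefficient computed from the targets), the loss index can be written as follows. The logistic and softmax helpers come from the previous sketch, and the biases and synaptic weights are handled as one flat vector so that the training algorithm in the next section can optimize them.

    def unpack(parameters):
        # Split a flat vector of 43 parameters into the 4:5:3 weights and biases.
        w1 = parameters[:20].reshape(4, 5)
        b1 = parameters[20:25]
        w2 = parameters[25:40].reshape(5, 3)
        b2 = parameters[40:43]
        return w1, b1, w2, b2

    def loss_index(parameters, inputs, targets, regularization_weight=0.001):
        w1, b1, w2, b2 = unpack(parameters)
        hidden = logistic(inputs @ w1 + b1)
        outputs = np.apply_along_axis(softmax, 1, logistic(hidden @ w2 + b2))

        # Error term: normalized squared error.
        error = np.sum((outputs - targets) ** 2) / np.sum((targets - targets.mean(axis=0)) ** 2)

        # Regularization term: norm of the neural parameters, weighted by 0.001.
        return error + regularization_weight * np.linalg.norm(parameters)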

4. Training strategy

The next step in solving this problem is to assign the training strategy.

The next figure shows the training strategy page in Neural Designer.

Training strategy page.

The quasi-Newton method is applied here as the main training algorithm.
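
One way to reproduce this step outside Neural Designer is to hand the loss index sketched in the previous section to a generic quasi-Newton optimizer, for instance the BFGS implementation in scipy. Here training_inputs and training_targets stand for the scaled inputs and one-of-N targets of the 90 training instances; they are not defined in this sketch.

    from scipy.optimize import minimize

    # Random initial parameters, as described in the neural network section.
    initial_parameters = rng.normal(0, 1, 43)

    # Quasi-Newton (BFGS) minimization of the loss index on the training instances.
    result = minimize(loss_index, initial_parameters,
                      args=(training_inputs, training_targets),
                      method="BFGS")

    trained_parameters = result.x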

The following chart shows how the loss decreases with the iterations during the training process. The initial value is 1.21313, and the final value after 94 iterations is 0.0376633.

Loss history.

The following table shows the training results for the problem considered here. The final parameters norm is not very large, the final training loss is small, the final selection loss is also small, and the final gradient norm is almost zero. The table also shows the number of epochs and the training time on a PC.

Training results.

5. Testing analysis

The last step is to test the generalization performance of the trained neural network. Here we compare the values provided by this technique to the actually observed values.

Since the testing analysis does not depend on any parameters, there is no page in Neural Designer for that component.

In the confusion matrix, the rows represent the target classes and the columns the output classes for the testing data. The diagonal cells show the number of cases that were correctly classified, and the off-diagonal cells show the misclassified cases. The next table shows the confusion elements for this application. The number of correctly classified instances is 28, and the number of misclassified instances is 2. In particular, the neural network has classified two flowers as virginica when they are actually versicolor. Also, note that the confusion matrix depends on the particular testing instances that we have.

Confusion matrix.
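
The same table can be reproduced with scikit-learn, which also puts the target classes in the rows and the output classes in the columns. Here testing_targets and testing_outputs stand for the one-of-N targets and the predicted probabilities of the 30 testing instances; they are placeholders for this sketch.

    from sklearn.metrics import confusion_matrix

    # Class indices (0 = setosa, 1 = versicolor, 2 = virginica) of the testing instances.
    true_classes = testing_targets.argmax(axis=1)
    predicted_classes = testing_outputs.argmax(axis=1)

    print(confusion_matrix(true_classes, predicted_classes))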

6. Model deployment

The neural network is now ready to predict outputs for inputs that it has never seen.

The "Calculate outputs" task will classify a given iris flower, from the lenghts and withs of its sepals and petals. The next figure shows the dialog where the user types the input values.

Inputs dialog.

The results from that task are written in the viewer. For this particular case, the neural network classifies that flower as being of the virginica species with 55% probability. The probability of being setosa is 22%, and the probability of being versicolor is 23%.

Inputs-outputs table.

The "Write expression" task exports to the report the mathematical expression of the trained and tested neural network. That expression is listed below.


    scaled_sepal_length=2*(sepal_length-4.3)/(7.9-4.3)-1;
    scaled_sepal_width=2*(sepal_width-2)/(4.4-2)-1;
    scaled_petal_length=2*(petal_length-1)/(6.9-1)-1;
    scaled_petal_width=2*(petal_width-0.1)/(2.5-0.1)-1;
    y_1_1=Logistic(2.67549
    +1.35875*scaled_sepal_length
    -2.97145*scaled_sepal_width
    +3.74352*scaled_petal_length
    +3.42884*scaled_petal_width);
    y_1_2=Logistic(-8.48912
    +0.284544*scaled_sepal_length
    -6.16183*scaled_sepal_width
    +2.86124*scaled_petal_length
    +16.6317*scaled_petal_width);
    non_probabilistic_Iris-setosa=Logistic(3.97567
    -8.31664*y_1_1
    -1.03271*y_1_2);
    non_probabilistic_Iris-versicolor=Logistic(-3.80883
    +10.0347*y_1_1
    -13.9238*y_1_2);
    non_probabilistic_Iris-virginica=Logistic(-6.80953
    +0.664209*y_1_1
    +13.6585*y_1_2);
    (Iris-setosa,Iris-versicolor,Iris-virginica) = Softmax(non_probabilistic_Iris-setosa,non_probabilistic_Iris-versicolor,non_probabilistic_Iris-virginica);

    Logistic(x){
        return 1/(1+exp(-x))
    }

    Softmax(x1, ..., xn){
        return (exp(x1)/(exp(x1)+...+exp(xn)),...,exp(xn)/(exp(x1)+...+exp(xn)))
    }
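
For convenience, the following is a direct Python transcription of that expression, with the coefficients copied verbatim from the listing above. Calling it with the four measurements in centimeters returns the probabilities of setosa, versicolor and virginica, in that order.

    import math

    def classify_iris(sepal_length, sepal_width, petal_length, petal_width):
        logistic = lambda x: 1 / (1 + math.exp(-x))

        # Scale the inputs to [-1, 1] with the minimums and maximums of the data set.
        ssl = 2 * (sepal_length - 4.3) / (7.9 - 4.3) - 1
        ssw = 2 * (sepal_width - 2) / (4.4 - 2) - 1
        spl = 2 * (petal_length - 1) / (6.9 - 1) - 1
        spw = 2 * (petal_width - 0.1) / (2.5 - 0.1) - 1

        # Hidden layer.
        y_1_1 = logistic(2.67549 + 1.35875 * ssl - 2.97145 * ssw + 3.74352 * spl + 3.42884 * spw)
        y_1_2 = logistic(-8.48912 + 0.284544 * ssl - 6.16183 * ssw + 2.86124 * spl + 16.6317 * spw)

        # Non-probabilistic outputs.
        setosa = logistic(3.97567 - 8.31664 * y_1_1 - 1.03271 * y_1_2)
        versicolor = logistic(-3.80883 + 10.0347 * y_1_1 - 13.9238 * y_1_2)
        virginica = logistic(-6.80953 + 0.664209 * y_1_1 + 13.6585 * y_1_2)

        # Softmax probabilistic layer.
        total = math.exp(setosa) + math.exp(versicolor) + math.exp(virginica)
        return (math.exp(setosa) / total,
                math.exp(versicolor) / total,
                math.exp(virginica) / total)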

The data for this problem has been taken from the UCI Machine Learning Repository.