Tree wilt detection

Through image analysis with neural networks, we can detect diseased trees and anticipate their decline. Analyzing high-resolution photos helps us decide which trees to remove and replace with new ones.

This example uses remote sensing data for detecting diseased trees.

The data set consists of image segments, generated by segmenting the pan-sharpened image. The segments contain spectral information from the Quickbird multispectral image bands and texture information from the panchromatic (Pan) image band.

Contents:

  1. Application type.
  2. Data set.
  3. Neural network.
  4. Training strategy.
  5. Model selection.
  6. Testing analysis.
  7. Model deployment.

1. Application type

This is a classification project, since the variable to be predicted is binary (diseased region or not).

The goal here is to model the probability that a region of trees presents wilt, conditioned on the image features.

2. Data set

The data set comprises a data matrix in which columns represent variables and rows represent instances.

The data file tree_wilt.csv contains the information for creating the model. Here, the number of variables is 6, and the number of instances is 574.

The total number of variables is 6: the inputs glcm, green, red, nir and pan_band, and the target wilt.

The total number of instances is 574. They are divided into training, selection and testing subsets. The number of training instances is 346 (60%), the number of selection instances is 114 (20%) and the number of testing instances is 114 (20%).
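To make the workflow concrete, the following sketch loads the data file and splits it 60/20/20 into the three subsets with a random permutation. The column names are assumed to match the variables listed above, and the tool's actual splitting procedure may differ.

import numpy as np
import pandas as pd

# Load the data set and split it into training, selection and testing
# subsets (roughly 60% / 20% / 20%), as described above.
data = pd.read_csv("tree_wilt.csv")

rng = np.random.default_rng(seed=0)
indices = rng.permutation(len(data))

n_training = int(0.6 * len(data))
n_selection = int(0.2 * len(data))

training = data.iloc[indices[:n_training]]
selection = data.iloc[indices[n_training:n_training + n_selection]]
testing = data.iloc[indices[n_training + n_selection:]]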

A statistical analysis is always advisable in order to detect possible issues in the data set. A common task before configuring the model is to check the data distributions. The chart below shows the distribution of the green variable across the sample.

As we can see, there are clear outliers in our data. First, we must remove these instances. The following chart displays the distribution of the green variable after cleaning the data of outliers.

As expected, the distribution of the green variable is now displayed correctly. A well-behaved distribution of the data is always desired, and this chart shows an approximately normal distribution of the instances.
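The exact cleaning rule applied to produce that chart is not detailed here. As a rough sketch, the code below discards green values outside Tukey's fences (1.5 times the interquartile range beyond the quartiles); the green column name is taken from the variable list above.

import pandas as pd

# Sketch of outlier removal on the green variable using Tukey's fences.
data = pd.read_csv("tree_wilt.csv")

q1, q3 = data["green"].quantile([0.25, 0.75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

data = data[(data["green"] >= lower) & (data["green"] <= upper)]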

3. Neural network

Now we have to configure the neural network that represents the classification function.

The number of inputs is 5, and the number of outputs is 1. Therefore, our neural network will be composed of 5 scaling neurons and one probabilistic neuron. As a first guess, we will assume 3 hidden neurons in the perceptron layer.

The binary probabilistic method is used in this case, since we have a binary classification model. Nevertheless, the continuous probabilistic method would also be a valid choice.

The following picture shows a graph of the neural network for this example.

Neural network graph
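The sketch below illustrates the forward pass of this architecture in plain Python: a scaling layer for the 5 inputs, a perceptron layer of 3 logistic neurons and one logistic (probabilistic) output. The parameter arrays are placeholders for the values found during training.

import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(inputs, means, stds, weights_1, biases_1, weights_2, bias_2):
    scaled = (inputs - means) / stds                    # scaling layer (5 neurons)
    hidden = logistic(scaled @ weights_1 + biases_1)    # perceptron layer (3 neurons)
    return logistic(hidden @ weights_2 + bias_2)        # probabilistic layer (1 neuron)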

4. Training strategy

The loss index defines the task that the neural network is required to accomplish. The normalized squared error with strong L2 regularization is used here.

The learning problem can be stated as finding a neural network which minimizes the loss index, that is, a neural network that fits the data set (error term) without undesired oscillations (regularization term).
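A minimal sketch of this loss index is shown below; it assumes the usual normalization coefficient for the normalized squared error (the squared deviation of the targets from their mean) and an illustrative regularization weight.

import numpy as np

# Loss index sketch: normalized squared error plus an L2 penalty on the
# network parameters. The regularization weight here is an assumed value.
def loss_index(outputs, targets, parameters, regularization_weight=0.01):
    squared_error = np.sum((outputs - targets) ** 2)
    normalization = np.sum((targets - np.mean(targets)) ** 2)
    return squared_error / normalization + regularization_weight * np.sum(parameters ** 2)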

The procedure used to carry out the learning process is called the optimization algorithm. The optimization algorithm is applied to the neural network to obtain the minimum possible loss. The type of training is determined by the way in which the adjustment of the parameters in the neural network takes place.

The quasi-Newton method is used here as optimization algorithm in the training strategy.
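The settings of the tool's quasi-Newton implementation are not given here. As an illustration, the sketch below minimizes a loss function with SciPy's BFGS method, which is a quasi-Newton algorithm; the quadratic loss is only a stand-in for the network's loss index.

import numpy as np
from scipy.optimize import minimize

# Quasi-Newton training sketch: minimize a loss with respect to the
# parameter vector using BFGS.
result = minimize(lambda parameters: np.sum(parameters ** 2),
                  x0=np.ones(4),
                  method="BFGS")
optimal_parameters = result.x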

The following chart shows how the training and selection errors decrease with the optimization algorithm epochs during the training process.

The final values are training error = 0.206 NSE and selection error = 0.288 NSE.

5. Model selection

The objective of model selection is to find the network architecture with the best generalization properties, that is, the one that minimizes the error on the selection instances of the data set.

More specifically, we want to find a neural network with a selection error less than 0.288, which is the value that we have achieved so far.

Order selection algorithms train several network architectures with different numbers of neurons and select the one with the smallest selection error.

The incremental order method starts with a small number of neurons and increases the complexity at each iteration. The following chart shows the training error (blue) and the selection error (orange) as a function of the number of neurons.

After model selection, an optimum selection error of 0.263 NSE has been found for 2 hidden neurons. The final network architecture is displayed below.
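The incremental order procedure can be sketched as follows; train_and_evaluate is a hypothetical helper that trains a network with the given number of hidden neurons and returns its training and selection errors.

# Incremental order selection sketch: grow the hidden layer and keep the
# architecture with the smallest selection error.
def incremental_order(train_and_evaluate, max_neurons=10):
    best_neurons, best_selection_error = None, float("inf")
    for neurons in range(1, max_neurons + 1):
        training_error, selection_error = train_and_evaluate(neurons)
        if selection_error < best_selection_error:
            best_neurons, best_selection_error = neurons, selection_error
    return best_neurons, best_selection_error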

6. Testing analysis

The last step is to test the generalization performance of the trained neural network.

In the confusion matrix, the rows represent the target classes and the columns the output classes for the testing data set. The diagonal cells show the number of correctly classified cases, and the off-diagonal cells show the misclassified cases. The following table contains the elements of the confusion matrix for this application.

                   Predicted positive   Predicted negative
Real positive      42 (37.8%)           5 (4.5%)
Real negative      7 (6.3%)             57 (51.4%)

The binary classification tests for this application follow directly from the confusion matrix.
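As a minimal sketch, the code below computes the usual tests (classification accuracy, error rate, sensitivity and specificity) from the confusion matrix elements shown above.

# Binary classification tests computed from the confusion matrix above.
true_positives, false_negatives = 42, 5
false_positives, true_negatives = 7, 57

total = true_positives + false_negatives + false_positives + true_negatives
accuracy = (true_positives + true_negatives) / total
error_rate = 1.0 - accuracy
sensitivity = true_positives / (true_positives + false_negatives)
specificity = true_negatives / (true_negatives + false_positives)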

Therefore, we can state that these results indicate a good performance of the predictive model.

7. Model deployment

The neural network is now ready to predict outputs for inputs that it has never seen.

Below, a specific prediction is shown for given values of the input variables of the model.

The model predicts that the previous values correspond to a region of diseased trees.

The mathematical expression represented by the neural network, which can be exported to any specific software, is written below.

scaled_glcm = (glcm-127.369)/10.3021;
scaled_green = (green-204.672)/22.6547;
scaled_red = (red-105.426)/23.3449;
scaled_nir = (nir-447.619)/143.758;
scaled_pan_band = (pan_band-20.5116)/6.32208;
y_1_1 = logistic(2.48167 + (scaled_glcm*-1.12785) + (scaled_green*-3.77149) + (scaled_red*1.21979) + (scaled_nir*1.95294) + (scaled_pan_band*-0.324738));
y_1_2 = logistic(-0.649461 + (scaled_glcm*-0.5216) + (scaled_green*2.05632) + (scaled_red*-4.68758) + (scaled_nir*0.950784) + (scaled_pan_band*-0.377092));
non_probabilistic_wilt = logistic(-1.9049 + (y_1_1*5.65551) + (y_1_2*-6.72356));
wilt = binary(non_probabilistic_wilt);

logistic(x){
   return 1/(1+exp(-x))
}

binary(x){
   if x < decision_threshold
       return 0
   else
       return 1
}
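For convenience, this expression can also be transcribed into a small Python function, as sketched below; the decision threshold is assumed to be 0.5.

from math import exp

def logistic(x):
    return 1.0 / (1.0 + exp(-x))

# Direct transcription of the exported expression above. A decision
# threshold of 0.5 is assumed.
def predict_wilt(glcm, green, red, nir, pan_band, decision_threshold=0.5):
    scaled_glcm = (glcm - 127.369) / 10.3021
    scaled_green = (green - 204.672) / 22.6547
    scaled_red = (red - 105.426) / 23.3449
    scaled_nir = (nir - 447.619) / 143.758
    scaled_pan_band = (pan_band - 20.5116) / 6.32208
    y_1_1 = logistic(2.48167 - 1.12785 * scaled_glcm - 3.77149 * scaled_green
                     + 1.21979 * scaled_red + 1.95294 * scaled_nir
                     - 0.324738 * scaled_pan_band)
    y_1_2 = logistic(-0.649461 - 0.5216 * scaled_glcm + 2.05632 * scaled_green
                     - 4.68758 * scaled_red + 0.950784 * scaled_nir
                     - 0.377092 * scaled_pan_band)
    probability = logistic(-1.9049 + 5.65551 * y_1_1 - 6.72356 * y_1_2)
    return 1 if probability >= decision_threshold else 0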

        
