
Urinary inflammation diagnosis

By Roberto Lopez, Artelnics.

In this tutorial, a classification application in medicine is solved by means of a neural network. In particular, the goal is to diagnose acute inflammation of the urinary bladder. The data for this problem has been taken from the UCI Machine Learning Repository.

Inflammation of urinary bladder image
Inflammation of urinary bladder.

Contents:

  1. Data set
  2. Neural network
  3. Loss index
  4. Training strategy
  5. Testing analysis
  6. Model deployment

1. Data set

The goal of this study is to obtain a model that can diagnose acute inflammation of the urinary bladder. The same data set could also be used to diagnose acute nephritis.

The data was created by a medical expert to test an expert system that performs the presumptive diagnosis of two diseases of the urinary system. The basis for rule detection was Rough Sets Theory. Each instance represents a potential patient.

As the objective is to obtain a model that diagnoses the first of these diseases, the acute nephritis diagnosis variable is set as unused.

Variables table
Variables table.

The next figure shows the data set page in Neural Designer. It contains four sections:

  1. Data file.
  2. Variables information.
  3. Instances information.
  4. Missing values information.

Data set page screenshot
Data set page.

Neural Designer shows a preview of the data file and reports that the number of columns is 8 and the number of rows is 120.

The instances are divided into training, selection and testing subsets. They represent 60% (72), 20% (24) and 20% (24) of the original instances, respectively, and have been split at random.
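A minimal sketch of such a split is given below, assuming plain Python; the random seed and the resulting indices are arbitrary and not the ones actually drawn by Neural Designer.

    import random

    # Reproduce a 60%/20%/20% random split of the 120 instances (seed is arbitrary).
    random.seed(0)
    indices = list(range(120))
    random.shuffle(indices)

    training_indices = indices[:72]     # 60% of 120
    selection_indices = indices[72:96]  # 20% of 120
    testing_indices = indices[96:]      # 20% of 120

    print(len(training_indices), len(selection_indices), len(testing_indices))  # 72 24 24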

2. Neural network

The second step is to choose a network architecture to represent the classification function. For classification problems, it is composed of:

  • Inputs.
  • Scaling layer.
  • Perceptron layers.
  • Probabilistic layer.
  • Outputs.

The next figure shows the neural network page in Neural Designer.

Neural network page screenshot
Neural network page.

The scaling layer section contains information about the method for scaling the input variables and the statistics to be used by that method. In this example, we use the minimum and maximum method for scaling the inputs; the mean and standard deviation method would also be appropriate here.
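As a sketch of what the scaling layer computes, the minimum and maximum method maps each input linearly to the range [-1, 1]; the temperature range used below (35.5 to 41.5) is the one that appears in the exported expression at the end of this tutorial.

    def scale_minimum_maximum(value, minimum, maximum):
        # Map value linearly from [minimum, maximum] to [-1, 1].
        return 2.0 * (value - minimum) / (maximum - minimum) - 1.0

    # Example: a patient temperature of 38.5 degrees scaled with the 35.5-41.5 range.
    scaled_temperature = scale_minimum_maximum(38.5, 35.5, 41.5)  # 0.0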

In this case, the neural network structure has 6 inputs, 6 hidden perceptrons and 1 output. This neural network can be denoted as 6:6:1. The next image represents it, and a small code sketch of the same structure follows the graph.

Neural network graph
Neural network graph.
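The following sketch reproduces that 6:6:1 structure in Python with hypothetical random parameters, only to show the shapes involved; the trained values are the ones listed in the model deployment section.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical parameters of a 6:6:1 network (random values, for illustration only).
    hidden_weights = rng.standard_normal((6, 6))
    hidden_biases = rng.standard_normal(6)
    output_weights = rng.standard_normal(6)
    output_bias = rng.standard_normal()

    def logistic(x):
        return 1.0 / (1.0 + np.exp(-x))

    def forward(scaled_inputs):
        # Hidden layer of 6 logistic perceptrons followed by a single logistic output.
        hidden = logistic(hidden_weights @ scaled_inputs + hidden_biases)
        return logistic(output_weights @ hidden + output_bias)

    print(forward(np.zeros(6)))  # probability-like output for a dummy scaled input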

3. Loss index

The third step is to set the loss index, which is composed of:

  • Error term.
  • Regularization term.

The error term chosen for this application is the normalized squared error.

On the other hand, the regularization term is the neural parameters norm, with a weight of 0.001. Regularization has two effects here:

  • it makes the model stable, without oscillations, and
  • it avoids saturation of the logistic activation functions.

The learning problem can then be stated as finding a neural network that minimizes the loss index, i.e., a neural network that fits the data set (error term) and does not oscillate (regularization term).
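A rough sketch of what the loss index computes is given below; the normalization coefficient of the normalized squared error and the exact form of the parameters norm are assumptions for illustration, not the internal definitions used by Neural Designer.

    import numpy as np

    def loss_index(outputs, targets, parameters, regularization_weight=0.001):
        # Error term: sum of squared errors divided by a normalization coefficient
        # (assumed here to be the squared deviation of the targets from their mean).
        sum_squared_error = np.sum((outputs - targets) ** 2)
        normalization = np.sum((targets - targets.mean()) ** 2)

        # Regularization term: weighted norm of the neural parameters vector.
        regularization = regularization_weight * np.linalg.norm(parameters)

        return sum_squared_error / normalization + regularization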

4. Training strategy

The next step in solving this problem is to assign the training strategy.

The next figure shows the training strategy page in Neural Designer.

Training strategy page screenshot
Training strategy page.

The neural network is trained in order to obtain the best possible performance.

The next table shows the training results obtained with the quasi-Newton method. We can see that the final values of the performance and the generalization performance are small and that the gradient norm is almost zero.

Training results
Training results.
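For readers reproducing the training outside Neural Designer, a quasi-Newton optimization can be sketched with SciPy's BFGS routine; the objective below is only a placeholder standing in for the loss index of the 6:6:1 network, not the actual setup used in this study.

    import numpy as np
    from scipy.optimize import minimize

    def loss(parameters):
        # Placeholder objective: in practice this would evaluate the 6:6:1 network
        # on the training instances and return the loss index described above.
        return np.sum(parameters ** 2)

    # 49 parameters: 6*6 + 6 for the hidden layer, 6 + 1 for the output perceptron.
    initial_parameters = np.random.default_rng(0).standard_normal(49)

    result = minimize(loss, initial_parameters, method="BFGS")
    print(result.fun, np.linalg.norm(result.jac))  # final loss and gradient norm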

5. Testing analysis

The last step is to validate the generalization performance of the trained neural network. To validate a classification technique, we need to compare the values predicted by the model with the actually observed values.

The following table contains the elements of the confusion matrix. The element (0,0) contains the true positives, the element (0,1) contains the false positives, the element (1,0) contains the false negatives, and the element (1,1) contains the true negatives for the diagnosis variable. The number of correctly classified instances is 24, and the number of misclassified instances is 0.

Confusion matrix
Confusion matrix.
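For reference, these counts can be reproduced from the predicted and observed classes with a small helper such as the one below; the example values are made up, and the layout follows the description above.

    def confusion_matrix(predicted, observed):
        # (0,0) true positives, (0,1) false positives,
        # (1,0) false negatives, (1,1) true negatives.
        tp = sum(p == 1 and o == 1 for p, o in zip(predicted, observed))
        fp = sum(p == 1 and o == 0 for p, o in zip(predicted, observed))
        fn = sum(p == 0 and o == 1 for p, o in zip(predicted, observed))
        tn = sum(p == 0 and o == 0 for p, o in zip(predicted, observed))
        return [[tp, fp], [fn, tn]]

    # Made-up example: 3 correctly classified instances, 1 misclassified.
    print(confusion_matrix([1, 0, 1, 1], [1, 0, 0, 1]))  # [[2, 1], [0, 1]]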

We can also perform a ROC curve analysis. The ROC curve is computed by plotting 1-specificity on the x-axis and sensitivity on the y-axis for different decision thresholds. The ROC curve of a perfect classifier passes through the upper left corner, i.e., the point (0,1), which corresponds to 100% sensitivity and 100% specificity. Consequently, the closer the ROC curve is to the upper left corner, the better the discrimination capacity. This can also be measured with the area under the curve (AUC): for a perfect classifier, the AUC is 1. The next figure shows the results of this analysis for this case.

ROC curve and AUC
ROC curve and AUC.

The area under the curve is 1. These results illustrate the good performance of the model.
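As a sketch of how these quantities can be obtained, the function below sweeps the decision threshold over the predicted probabilities and collects (1-specificity, sensitivity) points; the probabilities and observed values are made up for illustration, and both classes must be present.

    import numpy as np

    def roc_points(probabilities, observed):
        # Sweep the decision threshold and collect (1 - specificity, sensitivity) pairs.
        points = []
        for threshold in np.linspace(0.0, 1.0, 101):
            predicted = probabilities >= threshold
            tp = np.sum(predicted & (observed == 1))
            fn = np.sum(~predicted & (observed == 1))
            tn = np.sum(~predicted & (observed == 0))
            fp = np.sum(predicted & (observed == 0))
            points.append((1.0 - tn / (tn + fp), tp / (tp + fn)))
        return sorted(points)

    # Made-up testing outputs and targets, for illustration only.
    probabilities = np.array([0.9, 0.8, 0.7, 0.2, 0.1, 0.3])
    observed = np.array([1, 1, 1, 0, 0, 0])

    xs, ys = zip(*roc_points(probabilities, observed))
    print(np.trapz(ys, xs))  # area under the curve; 1.0 for this perfect separation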

6. Model deployment

The neural network is now ready to predict outputs for inputs that it has never seen.

The "Calculate outputs" task will diagnose inflammation of urinary bladder from the new values that we will type in the dialog. The next figure shows the dialog where the user types the input values.

Inputs dialog
Inputs dialog.

Then the prediction is written in the viewer.

Inputs-outputs table
Inputs-outputs table.

The "Write expression" task exports to the report the mathematical expression of the trained and tested neural network. That expression is listed below.


				scaled_Temperature=2*(Temperature-35.5)/(41.5-35.5)-1;
				scaled__Occurrence_of_nausea=2*(_Occurrence_of_nausea-0)/(1-0)-1;
				scaled__Lumbar_pain=2*(_Lumbar_pain-0)/(1-0)-1;
				scaled__Urine_pushing=2*(_Urine_pushing-0)/(1-0)-1;
				scaled__Micturition_pains=2*(_Micturition_pains-0)/(1-0)-1;
				scaled__Burning_of_urethra=2*(_Burning_of_urethra-0)/(1-0)-1;
				y_1_1=Logistic(-0.393074
				+0.436862*scaled_Temperature
				-0.162143*scaled__Occurrence_of_nausea
				+1.82425*scaled__Lumbar_pain
				-2.13948*scaled__Urine_pushing
				-1.48992*scaled__Micturition_pains
				+0.437234*scaled__Burning_of_urethra);
				y_1_2=Logistic(-0.281653
				+0.36665*scaled_Temperature
				-0.244955*scaled__Occurrence_of_nausea
				+1.53509*scaled__Lumbar_pain
				-1.84551*scaled__Urine_pushing
				-1.4913*scaled__Micturition_pains
				+0.262788*scaled__Burning_of_urethra);
				y_1_3=Logistic(-0.161413
				+0.237693*scaled_Temperature
				-0.268932*scaled__Occurrence_of_nausea
				+1.04934*scaled__Lumbar_pain
				-1.30559*scaled__Urine_pushing
				-1.2122*scaled__Micturition_pains
				+0.12438*scaled__Burning_of_urethra);
				y_1_4=Logistic(-0.368302
				+0.407486*scaled_Temperature
				-0.0820764*scaled__Occurrence_of_nausea
				+1.68913*scaled__Lumbar_pain
				-2.05292*scaled__Urine_pushing
				-1.25322*scaled__Micturition_pains
				+0.411336*scaled__Burning_of_urethra);
				y_1_5=Logistic(-0.0206809
				+0.297066*scaled_Temperature
				-0.490444*scaled__Occurrence_of_nausea
				+1.23821*scaled__Lumbar_pain
				-1.59551*scaled__Urine_pushing
				-1.82247*scaled__Micturition_pains
				-0.123923*scaled__Burning_of_urethra);
				y_1_6=Logistic(-0.179451
				+0.204028*scaled_Temperature
				-0.22389*scaled__Occurrence_of_nausea
				+0.944336*scaled__Lumbar_pain
				-1.1985*scaled__Urine_pushing
				-1.08034*scaled__Micturition_pains
				+0.138344*scaled__Burning_of_urethra);
				non_probabilistic_ Inflammation of urinary bladder=Logistic(5.40262
				-2.66378*y_1_1
				-2.29017*y_1_2
				-1.59168*y_1_3
				-2.53106*y_1_4
				-1.71032*y_1_5
				-1.39144*y_1_6);
				( Inflammation of urinary bladder) = Probability(non_probabilistic_ Inflammation of urinary bladder);

				Logistic(x){
					return 1/(1+exp(-x))
				}

				Probability(x){
					if x < 0
						return 0
					else if x > 1
						return 1
					else
						return x
				}
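
If the exported expression is to be evaluated outside Neural Designer, the two helper functions at the end translate directly to Python as shown below; the remaining lines of the expression carry over once the variable names are turned into valid identifiers.

    import math

    def Logistic(x):
        # Logistic (sigmoid) activation used by the hidden and output perceptrons.
        return 1.0 / (1.0 + math.exp(-x))

    def Probability(x):
        # Clamp the non-probabilistic output to the [0, 1] interval.
        return min(max(x, 0.0), 1.0)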