
Breast cancer diagnosis

By Roberto Lopez, Artelnics.

In this tutorial, a medical classification application is solved by means of a neural network. The data for this problem has been taken from the UCI Machine Learning Repository.

The aim of this classification problem is to assess, from digitized images of a fine-needle aspiration biopsy, whether a lump in a breast is malignant (cancerous) or benign (non-cancerous). The following figure illustrates this example.

Breast cancer picture
Breast cancer diagnosis.

The breast cancer database used here was obtained from the University of Wisconsin Hospitals, Madison, from Dr. William H. Wolberg.

Contents:

  1. Data set
  2. Neural network
  3. Loss index
  4. Training strategy
  5. Testing analysis
  6. Model deployment

1. Data set

The first step is to prepare the data file, which is the source of information for the classification problem. The breastcancer.dat file contains the data for this application. The input to Neural Designer is a data set, which can have different formats (CSV, XLS, etc.). Decimal marks must be points, not commas. In the classification project type, target variables can only take two values: 0 (false) or 1 (true). The following figure shows a preview of the data file. The number of instances (rows) in the data set is 683, and the number of variables (columns) is 10.
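Outside Neural Designer, a quick sanity check of the file can be reproduced with a few lines of Python. This is a minimal sketch: the separator and the presence of a header row in breastcancer.dat are assumptions, so adjust them to match the actual file.

    import pandas as pd

    # Load the data file; separator and header row are assumptions.
    data = pd.read_csv("breastcancer.dat", sep=",")

    print(data.shape)    # expected: (683, 10)
    print(data.head(3))  # first instances, as in the data preview table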

Breast cancer dataset screenshot
Breast cancer dataset.

The next figure shows the data set tab in Neural Designer. It contains four sections:

  1. Data file.
  2. Variables information.
  3. Instances information.
  4. Missing values information.

Data set page screenshot
Data set page.

To set a data file, click on the "Import data file" or "Import database" button (depending on the type of data) and select the file through the dialog that appears. If the data has a correct format, the first, second and last instances are shown in the data preview table, and the numbers of variables and instances are displayed at the bottom right.

Then the information about the variables is edited. The number of input variables, or attributes, for each sample is 9. All input variables are numeric and represent measurements from digitized images of a fine-needle aspiration biopsy. The number of target variables is 1, and it represents the absence or presence of cancer in an individual. This is a binary classification application, with one target variable that represents two classes. The following list summarizes the variables information:

  1. clump_thickness: (1-10). Benign cells tend to be grouped in monolayers, while cancerous cells are often grouped in multilayers.
  2. cell_size_uniformity: (1-10). Cancer cells tend to vary in size, so this parameter is valuable in determining whether the cells are cancerous.
  3. cell_shape_uniformity: (1-10). Cancer cells also tend to vary in shape, which makes this parameter equally valuable.
  4. marginal_adhesion: (1-10). Normal cells tend to stick together, while cancer cells tend to lose this ability, so loss of adhesion is a sign of malignancy.
  5. single_epithelial_cell_size: (1-10). Related to the uniformity mentioned above. Epithelial cells that are significantly enlarged may be malignant.
  6. bare_nuclei: (1-10). A term used for nuclei not surrounded by cytoplasm (the rest of the cell). These are typically seen in benign tumours.
  7. bland_chromatin: (1-10). Describes a uniform "texture" of the nucleus seen in benign cells. In cancer cells, the chromatin tends to be coarser.
  8. normal_nucleoli: (1-10). Nucleoli are small structures seen in the nucleus. In normal cells the nucleolus is usually very small, if visible at all. In cancer cells the nucleoli become more prominent, and sometimes there are more of them.
  9. mitoses: (1-10). Cancer is essentially a disease of uncontrolled mitosis.
  10. diagnose: (0 or 1). Benign (non-cancerous) or malignant (cancerous) lump in a breast.

Finally, the use of all instances is set. Note that each instance contains the input and target variables of a different patient. The data set is divided into training, validation and testing subsets: 60% of the instances are assigned for training, 20% for validation and 20% for testing. Note that this data set contains many repeated instances, which will not be used, since they provide redundant information.
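A rough equivalent of this setup in plain Python is sketched below: it removes duplicated instances and splits the remainder 60/20/20. The file name, separator and column handling are assumptions.

    import pandas as pd

    data = pd.read_csv("breastcancer.dat", sep=",")  # separator is an assumption
    data = data.drop_duplicates()                    # repeated instances are redundant

    # Shuffle, then split 60% / 20% / 20%.
    shuffled = data.sample(frac=1.0, random_state=0)
    n = len(shuffled)
    training = shuffled.iloc[: int(0.6 * n)]
    validation = shuffled.iloc[int(0.6 * n): int(0.8 * n)]
    testing = shuffled.iloc[int(0.8 * n):]

    print(len(training), len(validation), len(testing))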

Once the data set page has been edited, we are ready to run a few related tasks, with which we double-check the provided information and make sure that the data has good quality. Some data set tasks also perform minor adjustments to the variables information or the instances information sections.

The "Calculate data statistics" task draws a table with the minimums, maximums, means and standard deviations of all variables in the data set. The next figure depicts that values. All variables range from 1 to 10. On the other hand, note that the mean of all variables is less than 5. Also note that the input variable with the smallest standard deviation is "mitoses".

Data statistics table
Data statistics.
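The same statistics can be recomputed outside the application with pandas. A sketch, assuming the file layout described above and that the target column is named "diagnose":

    import pandas as pd

    data = pd.read_csv("breastcancer.dat", sep=",")  # separator is an assumption
    stats = data.describe().loc[["min", "max", "mean", "std"]]
    print(stats)

    # Input variable with the smallest standard deviation (expected: mitoses);
    # the target column name "diagnose" is an assumption.
    print(stats.loc["std"].drop("diagnose").idxmin())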

The "Calculate data histograms" task draws a histogram for each variable to see how they are distributed. The following figures show the histograms with ten bins for two input variables, clump thickness and mitosis. The clump thickness histogram is well distributed, but the mitosis histogram has many instances in the first bin.

Clump thickness histogram
Clump thickness histogram.

Mitoses histogram
Mitoses histogram.
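A histogram equivalent to the figures above can be drawn with matplotlib. A sketch, where the column name is an assumption:

    import pandas as pd
    import matplotlib.pyplot as plt

    data = pd.read_csv("breastcancer.dat", sep=",")  # separator is an assumption
    data["clump_thickness"].plot.hist(bins=10)       # ten bins, as in the figures
    plt.xlabel("clump_thickness")
    plt.show()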

The next chart shows the number of instances belonging to each class in the data set. The number of instances with negative Diagnose (blue) is 444, and the number of instances with positive Diagnose (purple) is 239.

Target class distribution pie chart
Target class distribution.

2. Neural network

The second step is to choose a neural network architecture to represent the classification function. For this class of applications, the neural network page is composed of:

  • Inputs.
  • Scaling layer.
  • Learning layers.
  • Probabilistic layer.
  • Outputs.

The following figure shows the neural network tab in Neural Designer.

Neural network page screenshot
Neural network page.

In the inputs section, the basic information about these variables is set. By default, the names, units and descriptions are those edited in the data set page for the input variables.

The scaling layer section contains the statistics on the inputs calculated from the data file and the method for scaling the input variables. Here, the minimum and maximum method has been set; the mean and standard deviation method would produce very similar results.
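The minimum and maximum method maps each input from its range to the interval [-1, 1]; it reappears in the exported expression at the end of this tutorial. A one-function sketch:

    # Minimum and maximum scaling: maps [minimum, maximum] linearly to [-1, 1].
    def scale_min_max(x, minimum, maximum):
        return 2.0 * (x - minimum) / (maximum - minimum) - 1.0

    print(scale_min_max(1, 1, 10))   # -1.0
    print(scale_min_max(10, 1, 10))  #  1.0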

A multilayer perceptron with a logistic hidden layer and a logistic output layer is used. Note that, since the logistic function ranges from 0 to 1, the outputs of this multilayer perceptron can be interpreted as probabilities. The neural network must have 9 inputs, since there are nine input variables, and 1 output, since there is one target variable. As an initial guess, we use 6 neurons in the hidden layer. This neural network can be denoted as a 9:6:1 multilayer perceptron.
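For reference, an equivalent 9:6:1 architecture can be sketched with scikit-learn rather than Neural Designer; the solver and regularization settings here are assumptions, chosen to mirror later sections of this tutorial.

    from sklearn.neural_network import MLPClassifier

    # 9 inputs -> 6 logistic hidden neurons -> 1 logistic output.
    model = MLPClassifier(hidden_layer_sizes=(6,),
                          activation="logistic",
                          solver="lbfgs",  # a quasi-Newton method, cf. section 4
                          alpha=0.001,     # L2 regularization weight, cf. section 3
                          max_iter=1000)
    # model.fit(X_train, y_train) would train it on the nine scaled inputs.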

The probabilistic layer only contains the method for interpreting the outputs as probabilities. As the number of outputs is one, the softmax and competitive methods would not work. Indeed, since the sum of all outputs from a probabilistic layer must be 1, those two methods would always yield 1 here, as there is only one output. Therefore, the no probabilistic method must be used in binary classification applications. Moreover, as the activation function of the output layer is the logistic, the output can already be interpreted as a probability of class membership.

Finally, in the outputs section, the basic information about the output variables is set. As for the inputs, the default names, units and descriptions are those edited in the data set page for the target variables.

The next figure is a graphical representation of this neural network for medical diagnosis.

Neural network graph
Neural network graph.

It defines a family V of parameterized functions y(x) of dimension s = 67, which is the number of free parameters: (9 + 1) × 6 = 60 weights and biases in the hidden layer plus (6 + 1) × 1 = 7 in the output layer. Elements of V are of the form

				diagnose = function(clump_thickness, ..., mitoses)
				

3. Loss index

The third step is to set the loss index. A general loss index for classification is composed of two terms:

  1. An error term.
  2. A regularization term.

The following figure shows the loss index tab in Neural Designer.

Loss index page screenshot
Loss index page.

The error term is set to be the normalized squared error. It divides the squared error between the outputs from the neural network and the targets in the data set by a normalization coefficient. If the normalized squared error has a value of unity, then the neural network is predicting the data 'in the mean', while a value of zero means a perfect prediction of the data. This error term does not have any parameters to set.

The neural parameters norm is used as the regularization term. It controls the complexity of the neural network by reducing the value of the parameters. The weight of this regularization term in the loss index is 0.001.

The learning problem can be stated as finding a neural network which minimizes the loss index, i.e., a neural network that fits the data set (objective) and that does not oscillate (regularization).
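A compact sketch of this loss index in Python follows. The normalization coefficient used here (the sum of squared deviations of the targets from their mean) is one common choice and is an assumption; it makes a value of 1 correspond to predicting "in the mean", as described above.

    import numpy as np

    def loss_index(outputs, targets, parameters, regularization_weight=0.001):
        # Normalized squared error: 1 means predicting "in the mean", 0 is perfect.
        normalization = np.sum((targets - np.mean(targets)) ** 2)
        error = np.sum((outputs - targets) ** 2) / normalization
        # Regularization: weighted norm of the neural parameters.
        return error + regularization_weight * np.linalg.norm(parameters)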

4. Training strategy

The fourth step is to choose a training algorithm for solving the reduced function optimization problem. We will use the quasi-Newton method for training.

The following figure shows the training strategy tab in Neural Designer.

Training strategy page screenshot
Training strategy page.

Gradient-based algorithms can easily get stuck in local minima when learning the weights of a multilayer perceptron. This means that we should always repeat the learning process from several different starting positions, as sketched below.
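A sketch of that idea with SciPy's BFGS quasi-Newton implementation; the loss argument stands for the loss index as a function of the flattened parameter vector and is hypothetical here.

    import numpy as np
    from scipy.optimize import minimize

    def train_multistart(loss, n_parameters, n_starts=10, seed=0):
        # Restart the quasi-Newton search from several random points, keep the best.
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_starts):
            x0 = rng.normal(scale=0.1, size=n_parameters)
            result = minimize(loss, x0, method="BFGS")
            if best is None or result.fun < best.fun:
                best = result
        return best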

The following chart shows how the performance (the loss index) decreases with the iterations during the training process. The initial value is 1.88731, and the final value after 102 iterations is 0.0360315.

Performance history plot
Performance history.

The next table shows the training results from the quasi-Newton method. They include some final states of the neural network, the loss index and the training algorithm. The parameters norm is not very large, the final performance and generalization performance are small, and the gradient norm is almost zero.

Training results table
Training results.

5. Testing analysis

The fifth step is to validate the generalization performance of the trained neural network. To validate a classification technique, we need to compare the values provided by it to the actually observed values.

The following table contains the elements of the confusion matrix. The element (0,0) contains the true positives, the element (0,1) contains the false positives, the element (1,0) contains the false negatives, and the element (1,1) contains the true negatives for the variable diagnose. The number of correctly classified instances is 166, and the number of misclassified instances is 4.

Confusion matrix
Confusion matrix.

The classification accuracy, error rate, sensitivity, specificity, positive likelihood and negative likelihood are parameters for testing the performance of a classification problem with two classes. The classification accuracy is the ratio of instances correctly classified. The error rate is the ratio of instances misclassified. The sensitivity, or true positive rate, is the proportion of actual positives which are predicted positive. The specificity, or true negative rate, is the proportion of actual negatives which are predicted negative. The positive likelihood is the likelihood that a predicted positive is an actual positive. The negative likelihood is the likelihood that a predicted negative is an actual negative. These values are computed through the "Calculate binary classification tests" task.

Binary classification tests table
Binary classification tests.
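These tests follow directly from the confusion matrix. A sketch, using the matrix layout and the definitions given above; the example values passed to the function are hypothetical:

    def binary_classification_tests(confusion):
        # Layout as in the text: (0,0)=TP, (0,1)=FP, (1,0)=FN, (1,1)=TN.
        (tp, fp), (fn, tn) = confusion
        total = tp + fp + fn + tn
        return {
            "classification_accuracy": (tp + tn) / total,
            "error_rate": (fp + fn) / total,
            "sensitivity": tp / (tp + fn),          # true positive rate
            "specificity": tn / (tn + fp),          # true negative rate
            "positive_likelihood": tp / (tp + fp),  # as defined in the text
            "negative_likelihood": tn / (tn + fn),  # as defined in the text
        }

    print(binary_classification_tests(((50, 5), (3, 42))))  # hypothetical values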

6. Model deployment

Once the generalization performance of the neural network has been tested, the neural network can be saved for future use in the so-called production mode.

We can diagnose a patient by running the "Calculate outputs" task. For that, we need to edit the input variables through the corresponding dialog.

Inputs dialog
Inputs dialog.

Then the diagnosis is written in the viewer.

Diagnose value table
Diagnose value.

The mathematical expression represented by the neural network is written below. It takes the inputs clump_thickness, cell_size_uniformity, cell_shape_uniformity, marginal_adhesion, single_epithelial_cell_size, bare_nuclei, bland_chromatin, normal_nucleoli and mitoses to produce the output diagnose. For classification problems, the information is propagated in a feed-forward fashion through the scaling layer, the perceptron layers and the probabilistic layer. This expression can be exported anywhere, for instance, to a dedicated diagnosis software tool to be used by doctors. A NumPy transcription of the expression is sketched after the listing.


				// Scaling layer: minimum and maximum method, maps each input from [1,10] to [-1,1].
				scaled_Clump_thickness=2*(Clump_thickness-1)/(10-1)-1;
				scaled_Cell_size_uniformity=2*(Cell_size_uniformity-1)/(10-1)-1;
				scaled_Cell_shape_uniformity=2*(Cell_shape_uniformity-1)/(10-1)-1;
				scaled_Marginal_adhesion=2*(Marginal_adhesion-1)/(10-1)-1;
				scaled_Single_epithelial_cell_size=2*(Single_epithelial_cell_size-1)/(10-1)-1;
				scaled_Bare_nuclei=2*(Bare_nuclei-1)/(10-1)-1;
				scaled_Bland_chromatin=2*(Bland_chromatin-1)/(10-1)-1;
				scaled_Normal_nucleoli=2*(Normal_nucleoli-1)/(10-1)-1;
				scaled_Mitoses=2*(Mitoses-1)/(10-1)-1;
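				// Hidden layer: six logistic perceptrons on the nine scaled inputs.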
				y_1_1=Logistic(0.357211
				+3.37008*scaled_Clump_thickness
				+0.757582*scaled_Cell_size_uniformity
				+0.724625*scaled_Cell_shape_uniformity
				-1.06308*scaled_Marginal_adhesion
				+2.03671*scaled_Single_epithelial_cell_size
				+1.66515*scaled_Bare_nuclei
				+0.642097*scaled_Bland_chromatin
				-2.48428*scaled_Normal_nucleoli
				+2.16099*scaled_Mitoses);
				y_1_2=Logistic(0.285135
				-1.379*scaled_Clump_thickness
				-0.16061*scaled_Cell_size_uniformity
				-0.490253*scaled_Cell_shape_uniformity
				-2.55878*scaled_Marginal_adhesion
				-3.47714*scaled_Single_epithelial_cell_size
				+3.91216*scaled_Bare_nuclei
				+0.121383*scaled_Bland_chromatin
				-3.13551*scaled_Normal_nucleoli
				-0.192346*scaled_Mitoses);
				y_1_3=Logistic(-0.34082
				+0.833877*scaled_Clump_thickness
				+1.20211*scaled_Cell_size_uniformity
				+0.662956*scaled_Cell_shape_uniformity
				+0.444844*scaled_Marginal_adhesion
				+0.211779*scaled_Single_epithelial_cell_size
				+0.805764*scaled_Bare_nuclei
				+0.250091*scaled_Bland_chromatin
				+0.481421*scaled_Normal_nucleoli
				-0.0697733*scaled_Mitoses);
				y_1_4=Logistic(4.07472
				+3.36505*scaled_Clump_thickness
				+0.208682*scaled_Cell_size_uniformity
				-1.4507*scaled_Cell_shape_uniformity
				+3.21887*scaled_Marginal_adhesion
				+0.860408*scaled_Single_epithelial_cell_size
				+1.82887*scaled_Bare_nuclei
				+1.01442*scaled_Bland_chromatin
				+1.02464*scaled_Normal_nucleoli
				-0.465106*scaled_Mitoses);
				y_1_5=Logistic(-0.639444
				+1.07623*scaled_Clump_thickness
				+0.835292*scaled_Cell_size_uniformity
				-1.06735*scaled_Cell_shape_uniformity
				+0.654315*scaled_Marginal_adhesion
				+1.05936*scaled_Single_epithelial_cell_size
				-0.782495*scaled_Bare_nuclei
				+0.85018*scaled_Bland_chromatin
				+5.00135*scaled_Normal_nucleoli
				+1.6783*scaled_Mitoses);
				y_1_6=Logistic(0.510009
				+1.4033*scaled_Clump_thickness
				-2.7472*scaled_Cell_size_uniformity
				+2.22281*scaled_Cell_shape_uniformity
				+1.77617*scaled_Marginal_adhesion
				+6.85609*scaled_Single_epithelial_cell_size
				-6.07854*scaled_Bare_nuclei
				-0.441572*scaled_Bland_chromatin
				+1.94189*scaled_Normal_nucleoli
				-2.38021*scaled_Mitoses);
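				// Output layer: one logistic perceptron on the six hidden outputs.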
				non_probabilistic_Diagnose=Logistic(1.24433
				+5.41564*y_1_1
				-5.63565*y_1_2
				+0.481112*y_1_3
				+8.53126*y_1_4
				+4.91376*y_1_5
				-7.467*y_1_6);
				(Diagnose) = Probability(non_probabilistic_Diagnose);

				Logistic(x){
					return 1/(1+exp(-x))
				}

				Probability(x){
					if x < 0
						return 0
					else if x > 1
						return 1
					else
						return x
				}
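
For deployment outside Neural Designer, the expression can also be transcribed to a few lines of NumPy. This is a sketch: the weights below are copied by hand from the listing above, and any transcription like this should be checked against the exported expression before being used.

    import numpy as np

    # Hidden layer weights (one row per y_1_k) and biases, copied from the listing.
    W1 = np.array([
        [3.37008, 0.757582, 0.724625, -1.06308, 2.03671, 1.66515, 0.642097, -2.48428, 2.16099],
        [-1.379, -0.16061, -0.490253, -2.55878, -3.47714, 3.91216, 0.121383, -3.13551, -0.192346],
        [0.833877, 1.20211, 0.662956, 0.444844, 0.211779, 0.805764, 0.250091, 0.481421, -0.0697733],
        [3.36505, 0.208682, -1.4507, 3.21887, 0.860408, 1.82887, 1.01442, 1.02464, -0.465106],
        [1.07623, 0.835292, -1.06735, 0.654315, 1.05936, -0.782495, 0.85018, 5.00135, 1.6783],
        [1.4033, -2.7472, 2.22281, 1.77617, 6.85609, -6.07854, -0.441572, 1.94189, -2.38021],
    ])
    b1 = np.array([0.357211, 0.285135, -0.34082, 4.07472, -0.639444, 0.510009])
    # Output layer weights and bias.
    w2 = np.array([5.41564, -5.63565, 0.481112, 8.53126, 4.91376, -7.467])
    b2 = 1.24433

    def logistic(x):
        return 1.0 / (1.0 + np.exp(-x))

    def diagnose(inputs):
        # Scaling layer: minimum and maximum method on the [1, 10] ranges.
        scaled = 2.0 * (np.asarray(inputs, dtype=float) - 1.0) / (10.0 - 1.0) - 1.0
        hidden = logistic(W1 @ scaled + b1)
        output = logistic(w2 @ hidden + b2)
        return float(np.clip(output, 0.0, 1.0))  # the Probability function above

    # Hypothetical patient: all nine measurements at their minimum value.
    print(diagnose([1, 1, 1, 1, 1, 1, 1, 1, 1]))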