
Diagnose breast cancer from fine-needle aspirate images

The aim of this example is to assess whether a breast lump is malignant (cancerous) or benign (non-cancerous) from digitized images of a fine-needle aspiration biopsy.

The breast cancer database used here was obtained from the University of Wisconsin Hospitals, Madison, from Dr. William H. Wolberg.

Fine needle aspiration

Contents:

  1. Application type.
  2. Data set.
  3. Neural network.
  4. Training strategy.
  5. Model selection.
  6. Testing analysis.
  7. Model deployment.
  8. Tutorial video.

1. Application type

This is a binary classification project, since the variable to be predicted has two values (malignant or benign tumor).

The goal here is to model the probability that a tumor is malignant, conditioned on the fine needle aspiration test features.

2. Data set

The breast_cancer.csv file contains the data for this application. In binary classification projects, the target variable can only take two values: 0 (false) or 1 (true). The data set contains 683 instances (rows) and 10 variables (columns).

The number of input variables, or attributes, for each sample is 9. All input variables are numeric and represent measurements from digitized images of a fine-needle aspiration biopsy. The number of target variables is 1, and it represents the absence or presence of cancer in an individual. The following list summarizes the variables:

  - clump_thickness.
  - cell_size_uniformity.
  - cell_shape_uniformity.
  - marginal_adhesion.
  - single_epithelial_cell_size.
  - bare_nuclei.
  - bland_chromatin.
  - normal_nucleoli.
  - mitoses.
  - diagnose: benign (0) or malignant (1), the target.

Finally, the use of all instances is set. Note that each instance contains the input and target variables of a different patient. The data set is divided into training, selection and testing subsets: 60% of the instances are assigned for training, 20% for selection, and 20% for testing.
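This splitting step can be sketched in a few lines of Python. The snippet below is an illustrative equivalent using pandas and scikit-learn, not Neural Designer's internal procedure; the column name diagnose is taken from the model deployment section.

# Illustrative 60/20/20 split (a sketch, not Neural Designer's procedure).
import pandas as pd
from sklearn.model_selection import train_test_split

data = pd.read_csv("breast_cancer.csv")
X = data.drop(columns="diagnose")  # the 9 input variables
y = data["diagnose"]               # the target: 1 (malignant), 0 (benign)

# 60% for training; the remaining 40% is split evenly into
# selection (20%) and testing (20%) subsets.
X_train, X_rest, y_train, y_rest = train_test_split(
    X, y, train_size=0.6, random_state=0)
X_sel, X_test, y_sel, y_test = train_test_split(
    X_rest, y_rest, test_size=0.5, random_state=0)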

Once the data set has been set, we are ready to perform some descriptive analytics to check the quality of the data.

We can calculate the data statistics and draw a table with the minimum, maximum, mean and standard deviation of every variable in the data set. The next table depicts those values.

All variables range from 1 to 10. Note also that the mean of every variable is less than 5, and that the input variable with the smallest standard deviation is "mitoses".
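Those statistics can be reproduced with a couple of pandas calls, continuing the sketch above; this only illustrates the check, it is not how the table was generated.

# Minimum, maximum, mean and standard deviation of the input variables
# (continuing the pandas sketch above).
stats = X.describe().loc[["min", "max", "mean", "std"]].T
print(stats)
print(stats["std"].idxmin())  # the input with the smallest spread: "mitoses"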

We can also calculate the distributions of all variables. The following figure shows a pie chart with the numbers of malignant (positive) and benign (negative) tumors in the data set.

As we can see, malignant tumors represent approximately 35% of the samples, and benign tumors the remaining 65%.
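The same proportions can be read directly off the target column, continuing the sketch above.

# Class balance of the target variable.
print(y.value_counts())                # absolute counts of 1s and 0s
print(y.value_counts(normalize=True))  # proportions: roughly 0.35 vs 0.65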

The inputs-targets correlations might indicate which factors are most influential in whether a tumor is malignant or benign.

Here, the variables most correlated with malignant tumors are bare nuclei, cell shape uniformity and cell size uniformity.
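A plain Pearson correlation of each input column with the binary target gives a quick approximation of this ranking; note that this is a sketch, and the tool may use a different correlation measure for binary targets.

# Correlation of each input with the binary target, strongest first.
correlations = X.corrwith(y).sort_values(key=abs, ascending=False)
print(correlations.head(3))  # expected on top: bare_nuclei,
                             # cell_shape_uniformity, cell_size_uniformity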

3. Neural network

The second step is to set a neural network to represent the classification function. For this class of applications, the neural network is composed of a scaling layer, two perceptron layers, and a probabilistic layer.

The scaling layer contains the statistics on the inputs calculated from the data file and the method for scaling the input variables. Here the minimum and maximum method has been set. Nevertheless, the mean and standard deviation method would produce very similar results.

Two perceptron layers are used: a logistic hidden layer and a logistic output layer. Note that, since the logistic function ranges from 0 to 1, the outputs from the output layer can be interpreted as probabilities. The neural network must have 9 inputs, since there are nine input variables, and 1 output, since there is one target variable. As an initial guess, we use 3 neurons in the hidden layer.

The probabilistic layer only contains the method for interpreting the outputs as probabilities. Indeed, as the sum of all outputs from a probabilistic layer must be 1, such methods would always yield 1 here, since there is only one output. Moreover, as the activation function of the output layer is the logistic, that output can already be interpreted as a probability of class membership.

The next figure is a graphical representation of this neural network for breast cancer diagnosis.
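For readers who prefer something runnable, the architecture can be approximated with scikit-learn. This is only an analogue of the network described above, not Neural Designer's implementation.

# A rough scikit-learn analogue: inputs scaled to [-1, 1], one logistic
# hidden layer with 3 neurons, and a logistic output whose value is read
# as the probability of malignancy.
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

model = make_pipeline(
    MinMaxScaler(feature_range=(-1, 1)),    # the "scaling layer"
    MLPClassifier(hidden_layer_sizes=(3,),  # initial guess: 3 hidden neurons
                  activation="logistic",    # logistic hidden layer (the binary
                  max_iter=1000),           # output layer is logistic as well)
)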

4. Training strategy

The third step is to set the training strategy, which is composed of two components: a loss index and an optimization algorithm.

The loss index is the weighted squared error with L2 regularization. This is the default loss index for binary classification applications.

The learning problem can be stated as finding a neural network which minimizes the loss index. That is, a neural network that fits the data set (error term) and that does not oscillate (regularization term).
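As a worked illustration, the loss index can be written down as a small function: a weighted squared error over the instances plus an L2 penalty on the parameters. The class weights and regularization coefficient below are placeholder values, not the ones used in this study.

import numpy as np

def loss_index(y_true, y_pred, theta,
               positives_weight=1.0, negatives_weight=1.0, reg=1e-3):
    # Error term: squared errors, weighted per class so that the rarer
    # malignant class can count more heavily than the benign one.
    w = np.where(y_true == 1, positives_weight, negatives_weight)
    error = np.mean(w * (y_true - y_pred) ** 2)
    # Regularization term: penalizes large parameters so that the
    # resulting model does not oscillate.
    return error + reg * np.sum(theta ** 2)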

The optimization algorithm that we use is the quasi-Newton method. This is also the standard optimization algorithm for this type of problems.

The following chart shows how the error decreases with the iterations during the training process. The final values are training error = 0.054 WSE and selection error = 0.072 WSE.
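In the scikit-learn sketch started earlier, the closest counterpart is the L-BFGS solver, which is a quasi-Newton method. Keep in mind that scikit-learn minimizes the log-loss with an L2 penalty (alpha) rather than the weighted squared error, so its error values will not match the WSE figures above.

# Quasi-Newton training via L-BFGS (continuing the scikit-learn sketch).
model.set_params(mlpclassifier__solver="lbfgs", mlpclassifier__alpha=1e-3)
model.fit(X_train, y_train)
print("training accuracy: ", model.score(X_train, y_train))
print("selection accuracy:", model.score(X_sel, y_sel))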

5. Model selection

The objective of model selection is to find the network architecture with the best generalization properties, that is, the one that minimizes the error on the selection instances of the data set.

More specifically, we want to find a neural network with a selection error less than 0.072 WSE, which is the value that we have achieved so far.

Order selection algorithms train several network architectures with different numbers of neurons and select the one with the smallest selection error.

The incremental order method starts with a small number of neurons and increases the complexity at each iteration. The following chart shows the training error (blue) and the selection error (orange) as a function of the number of neurons.
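This loop is straightforward to emulate over the split defined earlier: train the same pipeline with a growing number of hidden neurons and keep the architecture with the smallest selection error.

# Incremental order selection (sketch).
best_neurons, best_error = None, float("inf")
for neurons in range(1, 11):
    model.set_params(mlpclassifier__hidden_layer_sizes=(neurons,))
    model.fit(X_train, y_train)
    selection_error = 1.0 - model.score(X_sel, y_sel)
    if selection_error < best_error:
        best_neurons, best_error = neurons, selection_error
print(f"best order: {best_neurons} neurons, "
      f"selection error: {best_error:.3f}")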

The figure below shows the final architecture for the neural network.

6. Testing analysis

The objective of testing analysis is to validate the generalization performance of the trained neural network. To validate a classification technique we need to compare the values provided by this technique to the actually observed values.

The following table contains the elements of the confusion matrix for the variable diagnose: the true positives, false negatives, false positives, and true negatives.

                   Predicted positive      Predicted negative
Real positive      129 (true positives)    3 (false negatives)
Real negative      1 (false positives)     37 (true negatives)

The binary classification tests are parameters for measuring the performance of a classification problem with two classes, such as the classification accuracy, error rate, sensitivity, and specificity.
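These tests follow from the confusion matrix above by simple arithmetic:

# Binary classification tests computed from the confusion matrix above.
TP, FN, FP, TN = 129, 3, 1, 37

accuracy    = (TP + TN) / (TP + TN + FP + FN)  # 166/170 ≈ 0.976
error_rate  = (FP + FN) / (TP + TN + FP + FN)  # 4/170   ≈ 0.024
sensitivity = TP / (TP + FN)                   # 129/132 ≈ 0.977 (malignant detected)
specificity = TN / (TN + FP)                   # 37/38   ≈ 0.974 (benign recognized)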

7. Model deployment

Once the generalization performance of the neural network has been tested, the neural network can be saved for future use in the so-called model deployment mode.

We can diagnose new patients by calculating the neural network outputs from the values of their input variables.

The mathematical expression represented by the neural network is written below. It takes the inputs clump_thickness, cell_size_uniformity, cell_shape_uniformity, marginal_adhesion, single_epithelial_cell_size, bare_nuclei, bland_chromatin, normal_nucleoli and mitoses to produce the output diagnose. For classification problems, the information is propagated in a feed-forward fashion through the scaling layer, the perceptron layers and the probabilistic layer.

scaled_clump_thickness = (clump_thickness-4.44217)/2.82076;
scaled_cell_size_uniformity = (cell_size_uniformity-3.15081)/3.06514;
scaled_cell_shape_uniformity = (cell_shape_uniformity-3.21523)/2.98858;
scaled_marginal_adhesion = (marginal_adhesion-2.83016)/2.86456;
scaled_single_epithelial_cell_size = (single_epithelial_cell_size-3.23426)/2.22309;
scaled_bare_nuclei = (bare_nuclei-3.54466)/3.64386;
scaled_bland_chromatin = (bland_chromatin-3.4451)/2.4497;
scaled_normal_nucleoli = (normal_nucleoli-2.86969)/3.05267;
scaled_mitoses = (mitoses-1.60322)/1.73267;

y_1_1 = Logistic (-1.35621+ (scaled_clump_thickness*-2.54409)+ (scaled_cell_size_uniformity*-5.01572)
+ (scaled_cell_shape_uniformity*-3.39576)+ (scaled_marginal_adhesion*-0.278873)+ (scaled_single_epithelial_cell_size*-2.61646)
+ (scaled_bare_nuclei*-5.51018)+ (scaled_bland_chromatin*-0.979982)+ (scaled_normal_nucleoli*-1.71412)+ (scaled_mitoses*0.410197));
non_probabilistic_diagnose = Logistic (3.94959+ (y_1_1*-9.14654));

diagnose = Probability(non_probabilistic_diagnose);

The above expression can be exported anywhere, for instance, to a dedicated diagnosis software tool to be used by doctors.
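For readers who want to evaluate the expression directly, here is a straightforward Python transcription. The coefficients are copied verbatim from the expression above; the patient values at the end are hypothetical and only illustrate the call.

import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

def diagnose(inputs):
    # inputs: length-9 sequence ordered as in the expression above,
    # from clump_thickness to mitoses.
    means = np.array([4.44217, 3.15081, 3.21523, 2.83016, 3.23426,
                      3.54466, 3.4451, 2.86969, 1.60322])
    stds = np.array([2.82076, 3.06514, 2.98858, 2.86456, 2.22309,
                     3.64386, 2.4497, 3.05267, 1.73267])
    weights = np.array([-2.54409, -5.01572, -3.39576, -0.278873, -2.61646,
                        -5.51018, -0.979982, -1.71412, 0.410197])
    scaled = (np.asarray(inputs, dtype=float) - means) / stds
    y_1_1 = logistic(-1.35621 + scaled @ weights)
    return logistic(3.94959 - 9.14654 * y_1_1)

# Hypothetical patient with all nine features at mid-range values:
print(diagnose([5, 5, 5, 5, 5, 5, 5, 5, 5]))  # probability of malignancy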

8. Tutorial video

You can watch the step-by-step tutorial video below to help you complete this machine learning example for free using the easy-to-use machine learning software Neural Designer.
