Improve the performance of combined cycle power plants


A combined cycle power plant is composed of gas turbines, steam turbines and heat recovery steam generators.

In this type of plant, electricity is generated by gas and steam turbines combined in one cycle, in which the energy from one turbine is transferred to the other.

While the vacuum is collected from the steam turbine and affects its performance, the ambient variables affect the gas turbine performance.

The goal of this example is to model the energy generated as a function of exhaust vacuum and ambient variables, and use that model to improve the performance of the plant.

Contents:

  1. Application type
  2. Data set
  3. Neural network
  4. Training strategy
  5. Model selection
  6. Testing analysis
  7. Model deployment

1. Application type

This is an approximation project, since the variable to be predicted is continuous (energy production).

The basic goal here is to model the energy production as a function of the environmental and control variables.

2. Data set

The data set contains three concepts: the data source, the variables, and the instances.

The data file combined_cycle_power_plant.csv contains 9568 samples with 5 variables, collected from a combined cycle power plant over 6 years while the power plant was set to work at full load. The measurements were taken every second.

The variables, or features, are the following: the inputs temperature, exhaust_vacuum, ambient_pressure and relative_humidity, and the target energy_output.

The instances are divided into training, selection and testing subsets. They represent 60%, 20% and 20% of the original instances, respectively, and are split at random.
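As a rough sketch, this split could be reproduced with pandas and NumPy; the file name comes from the data set described above, while the random seed is an arbitrary assumption:

import numpy as np
import pandas as pd

# Load the data set described above.
data = pd.read_csv("combined_cycle_power_plant.csv")

# Shuffle the instance indices and split them 60% / 20% / 20%.
rng = np.random.default_rng(0)            # seed chosen arbitrarily
indices = rng.permutation(len(data))

n_train = int(0.6 * len(data))
n_selection = int(0.2 * len(data))

train = data.iloc[indices[:n_train]]
selection = data.iloc[indices[n_train:n_train + n_selection]]
test = data.iloc[indices[n_train + n_selection:]]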

Calculating the data distributions helps us check the correctness of the available information and detect anomalies. The following chart shows the histogram for the variable energy_output.

As we can see, there are more scenarios where the energy produced is low than where it is high.
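A minimal sketch for reproducing this histogram with pandas and Matplotlib; the column name energy_output is assumed to match the variable name used in this example:

import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("combined_cycle_power_plant.csv")

# Histogram of the target variable (column name is an assumption).
data["energy_output"].plot(kind="hist", bins=30, edgecolor="black")
plt.xlabel("energy_output")
plt.ylabel("frequency")
plt.show()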

It is also interesting to look for dependencies between single input and single target variables. To do that, we can plot an inputs-targets correlations chart.

The highest correlation is yielded by the temperature (in general, the higher the temperature, the lower the energy production).

Next we plot a scatter chart for the energy output and the exhaust vacuum.

As we can see, the energy output is highly correlated with the exhaust vacuum. In general, the higher the exhaust vacuum, the lower the energy production.
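Both the inputs-targets correlations and the scatter chart above can be approximated with a few lines of pandas and Matplotlib; the column names are assumed to match the variable names used in this example:

import pandas as pd
import matplotlib.pyplot as plt

data = pd.read_csv("combined_cycle_power_plant.csv")

inputs = ["temperature", "exhaust_vacuum", "ambient_pressure", "relative_humidity"]

# Linear (Pearson) correlation of each input with the target.
correlations = data[inputs].corrwith(data["energy_output"])
print(correlations.sort_values())

# Scatter chart of the energy output versus the exhaust vacuum.
plt.scatter(data["exhaust_vacuum"], data["energy_output"], s=2)
plt.xlabel("exhaust_vacuum")
plt.ylabel("energy_output")
plt.show()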

3. Neural network

The second step is to build a neural network that represents the approximation function. For approximation problems, it is usually composed of a scaling layer, a number of perceptron layers, and an unscaling layer.

The neural network has 4 inputs (temperature, exhaust vacuum, ambient pressure and relative humidity) and 1 output (energy output).

The scaling layer contains the statistics of the inputs. As all inputs have normal distributions, we use the mean and standard deviation scaling method.

We use 2 perceptron layers here: a hidden layer with 3 neurons and hyperbolic tangent activation, and an output layer with 1 neuron and linear activation.

The unscaling layer contains the statistics of the outputs. As the output has a normal distribution, we use the mean and standard deviation unscaling method.

The next graph represents the neural network for this example.
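The network in this example was built with a dedicated tool, but an equivalent 4-3-1 architecture with scaling and unscaling can be sketched with scikit-learn; the solver and hyperparameter values below are assumptions, not the settings of the original model:

from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor
from sklearn.compose import TransformedTargetRegressor

# Scaling layer (mean/std), a hidden perceptron layer with 3 tanh neurons,
# a linear output neuron, and mean/std unscaling of the target.
network = TransformedTargetRegressor(
    regressor=Pipeline([
        ("scaling", StandardScaler()),
        ("perceptrons", MLPRegressor(hidden_layer_sizes=(3,),
                                     activation="tanh",
                                     solver="lbfgs",   # a quasi-Newton solver
                                     alpha=1e-4,       # weak L2 regularization
                                     max_iter=1000)),
    ]),
    transformer=StandardScaler(),
)

# network.fit(X_train, y_train) would fit it to the training instances.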

4. Training strategy

The next step is to select an appropriate training strategy. It is composed of two things: a loss index and an optimization algorithm.

The loss index defines what the neural network will learn. It is composed of an error term and a regularization term.

The error term chosen is the normalized squared error. It divides the squared error between the outputs from the neural network and the targets in the data set by a normalization coefficient. If the normalized squared error has a value of 1 then the neural network is predicting the data 'in the mean', while a value of zero means perfect prediction of the data. This error term does not have any parameters to set.

The regularization term is the L2 regularization. It is applied to control the complexity of the neural network by reducing the value of the parameters. We use a weak weight for this regularization term.
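As an illustration, the loss index described above could be written as follows; the choice of normalization coefficient (the sum of squared deviations of the targets from their mean) and the regularization weight are assumptions:

import numpy as np

def normalized_squared_error(targets, outputs):
    # Squared error divided by a normalization coefficient; a value of 1 means
    # the network predicts the data "in the mean", 0 means perfect prediction.
    normalization = np.sum((targets - np.mean(targets)) ** 2)
    return np.sum((targets - outputs) ** 2) / normalization

def loss_index(targets, outputs, parameters, regularization_weight=1e-3):
    # Error term plus a weak L2 regularization term on the network parameters.
    return (normalized_squared_error(targets, outputs)
            + regularization_weight * np.sum(np.asarray(parameters) ** 2))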

The optimization algorithm is in charge of searching for the neural network parameters that minimize the loss index. Here we chose the quasi-Newton method as the optimization algorithm.
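A minimal sketch of quasi-Newton training using SciPy's BFGS implementation on the loss index above; the tiny 4-3-1 network and the random data are placeholders for illustration, not the actual model or data of this example:

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))                  # placeholder inputs
y = rng.normal(size=100)                       # placeholder targets

def unpack(w):
    # 4 inputs -> 3 tanh neurons -> 1 linear output (19 parameters in total)
    W1 = w[:12].reshape(4, 3); b1 = w[12:15]
    W2 = w[15:18];             b2 = w[18]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    return np.tanh(X @ W1 + b1) @ W2 + b2

def loss(w):
    outputs = forward(w, X)
    nse = np.sum((y - outputs) ** 2) / np.sum((y - np.mean(y)) ** 2)
    return nse + 1e-3 * np.sum(w ** 2)         # weak L2 regularization

result = minimize(loss, rng.normal(size=19), method="BFGS")
print(result.fun)                              # final loss index value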

The following chart shows how the training (blue) and selection (orange) errors decrease with the epochs during the training process. The final values are training error = 0.057 NSE and selection error = 0.067 NSE, respectively.

5. Model selection

Model selection algorithms are used to improve the generalization performance of the neural network.

As the selection error that we have achieved so far is very small (0.067 NSE), this kind of algorithm is not necessary here.

6. Testing analysis

The purpose of testing analysis is to validate the generalization capabilities of the neural network. For that, we use the testing instances in the data set, which have never been used before.

A standard testing method in approximation applications is to perform a linear regression analysis between the predicted and the real energy output values.

For a perfect fit, the correlation coefficient R2 would be 1. As we have R2 = 0.968, the neural network predicts the testing data very well.
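A hedged sketch of this linear regression analysis with SciPy, assuming a trained model and the testing subset held out earlier:

from scipy.stats import linregress

def testing_analysis(actual, predicted):
    # Linear regression between the real and predicted energy output values.
    regression = linregress(actual, predicted)
    r2 = regression.rvalue ** 2          # 1 would indicate a perfect fit
    return regression.slope, regression.intercept, r2

# Example usage (the variable names are assumptions):
# slope, intercept, r2 = testing_analysis(y_test, network.predict(X_test))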

7. Model deployment

In the model deployment phase, the neural network is used to predict outputs for inputs that it has never seen.

We can calculate the neural network outputs for a given set of inputs:

Directional outputs plot the neural network outputs as one input varies while the rest are fixed at a reference point.

The next list shows the reference point for the plots.

Next, we define a reference point and see how the energy production varies with the exhaust vacuum around that point.

As we can see, reducing the exhaust vacuum increases the energy output.

The mathematical expression represented by the predictive model is listed next:

scaled_temperature = (temperature-19.6512)/7.45247;
scaled_exhaust_vacuum = 2*(exhaust_vacuum-25.36)/(81.56-25.36)-1;
scaled_ambient_pressure = (ambient_pressure-1013.26)/5.93878;
scaled_relative_humidity = (relative_humidity-73.309)/14.6003;

y_1_1 = tanh(-0.158471 + (scaled_temperature*0.200864) + (scaled_exhaust_vacuum*0.73313) 
                       + (scaled_ambient_pressure*-0.19189) + (scaled_relative_humidity*0.0133642));

y_1_2 = tanh(-0.290828 + (scaled_temperature*-0.020375) + (scaled_exhaust_vacuum*-0.263848) 
                       + (scaled_ambient_pressure*-0.227397)+ (scaled_relative_humidity*0.337468));

y_1_3 = tanh(0.574054 + (scaled_temperature*0.572764) + (scaled_exhaust_vacuum*-0.0264721) 
                      + (scaled_ambient_pressure*0.109944)+ (scaled_relative_humidity*0.00934301));

scaled_energy_output =  (0.162012+ (y_1_1*-0.382654) + (y_1_2*-0.126065) + (y_1_3*-0.748958));

energy_output = (0.5*(scaled_energy_output+1.0)*(495.76-420.26)+420.26);
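The expression above can be transcribed directly into Python so that the model can be evaluated for new inputs; the reference point used in the sweep below is only illustrative (the temperature, pressure and humidity values are taken from the scaling statistics above), not the reference point of the original plots:

import math

def energy_output(temperature, exhaust_vacuum, ambient_pressure, relative_humidity):
    # Direct transcription of the expression listed above.
    scaled_temperature = (temperature - 19.6512) / 7.45247
    scaled_exhaust_vacuum = 2 * (exhaust_vacuum - 25.36) / (81.56 - 25.36) - 1
    scaled_ambient_pressure = (ambient_pressure - 1013.26) / 5.93878
    scaled_relative_humidity = (relative_humidity - 73.309) / 14.6003

    y_1_1 = math.tanh(-0.158471 + scaled_temperature * 0.200864
                      + scaled_exhaust_vacuum * 0.73313
                      + scaled_ambient_pressure * -0.19189
                      + scaled_relative_humidity * 0.0133642)
    y_1_2 = math.tanh(-0.290828 + scaled_temperature * -0.020375
                      + scaled_exhaust_vacuum * -0.263848
                      + scaled_ambient_pressure * -0.227397
                      + scaled_relative_humidity * 0.337468)
    y_1_3 = math.tanh(0.574054 + scaled_temperature * 0.572764
                      + scaled_exhaust_vacuum * -0.0264721
                      + scaled_ambient_pressure * 0.109944
                      + scaled_relative_humidity * 0.00934301)

    scaled_energy_output = (0.162012 + y_1_1 * -0.382654
                            + y_1_2 * -0.126065 + y_1_3 * -0.748958)

    return 0.5 * (scaled_energy_output + 1.0) * (495.76 - 420.26) + 420.26

# Directional output: vary the exhaust vacuum around an illustrative reference point.
for vacuum in (40.0, 50.0, 60.0, 70.0):
    print(vacuum, energy_output(19.65, vacuum, 1013.26, 73.31))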
        
