Concrete is one of the most important materials in construction. Its quality depends mainly on the constituent materials and their proportions. The objective is to design concretes with given target properties: a product of the highest quality that meets the specifications, at a reduced cost achieved by using the exact mix.
Compressive strength is one of the most important properties of concrete. It is measured by breaking cylindrical concrete specimens in a compression-testing machine. Depending on the application (building, highway, etc.), a concrete with a specific compressive strength will be required.
The first step is to prepare the data set, which is the source of information for the approximation problem.
A set of compressive strength tests has been performed in the laboratory for 425 concrete specimens with different ingredients.
We begin by editing the data set information, which is composed of:
The following figure shows the data set page in Neural Designer.
Neural Designer shows a preview of the data file, reporting 8 columns and 425 rows.
The instances are divided into training, selection and testing subsets, representing 60%, 20% and 20% of the original instances, respectively, and split at random.
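The random 60/20/20 split above can be sketched in a few lines of NumPy. This is an illustrative stand-in for what Neural Designer does internally, not its actual implementation:

```python
import numpy as np

def split_instances(n, seed=0):
    """Randomly split n instance indices into training (60%),
    selection (20%) and testing (20%) subsets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n)
    n_train = int(0.6 * n)
    n_sel = int(0.2 * n)
    return idx[:n_train], idx[n_train:n_train + n_sel], idx[n_train + n_sel:]

train, sel, test = split_instances(425)
print(len(train), len(sel), len(test))  # 255 85 85
```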
The task "Report data set" shows useful information about the data set, such as the following table of variables.
The concrete compressive strength is a highly nonlinear function of age and ingredients. The objective is to model the compressive strength from these components.
The second step is to configure the neural network. For approximation project types, the neural network page is composed of:
The following figure shows the neural network page in Neural Designer.
The scaling layer section contains the statistics on the inputs calculated from the data file and the method for scaling the input variables. Here the minimum and maximum method has been set. Nevertheless, the mean and standard deviation method would produce very similar results.
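As a minimal sketch of the minimum-and-maximum scaling method, each input column can be mapped linearly onto a fixed range (assumed here to be [-1, 1]) from its own minimum and maximum; the cement values below are made up for illustration:

```python
import numpy as np

def minmax_scale(X):
    """Scale each column to [-1, 1] using its minimum and maximum,
    mirroring the minimum-and-maximum scaling method."""
    x_min, x_max = X.min(axis=0), X.max(axis=0)
    return 2 * (X - x_min) / (x_max - x_min) - 1

cement = np.array([[100.0], [300.0], [500.0]])  # illustrative values, kg/m3
print(minmax_scale(cement).ravel())  # [-1.  0.  1.]
```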
In this case, the neural network structure has 7 inputs, 3 hidden perceptrons and 1 output. This neural network can be denoted as 7:3:1. The following image represents it.
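A forward pass through such a 7:3:1 network can be written out directly. The weights below are random placeholders, and the tanh hidden activation is an assumption about the architecture:

```python
import numpy as np

def forward(x, W1, b1, W2, b2):
    """Forward pass of a 7:3:1 network: one hidden layer of 3 tanh
    perceptrons followed by a linear output neuron."""
    h = np.tanh(W1 @ x + b1)  # hidden layer, shape (3,)
    return W2 @ h + b2        # output, shape (1,)

rng = np.random.default_rng(1)
W1 = rng.normal(size=(3, 7)); b1 = rng.normal(size=3)   # placeholder weights
W2 = rng.normal(size=(1, 3)); b2 = rng.normal(size=1)
y = forward(rng.normal(size=7), W1, b1, W2, b2)
```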
The third step is to select an appropriate loss index, which defines what the neural network will learn. A general loss index for approximation is composed of two terms:
The following figure shows the loss index page in Neural Designer.
The objective term is set to be the normalized squared error. It divides the squared error between the outputs from the neural network and the targets in the data set by a normalization coefficient. If the normalized squared error has a value of unity, the neural network is predicting the data "in the mean", while a value of zero means perfect prediction of the data. This objective term does not have any parameters to set.
The neural parameters norm is used as regularization term. It is applied to control the complexity of the neural network by reducing the value of the parameters. The weight of this regularization term in the loss index is 0.001.
The learning problem can be stated as finding a neural network that minimizes the loss index, i.e., one that fits the data set (objective) and does not oscillate (regularization).
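The loss index described above can be sketched as follows. It assumes the normalization coefficient is the sum of squared deviations of the targets from their mean, which makes mean prediction score exactly 1, consistent with the description:

```python
import numpy as np

def loss_index(outputs, targets, parameters, reg_weight=0.001):
    """Loss index = normalized squared error + weighted parameters norm.
    Assumes the normalization coefficient is the sum of squared
    deviations of the targets from their mean, so predicting the
    mean of the data gives an objective of 1."""
    normalization = np.sum((targets - targets.mean()) ** 2)
    nse = np.sum((outputs - targets) ** 2) / normalization
    return nse + reg_weight * np.linalg.norm(parameters)
```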
The next step in solving this problem is to assign the training strategy.
The next figure shows the training strategy page in Neural Designer.
The neural network is trained in order to obtain the best possible performance.
The next table shows the training results by the quasi-Newton method. We can see that the gradient norm is almost zero.
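To illustrate why the gradient norm approaches zero at the end of quasi-Newton training, here is a toy one-parameter fitting problem minimized with SciPy's BFGS (a quasi-Newton method); the data and model are hypothetical, not the concrete network itself:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: fit y = w * x to noisy data by minimizing squared error.
rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + 0.01 * rng.normal(size=50)

def loss(w):
    return np.sum((w[0] * x - y) ** 2)

result = minimize(loss, x0=[0.0], method="BFGS")
# At the minimum found by the quasi-Newton method, the gradient
# norm is close to zero, as in the training results table.
print(result.x, np.linalg.norm(result.jac))
```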
The best generalization is achieved with a model whose complexity is the most appropriate to produce an adequate fit of the data. Order selection is responsible for finding the optimal number of hidden perceptrons.
In this example, the order selection algorithm is simulated annealing, evaluated with a maximum order of 15 hidden perceptrons and a cooling rate of 0.8.
The output of the results shows the next graph with the losses for each order evaluated. The red line represents the selection loss, and the blue line symbolizes the training loss.
It also shows a table with the losses for the optimal order and some final states of the algorithm.
The algorithm selects the order with the minimum selection loss; for orders greater than this, the selection error increases because a more complex model overfits the data.
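The selection criterion itself is simple: among the orders evaluated, keep the one with the lowest selection loss. The loss values below are made-up numbers that show the typical fall-then-rise shape of the selection curve:

```python
def select_order(selection_losses):
    """Return the order (number of hidden perceptrons) with the
    minimum selection loss."""
    return min(selection_losses, key=selection_losses.get)

# Illustrative losses: selection error falls, then rises as larger
# models start to overfit.
losses = {1: 0.30, 2: 0.22, 3: 0.15, 4: 0.17, 5: 0.21}
print(select_order(losses))  # 3
```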
A standard method for testing the prediction capabilities is to compare the outputs from the neural network against an independent set of data.
The next plot shows the predicted compressive strength values versus the actual ones. As we can see, both values are very similar for the entire range of data. The correlation coefficient is close to 1, which indicates that there is a good correlation.
The next table lists the linear regression parameters for the scaled output compressive_strength.
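The linear regression analysis used in testing fits a straight line between predicted and actual values; a perfect model gives intercept 0, slope 1 and correlation 1. A minimal NumPy version of that computation:

```python
import numpy as np

def regression_analysis(predicted, actual):
    """Fit actual = intercept + slope * predicted and return the
    intercept, slope and correlation coefficient, as in the
    testing linear-regression table."""
    slope, intercept = np.polyfit(predicted, actual, 1)
    r = np.corrcoef(predicted, actual)[0, 1]
    return intercept, slope, r
```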
It is also convenient to explore the errors made by the neural network on single testing instances. In this example, some outliers have been removed in order to achieve the best possible performance. The mean error is 5.53%, with a standard deviation of 3.69%, which is a good value for this kind of application.
Once we know that the neural network can predict the compressive strength accurately, it can be used to design concretes with given properties. The following listing is the mathematical expression represented by the predictive model.
This expression can be exported to the software tool required by the customer.
The purpose of improving the quality of concrete is to help construction companies obtain the product best suited to their needs at minimum cost. We have used a neural network to model 425 concrete specimens, in order to predict the compressive strength as a function of the constituent materials and their proportions.
To conclude, neural networks are a simple and efficient method that can bring a competitive advantage to your business.