In this example an empirical model for the residuary resistance of sailing yachts as a function of hull geometry coefficients and the Froude number is constructed by means of a neural network.
Prediction of the residuary resistance of sailing yachts at the initial design stage is of great value for evaluating the ship's performance and for estimating the required propulsive power. Essential inputs include the basic hull dimensions and the boat velocity. The next figure illustrates this example.
The Delft data set comprises 308 full-scale experiments, which were performed at the Delft Ship Hydromechanics Laboratory. These experiments include 22 different hull forms, derived from a parent form closely related to the `Standfast 43' designed by Frans Maas.
Variations concern the longitudinal position of the center of buoyancy, the prismatic coefficient, the length-displacement ratio, the beam-draught ratio, and the length-beam ratio. For every hull form, 14 different values of the Froude number ranging from 0.125 to 0.450 are considered. As mentioned above, the measured variable is the residuary resistance per unit weight of displacement.
In this example a neural network is trained with the Delft data set to provide an estimate of the residuary resistance per unit weight of displacement as a function of hull geometry coefficients and the Froude number.
This is an approximation project, since the variable to be predicted is continuous (residuary resistance).
The basic goal here is to model the residuary resistance of a yacht as a function of its geometry and speed.
The first step is to prepare the data set, which is the source of information for the approximation problem. It contains the following three concepts:
The file yacht_hydrodynamics.csv contains the data for this example. Here the number of instances (rows) is 308, and the number of variables (columns) is 7.
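As a sketch of how such a file can be read, the snippet below parses a two-row sample laid out like yacht_hydrodynamics.csv; the header names and values shown are assumptions for illustration, not the file's real contents.

```python
import csv
import io

# Hypothetical two-row sample in the same 7-column layout as
# yacht_hydrodynamics.csv (header names and values are assumptions).
sample = """center_of_buoyancy,prismatic_coefficient,length_displacement,beam_draught,length_beam,froude_number,resistance
-2.3,0.568,4.78,3.99,3.17,0.125,0.11
-2.3,0.568,4.78,3.99,3.17,0.450,8.62
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# The first 6 columns are inputs; the last one is the target.
inputs = [[float(r[k]) for k in list(r)[:-1]] for r in rows]
targets = [float(r["resistance"]) for r in rows]
```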
The data set contains the following variables:
Note that the use of each variable (Input, Target, or Unused) must be selected carefully. Any application must have one or more inputs and one or more targets.
Finally, the instances are divided into training, validation, and testing subsets. Here the instances are split at random with ratios 0.6, 0.2, and 0.2. More specifically, 186 instances are used for training, 61 for validation, and 61 for testing.
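A minimal sketch of such a random split is shown below; note that with simple rounding the subset sizes may differ by one instance from those reported above.

```python
import random

def split_instances(n, ratios=(0.6, 0.2, 0.2), seed=0):
    """Randomly partition instance indices into training, validation
    and testing subsets with the given ratios."""
    indices = list(range(n))
    random.Random(seed).shuffle(indices)
    n_train = int(round(ratios[0] * n))
    n_valid = int(round(ratios[1] * n))
    train = indices[:n_train]
    valid = indices[n_train:n_train + n_valid]
    test = indices[n_train + n_valid:]
    return train, valid, test

train, valid, test = split_instances(308)
```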
Once the data set page has been edited, we are ready to run a few related tasks to check that the provided information is of good quality.
We can calculate the data distributions and draw a histogram for each variable to see how its values are distributed. The following figure shows the histogram for the resistance data, which is the only target.
As we can see, most of the data is concentrated at low resistance values.
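This concentration can be quantified by binning the target column. The snippet below uses synthetic, exponentially distributed values as a stand-in for the real resistance data, purely to illustrate the histogram computation.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for the resistance column: skewed toward low
# values, as the real target is (the distribution is an assumption).
resistance = rng.exponential(scale=5.0, size=308)

counts, edges = np.histogram(resistance, bins=10)
low_bin_share = counts[:3].sum() / counts.sum()  # mass in the lowest bins
```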
The second step is to configure the neural network. For approximation projects, it is usually composed of:
The scaling layer section contains the statistics on the inputs calculated from the data file and the method for scaling the input variables. Here the minimum and maximum method has been set. Nevertheless, the mean and standard deviation method would produce very similar results.
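A sketch of the minimum-and-maximum method is shown below, assuming the inputs are mapped to the range [-1, 1]; the exact output range used by the tool is an assumption.

```python
import numpy as np

def minmax_scale(x, x_min, x_max):
    """Scale values to [-1, 1] using their minimum and maximum
    (the output range is an assumption for illustration)."""
    x = np.asarray(x, dtype=float)
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

# The Froude number ranges from 0.125 to 0.450 in this data set.
scaled = minmax_scale([0.125, 0.2875, 0.450], 0.125, 0.450)
```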
A sigmoid hidden layer and a linear output layer of perceptrons will be used in this problem (this is the default in approximation). The network must have 6 inputs and 1 output neuron. While the numbers of inputs and output neurons are constrained by the problem, the number of neurons in the hidden layer is a design variable. Here we use 6 neurons in the hidden layer, which yields 49 parameters. Finally, all the biases and synaptic weights in the neural network are initialized at random.
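The parameter count follows from each neuron having one bias plus one weight per incoming connection; a quick check:

```python
def perceptron_parameters(n_inputs, n_hidden, n_outputs):
    """Biases and synaptic weights in a network with one hidden layer:
    every neuron contributes one bias plus one weight per input."""
    hidden_layer = n_hidden * (n_inputs + 1)
    output_layer = n_outputs * (n_hidden + 1)
    return hidden_layer + output_layer

count = perceptron_parameters(6, 6, 1)  # 6*(6+1) + 1*(6+1) = 49
```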
The unscaling layer contains the statistics on the outputs calculated from the data file and the method for unscaling the output variables. Here the minimum and maximum method will also be used.
The neural network for this example can be represented as the following diagram:
The function above is parameterized by all the biases and synaptic weights in the neural network, i.e., 49 parameters.
The next step is to select an appropriate training strategy, which defines what the neural network will learn. A general training strategy for approximation is composed of two components:
The loss index chosen for this problem is the normalized squared error between the outputs from the neural network and the targets in the data set. On the other hand, no regularization will be used for this application.
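A sketch of this error is given below, assuming normalization by the squared deviation of the targets from their mean (so a value of 1 means the network does no better than always predicting the mean); the exact normalization coefficient used by the tool is an assumption.

```python
import numpy as np

def normalized_squared_error(outputs, targets):
    """Sum of squared errors divided by a normalization coefficient,
    here the squared deviation of the targets from their mean
    (an assumption for illustration)."""
    outputs = np.asarray(outputs, dtype=float)
    targets = np.asarray(targets, dtype=float)
    normalization = np.sum((targets - targets.mean()) ** 2)
    return np.sum((outputs - targets) ** 2) / normalization
```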
The selected optimization algorithm is the quasi-Newton method.
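Quasi-Newton methods iteratively refine an approximation of the (inverse) Hessian from gradient evaluations to choose search directions. As a stand-in for the real training loop, the sketch below fits a toy least-squares problem with SciPy's BFGS implementation, a quasi-Newton method.

```python
import numpy as np
from scipy.optimize import minimize

# Toy training problem: fit y = w*x + b by least squares with a
# quasi-Newton method (BFGS), standing in for network training.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * x + 1.0

def loss(p):
    w, b = p
    return np.sum((w * x + b - y) ** 2)

result = minimize(loss, x0=[0.0, 0.0], method="BFGS")
w_opt, b_opt = result.x
```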
The following table shows the training and validation errors for the three neural networks considered here. ET and EV represent the normalized squared errors made by the trained neural networks on the training and validation data sets, respectively. As we can see, the training error decreases with the complexity of the neural network. However, the validation error shows a minimum for the neural network with 9 hidden neurons. A possible explanation is that the lowest model complexity produces underfitting, while the highest model complexity produces overfitting.
In this way, the optimal number of neurons in the hidden layer turns out to be 9. This neural network is depicted in the next figure.
The next step is to perform a testing analysis to measure the generalization performance of the neural network. Here we compare the values provided by this technique to the actually observed values. Note that, since testing has no settings, there is no testing analysis page in Neural Designer.
It is convenient to explore the errors made by the neural network on single testing instances. The absolute error is the difference between some target and its corresponding output. The relative error is the absolute error divided by the range of the variable. The percentage error is the relative error multiplied by 100.
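These three error measures, along with their summary statistics, can be computed directly; the target and output values below are illustrative assumptions.

```python
import numpy as np

targets = np.array([0.5, 10.0, 40.0])   # illustrative resistance targets
outputs = np.array([0.7, 9.0, 42.0])    # illustrative network outputs

absolute = outputs - targets
value_range = targets.max() - targets.min()   # range of the target variable
relative = absolute / value_range
percentage = 100.0 * relative

# Error statistics over the testing instances.
stats = {"min": absolute.min(), "max": absolute.max(),
         "mean": absolute.mean(), "std": absolute.std()}
```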
The error statistics measure the minimums, maximums, means and standard deviations of the errors between the neural network and the testing instances in the data set. They provide a valuable tool for testing the quality of a model. The next figure shows the error data statistics for the resistance.
The error histograms show how the errors from the neural network on the testing instances are distributed. In general, a normal distribution for each output variable is expected here. The following chart depicts the error distribution for the resistance.
A possible testing technique for the neural network model is to perform a linear regression analysis between the predicted and their corresponding experimental residuary resistance values, using an independent testing set. This analysis leads to a line y = a + bx with a correlation coefficient R2. In this way, a perfect prediction would give a = 0, b = 1 and R2 = 1.
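A minimal version of this regression check, using NumPy's least-squares fit and correlation coefficient:

```python
import numpy as np

def linear_regression(targets, outputs):
    """Fit outputs = a + b * targets and return a, b and the squared
    correlation coefficient R^2 (perfect prediction: a=0, b=1, R^2=1)."""
    t = np.asarray(targets, dtype=float)
    o = np.asarray(outputs, dtype=float)
    b, a = np.polyfit(t, o, 1)        # slope, intercept
    r = np.corrcoef(t, o)[0, 1]
    return a, b, r ** 2

# Perfect predictions recover a=0, b=1, R^2=1.
a, b, r2 = linear_regression([1.0, 2.0, 3.0, 4.0], [1.0, 2.0, 3.0, 4.0])
```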
The following figure illustrates a graphical output provided by this testing analysis. The predicted residuary resistances are plotted versus the experimental ones as open circles. The solid line indicates the best linear fit. The dashed line with R2 = 1 would indicate perfect fit.
From the information above we can see that the neural network predicts the entire range of residuary resistance data very well. Indeed, the a, b and R2 values are very close to 0, 1 and 1, respectively. The neural network is now ready to estimate the residuary resistance of sailing yachts with satisfactory quality over the same range of data.
The neural network is now ready to predict resistances for geometries and velocities that it has never seen. For that, we can use some neural network tasks.
Directional outputs plot the resistance as a function of a given input, with all the others fixed at given values. The next figures show the directional inputs dialog and the corresponding directional output. Here we want to see how the resistance varies with the center of buoyancy for the following variables fixed:
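The idea behind directional outputs can be sketched as follows; the reference point and the quadratic stand-in model are assumptions for illustration, not the trained network.

```python
import numpy as np

def directional_output(model, reference, index, values):
    """Evaluate the model along one input direction: vary input
    `index` over `values` while the remaining inputs stay fixed
    at the reference point."""
    reference = np.asarray(reference, dtype=float)
    outputs = []
    for v in values:
        point = reference.copy()
        point[index] = v
        outputs.append(model(point))
    return np.array(outputs)

# Toy stand-in model: resistance grows with the Froude number (input 5).
model = lambda x: 1.0 + 3.0 * x[5] ** 2
curve = directional_output(model, [0.0, 0.56, 4.78, 3.99, 3.17, 0.125],
                           index=5, values=np.linspace(0.125, 0.45, 5))
```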
The explicit expression for the residuary resistance model obtained by the neural network is listed below.