In this example an empirical model for the residuary resistance of sailing yachts as a function of hull geometry coefficients and the Froude number is constructed by means of a neural network.

The Delft data set comprises 308 full-scale experiments, which were performed at the Delft Ship Hydromechanics Laboratory. These experiments include 22 different hull forms, derived from a parent form closely related to the `Standfast 43' designed by Frans Maas.

Variations concern the longitudinal position of the center of buoyancy, the prismatic coefficient, the length-displacement ratio, the beam-draught ratio, and the length-beam ratio. For every hull form, 14 different values of the Froude number, ranging from 0.125 to 0.450, are considered. The measured variable is the residuary resistance per unit weight of displacement.

This is an approximation project, since the variable to be predicted is continuous (residuary resistance).

The basic goal here is to model the residuary resistance of a yacht, as a function of its geometry and speed.

The first step is to prepare the data set, which is the source of information for the approximation problem. It contains the following three concepts:

- Data source.
- Variables.
- Instances.

The file yacht_hydrodynamics.csv contains the data for this example. Here the number of instances (rows) is 308, and the number of variables (columns) is 7.

The data set contains the following variables:

- **center_of_buoyancy**: Longitudinal position of the center of buoyancy, dimensionless. It is an input.
- **prismatic_coefficient**: Prismatic coefficient, dimensionless. It is an input.
- **length_displacement**: Length-displacement ratio, dimensionless. It is an input.
- **beam_draught_ratio**: Beam-draught ratio, dimensionless. It is an input.
- **length_beam_ratio**: Length-beam ratio, dimensionless. It is an input.
- **froude_number**: Froude number, dimensionless. It is an input.
- **resistance**: Residuary resistance per unit weight of displacement, dimensionless. It is the only target in this example.

Note that the use of each variable (Input, Target, or Unused) must be selected carefully. Any application must have one or more inputs and one or more targets.

Finally, the instances are divided into training, validation, and testing subsets. Here the instances are split at random with ratios 0.6, 0.2, and 0.2. More specifically, 186 instances are used for training, 61 for validation, and 61 for testing.
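The random split described above can be sketched as follows. This is a hypothetical illustration (Neural Designer performs the split internally); the index array stands in for the 308 rows of the data set.

```python
# Hypothetical sketch of the 0.6/0.2/0.2 random split of the 308 instances.
import numpy as np

rng = np.random.default_rng(0)      # fixed seed for reproducibility
indices = rng.permutation(308)      # one shuffled index per instance

n_val = n_test = int(0.2 * 308)     # 61 instances each
n_train = 308 - n_val - n_test      # the remaining 186 for training

train_idx = indices[:n_train]
val_idx = indices[n_train:n_train + n_val]
test_idx = indices[n_train + n_val:]
```

Taking the two smaller subsets first and assigning the remainder to training reproduces the 186/61/61 counts stated above.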

Once the data set page has been edited, we are ready to run a few related tasks to check that the provided information is of good quality.

We can calculate the data distributions and draw a histogram for each variable to see how it is distributed. The following figure shows the histogram for the resistance data, which is the only target.

As we can see, most of the data is concentrated at low resistance values.
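A histogram like the one in the figure can be reproduced with a simple binning step. In this hypothetical sketch, a skewed stand-in sample plays the role of the 308 measured resistance values.

```python
# Hypothetical sketch: bin a target variable into 10 intervals, as the
# data-distribution task does. The exponential sample below is only a
# stand-in for the real, low-value-heavy resistance data.
import numpy as np

rng = np.random.default_rng(0)
resistance = rng.exponential(scale=5.0, size=308)  # skewed stand-in data

counts, edges = np.histogram(resistance, bins=10)
for count, left, right in zip(counts, edges, edges[1:]):
    print(f"[{left:6.2f}, {right:6.2f}): {'#' * (count // 5)}")
```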

The next figure shows the correlations that the input variables have with the target variable.
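The input-target correlations in that figure are linear (Pearson) correlations, which pandas can compute directly. This is a hypothetical sketch; the random DataFrame below only stands in for the real CSV contents.

```python
# Hypothetical sketch: Pearson correlation of each input with the target.
# In practice `data` would be loaded from yacht_hydrodynamics.csv.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
data = pd.DataFrame(rng.random((308, 7)), columns=[
    "center_of_buoyancy", "prismatic_coefficient", "length_displacement",
    "beam_draught_ratio", "length_beam_ratio", "froude_number", "resistance"])

# Correlation of every input column with the resistance target.
correlations = data.corr()["resistance"].drop("resistance")
print(correlations.sort_values(ascending=False))
```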

The second step is to configure the neural network. For approximation project types, it is usually composed of:

- Scaling layer.
- Perceptron layers.
- Unscaling layer.
- Bounding layer.

The scaling layer section contains the statistics on the inputs calculated from the data file and the method for scaling the input variables. Here the minimum and maximum method has been set. Nevertheless, the mean and standard deviation method would produce very similar results.

A sigmoid hidden layer and a linear output layer of perceptrons are used in this problem (this is the default for approximation). The network must have 6 inputs and 1 output neuron. While the numbers of inputs and output neurons are constrained by the problem, the number of neurons in the hidden layer is a design variable. Here we use 6 neurons in the hidden layer, which yields 49 parameters. Finally, all the biases and synaptic weights in the neural network are initialized at random.

The unscaling layer contains the statistics on the outputs calculated from the data file and the method for unscaling the output variables. Here the minimum and maximum method will also be used.
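The minimum-maximum method maps each variable to the range [-1, 1] on the way in and back to its original range on the way out. A minimal sketch, using the Froude number bounds (0.125, 0.45) and the resistance bounds (0.01, 62.42) that appear in the exported expression later in this example:

```python
# Minimal sketch of the minimum-maximum scaling and unscaling methods.

def scale(x, x_min, x_max):
    """Map x from [x_min, x_max] to [-1, 1]."""
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

def unscale(y, y_min, y_max):
    """Map y from [-1, 1] back to [y_min, y_max]."""
    return 0.5 * (y + 1.0) * (y_max - y_min) + y_min

froude_scaled = scale(0.45, 0.125, 0.45)        # upper bound maps to 1.0
resistance = unscale(1.0, 0.01, 62.42)          # 1.0 maps back to 62.42
```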

The neural network for this example can be represented as the following diagram:

The function above is parameterized by all the biases and synaptic weights in the neural network, i.e., 49 parameters.
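The 49-parameter count follows from the 6-6-1 architecture: each layer has one bias per neuron plus one weight per input, so (6+1)×6 + (6+1)×1 = 49. A hypothetical forward-pass sketch (using tanh as the sigmoid hidden activation, as in the exported expression):

```python
# Hypothetical sketch of the 6-6-1 network's forward pass and parameter count.
import numpy as np

def forward(x, w1, b1, w2, b2):
    """Tanh hidden layer followed by a linear output layer."""
    hidden = np.tanh(x @ w1 + b1)
    return hidden @ w2 + b2

rng = np.random.default_rng(0)
w1, b1 = rng.standard_normal((6, 6)), rng.standard_normal(6)  # hidden layer
w2, b2 = rng.standard_normal((6, 1)), rng.standard_normal(1)  # output layer

n_parameters = w1.size + b1.size + w2.size + b2.size          # 49
output = forward(rng.standard_normal(6), w1, b1, w2, b2)      # one prediction
```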

The next step is to select an appropriate training strategy, which defines what the neural network will learn. A general training strategy for approximation is composed of two terms:

- A loss index.
- An optimization algorithm.

The loss index chosen for this problem is the normalized squared error between the outputs from the neural network and the targets in the data set. On the other hand, no regularization will be used for this application.
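The normalized squared error divides the sum of squared errors by a normalization coefficient, here taken as the sum of squared deviations of the targets from their mean; under that convention, a model that always predicts the target mean scores exactly 1. A minimal sketch:

```python
# Minimal sketch of the normalized squared error (NSE), assuming the
# normalization coefficient is the targets' total squared deviation.
import numpy as np

def normalized_squared_error(outputs, targets):
    """Sum of squared errors over the sum of squared target deviations."""
    sum_squared_error = np.sum((outputs - targets) ** 2)
    normalization = np.sum((targets - targets.mean()) ** 2)
    return sum_squared_error / normalization

targets = np.array([1.0, 2.0, 3.0, 4.0])
mean_predictor = np.full_like(targets, targets.mean())
nse = normalized_squared_error(mean_predictor, targets)  # exactly 1.0
```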

The selected optimization algorithm is the quasi-Newton method.
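The quasi-Newton family builds an estimate of the loss curvature from successive gradients instead of computing the Hessian directly. Neural Designer's implementation differs in its details, but the idea can be sketched with SciPy's BFGS optimizer on a toy loss:

```python
# Hypothetical sketch: quasi-Newton (BFGS) minimization of a toy quadratic
# loss standing in for the network's normalized squared error.
import numpy as np
from scipy.optimize import minimize

def loss(params):
    # Toy quadratic with a known minimum at (1, -2).
    return np.sum((params - np.array([1.0, -2.0])) ** 2)

result = minimize(loss, x0=np.zeros(2), method="BFGS")
```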

The most important training result is the final selection error.
Indeed, this is a measure of the generalization capabilities of the neural network.
Here the final selection error is **selection error = 0.007 NSE**.

Model selection algorithms are used to improve the generalization performance of the neural network.

As the selection error achieved so far is very small (0.007 NSE), these algorithms are not necessary here.

The next step is to perform a testing analysis to measure the generalization performance of the neural network. Here we compare the values provided by this technique to the actually observed values. Note that, since testing has no settings, there is no testing analysis page in Neural Designer.

A common testing technique for the neural network model is to perform a linear regression analysis between the predicted residuary resistance values and their corresponding experimental values, using an independent testing set. The following figure illustrates a graphical output provided by this testing analysis.

The solid line indicates the best linear fit. The dashed line with R² = 1 would indicate a perfect fit. From the information above, we can see that the neural network predicts the entire range of residuary resistance data very well. Indeed, the R² value is very close to 1.
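The quantities behind that plot are straightforward to compute. A hypothetical sketch, where a stand-in "near-perfect model" plays the role of the trained network on the 61 testing instances:

```python
# Hypothetical sketch of the testing analysis: fit a straight line to
# (predicted, observed) pairs and compute R^2. The data below is a
# stand-in for the independent testing set.
import numpy as np

rng = np.random.default_rng(0)
observed = rng.uniform(0.01, 62.42, size=61)           # testing targets
predicted = observed + rng.normal(0.0, 0.5, size=61)   # near-perfect model

slope, intercept = np.polyfit(predicted, observed, deg=1)
residuals = observed - (slope * predicted + intercept)
r2 = 1.0 - np.sum(residuals ** 2) / np.sum((observed - observed.mean()) ** 2)
```

A perfect model would give slope 1, intercept 0, and R² = 1.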

The neural network is now ready to estimate the residuary resistance of sailing yachts with satisfactory quality over the same range of data.

The neural network is now ready to predict resistances for geometries and velocities that it has never seen. For that, we can use some neural network tasks.

Directional outputs plot the resistance as a function of a given input, with all the others fixed to given values. The next figures show the directional inputs dialog and the corresponding directional output. Here we want to see how the resistance varies with the center of buoyancy for the following fixed variables:

- Prismatic coefficient: 0.6
- Length-displacement ratio: 4.34
- Beam-draught ratio: 4.23
- Length-beam ratio: 2.73
- Froude number: 0.45

The explicit expression for the residuary resistance model obtained by the neural network is listed below.

```
scaled_center_of_buoyancy = 2*(center_of_buoyancy+5)/(0+5)-1;
scaled_prismatic_coefficient = (prismatic_coefficient-0.564136)/0.02329;
scaled_length_displacement = 2*(length_displacement-4.34)/(5.14-4.34)-1;
scaled_beam_draught_ratio = (beam_draught_ratio-3.93682)/0.548193;
scaled_length_beam_ratio = 2*(length_beam_ratio-2.73)/(3.64-2.73)-1;
scaled_froude_number = 2*(froude_number-0.125)/(0.45-0.125)-1;
y_1_1 = tanh(-2.15185 + (scaled_center_of_buoyancy*0.0336499) + (scaled_prismatic_coefficient*0.0022637) + (scaled_length_displacement*0.203965) + (scaled_beam_draught_ratio*-0.126716) + (scaled_length_beam_ratio*-0.239965) + (scaled_froude_number*1.89048));
scaled_resistance = (1.17499 + (y_1_1*2.15355));
resistance = (0.5*(scaled_resistance+1.0)*(62.42-0.01)+0.01);
```
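The exported expression translates directly into an ordinary function. A hypothetical Python rendering, with every constant copied verbatim from the model above:

```python
# Hypothetical Python translation of the exported resistance expression;
# all constants are copied verbatim from the model above.
import math

def resistance_model(center_of_buoyancy, prismatic_coefficient,
                     length_displacement, beam_draught_ratio,
                     length_beam_ratio, froude_number):
    # Scale the inputs with the minimum-maximum / mean-std statistics.
    s_cb = 2*(center_of_buoyancy + 5)/(0 + 5) - 1
    s_pc = (prismatic_coefficient - 0.564136)/0.02329
    s_ld = 2*(length_displacement - 4.34)/(5.14 - 4.34) - 1
    s_bd = (beam_draught_ratio - 3.93682)/0.548193
    s_lb = 2*(length_beam_ratio - 2.73)/(3.64 - 2.73) - 1
    s_fn = 2*(froude_number - 0.125)/(0.45 - 0.125) - 1
    # Single tanh hidden neuron, then the linear output layer.
    y = math.tanh(-2.15185 + 0.0336499*s_cb + 0.0022637*s_pc
                  + 0.203965*s_ld - 0.126716*s_bd
                  - 0.239965*s_lb + 1.89048*s_fn)
    s_r = 1.17499 + 2.15355*y
    # Unscale the output back to the resistance range [0.01, 62.42].
    return 0.5*(s_r + 1.0)*(62.42 - 0.01) + 0.01
```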

The above expression could be exported elsewhere to design yachts according to these predictions.

- UCI Machine Learning Repository. Yacht Hydrodynamics Data Set.
- Ortigosa, I., Lopez, R., & Garcia, J. (2007). A neural networks approach to residuary resistance of sailing yachts prediction. In Proceedings of the international conference on marine engineering MARINE (Vol. 2007, p. 250).