
Yacht hydrodynamics modeling

By Roberto Lopez, Artelnics.

In this tutorial an empirical model for the residuary resistance of sailing yachts as a function of hull geometry coefficients and the Froude number is constructed by means of a neural network. The data for this problem has been taken from the UCI Machine Learning Repository.

Prediction of the residuary resistance of sailing yachts at the initial design stage is of great value for evaluating the ship's performance and for estimating the required propulsive power. Essential inputs include the basic hull dimensions and the boat velocity. The next figure illustrates this example.

Sailing yachts
Sailing yachts.

Contents:

  1. Data set
  2. Neural network
  3. Loss index
  4. Training strategy
  5. Testing analysis
  6. Model deployment

1. Data set

The Delft data set comprises 308 full-scale experiments, which were performed at the Delft Ship Hydromechanics Laboratory. These experiments include 22 different hull forms, derived from a parent form closely related to the 'Standfast 43' designed by Frans Maas. Variations concern the longitudinal position of the center of buoyancy, the prismatic coefficient, the length-displacement ratio, the beam-draught ratio, and the length-beam ratio. For every hull form, 14 different values of the Froude number ranging from 0.125 to 0.450 are considered, which gives the 308 experiments (22 hull forms × 14 Froude numbers). As mentioned above, the measured variable is the residuary resistance per unit weight of displacement.

In this example a neural network is trained with the Delft data set to provide an estimate of the residuary resistance per unit weight of displacement as a function of the hull geometry coefficients and the Froude number.

The first step is to prepare the data set, which is the source of information for the approximation problem. The file yachthydrodynamics.dat contains the data for this example. The input to Neural Designer is a data set, which can have different formats (CSV, XLS, etc.). Decimal marks should be points, not commas. A preview of the contents of the yachthydrodynamics.dat file is shown below. Here the number of instances (rows) is 308, and the number of variables (columns) is 7.
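As a quick sanity check outside Neural Designer, the file can also be loaded with a few lines of Python. This is only a sketch: the column names follow the variables listed later in this section, and a whitespace separator is assumed, as in the UCI version of the file.

import pandas as pd

# Column names follow the variables described below; a whitespace separator is assumed.
columns = ["center_of_buoyancy", "prismatic_coefficient", "length_displacement",
           "beam_draught_ratio", "length_beam_ratio", "froude_number", "resistance"]

data = pd.read_csv("yachthydrodynamics.dat", sep=r"\s+", header=None, names=columns)

print(data.shape)   # expected: (308, 7)
print(data.head())  # preview of the first instances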

Yacht hydrodynamics dataset picture
Yacht hydrodynamics dataset.

The next figure shows the data set page in Neural Designer. The following concepts are important here:

  • Data file.
  • Variables information.
  • Instances information.
  • Missing values information.

Data set page
Data set page.

To set a data file, click on the "Import data file" or "Import database" button (depending on the type of data), and select the file through the dialog that appears. Once the data has been uploaded, the first, second and last instances are shown in the data preview table, and the numbers of variables and instances are displayed at the bottom right. In this example the yachthydrodynamics.dat file has already been set.

Then the information about the variables is edited. That includes names, units, descriptions and uses:

  1. center_of_buoyancy: Longitudinal position of the center of buoyancy, dimensionless. It is an input.
  2. prismatic_coefficient: Prismatic coefficient, dimensionless. It is an input.
  3. length_displacement: Length-displacement ratio, dimensionless. It is an input.
  4. beam_draught_ratio: Beam-draught ratio, dimensionless. It is an input.
  5. length_beam_ratio: Length-beam ratio, dimensionless. It is an input.
  6. froude_number: Froude number, dimensionless. It is an input.
  7. resistance: Residuary resistance per unit weight of displacement, dimensionless. It is the only target in this example.

The variables information is edited in the corresponding table. It is recommended to write the names without spaces. Please select each variable's use (Input, Target or Unused) carefully: any application must have one or more inputs and one or more targets. These numbers are displayed at the bottom right.

Finally, the data is divided into training, validation and testing subsets. Here the instances have been split at random with ratios 0.6, 0.2 and 0.2. More specifically, 186 instances will be used for training, 61 for validation and 61 for testing.
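The split itself is handled by Neural Designer, but an equivalent random partition could be sketched in Python as follows (subset sizes may differ by an instance or two because of rounding).

import numpy as np

# Random split of the 308 instances with ratios 0.6 / 0.2 / 0.2.
n_instances = 308
indices = np.random.default_rng().permutation(n_instances)

n_training = int(0.6*n_instances)      # about 185 instances
n_validation = int(0.2*n_instances)    # about 61 instances

training_indices = indices[:n_training]
validation_indices = indices[n_training:n_training + n_validation]
testing_indices = indices[n_training + n_validation:]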

Once the data set page has been edited, we are ready to run a few related tasks. With them, we check the provided information again and make sure that the data is of good quality. Some data set tasks also perform minor adjustments to the variables information or the instances information sections.

The "Report data set" task simply transfers to Neural Viewer the information contained in the Data set page of Neural Editor. The following picture shows how the results from this task are displayed in Neural Viewer.

Report data set task results
Report data set task results.

The "Calculate data statistics" task draws a table with the minimums, maximums, means and standard deviations of all variables in the data set. The following figure shows the data statistics for this example.

Data statistics
Data statistics.
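Outside Neural Designer, the same kind of statistics table can be reproduced with pandas; the sketch below simply reloads the file as before and aggregates the four statistics.

import pandas as pd

# Reload the data as in the earlier sketch, then compute the statistics table.
columns = ["center_of_buoyancy", "prismatic_coefficient", "length_displacement",
           "beam_draught_ratio", "length_beam_ratio", "froude_number", "resistance"]
data = pd.read_csv("yachthydrodynamics.dat", sep=r"\s+", header=None, names=columns)

# Minimum, maximum, mean and standard deviation of every variable.
print(data.agg(["min", "max", "mean", "std"]).T)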

The "Calculate data histograms" task draws a histogram for each variable to see how they are distributed. The following figure shows the histogram for the center of buoyancy data, which is an input. As we can see, this variable is not well distributed.

Histogram for the center of buoyancy variable
Histogram for the center of buoyancy variable.

The following figure shows the histogram for the resistance data, which is the only target. As we can see, most of the data is concentrated at low resistance values.

Histogram for the resistance variable.
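A rough equivalent of these two histograms with matplotlib, again only a sketch using the data frame loaded as before, could look like this.

import matplotlib.pyplot as plt
import pandas as pd

columns = ["center_of_buoyancy", "prismatic_coefficient", "length_displacement",
           "beam_draught_ratio", "length_beam_ratio", "froude_number", "resistance"]
data = pd.read_csv("yachthydrodynamics.dat", sep=r"\s+", header=None, names=columns)

# Histograms of one input and of the target variable.
fig, axes = plt.subplots(1, 2, figsize=(10, 4))
data["center_of_buoyancy"].hist(ax=axes[0], bins=10)
axes[0].set_title("Center of buoyancy")
data["resistance"].hist(ax=axes[1], bins=10)
axes[1].set_title("Resistance")
plt.show()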

2. Neural network

The second step is to configure the neural network. For approximation project types, the neural network page is composed of:

  • Inputs.
  • Scaling layer.
  • Learning layers.
  • Unscaling layer.
  • Outputs.

The following figure shows the neural network page in Neural Designer.

Neural network page screenshot
Neural network page.

The scaling layer section contains the statistics on the inputs calculated from the data file and the method for scaling the input variables. Here the minimum and maximum method has been set. Nevertheless, the mean and standard deviation method would produce very similar results.

A multilayer perceptron with a sigmoid hidden layer and a linear output layer is used in this problem (this is the default in approximation). It must have 6 inputs and 1 output neuron. While the numbers of inputs and output neurons are constrained by the problem, the number of neurons in the hidden layer is a design variable. Here we will use 6 neurons in the hidden layer, which yields 49 parameters. Finally, all the biases and synaptic weights in the neural network are initialized at random.
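The parameter count follows from simple arithmetic; the short snippet below just reproduces it for the 6-6-1 architecture described above.

# Parameter count of the 6-6-1 multilayer perceptron described above.
inputs, hidden_neurons, outputs = 6, 6, 1

hidden_layer_parameters = hidden_neurons*(inputs + 1)    # 6 weights + 1 bias per hidden neuron = 42
output_layer_parameters = outputs*(hidden_neurons + 1)   # 6 weights + 1 bias for the output neuron = 7

print(hidden_layer_parameters + output_layer_parameters)  # 49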

The unscaling layer contains the statistics on the outputs calculated from the data file and the method for unscaling the output variables. Here the minimum and maximum method will also be used.
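For reference, the minimum and maximum method maps each variable to the range [-1, 1] and back. The sketch below shows the two transformations, using the Froude number limits (0.125 to 0.450) quoted elsewhere in this tutorial; it is illustrative only, not Neural Designer's code.

import numpy as np

# Minimum-maximum scaling to [-1, 1] and the corresponding unscaling.
def scale_minimum_maximum(x, minimum, maximum):
    return 2.0*(x - minimum)/(maximum - minimum) - 1.0

def unscale_minimum_maximum(x_scaled, minimum, maximum):
    return 0.5*(x_scaled + 1.0)*(maximum - minimum) + minimum

# Example with the Froude number, whose values range from 0.125 to 0.450.
froude = np.array([0.125, 0.300, 0.450])
scaled = scale_minimum_maximum(froude, 0.125, 0.450)
print(scaled)                                           # [-1.0, ~0.08, 1.0]
print(unscale_minimum_maximum(scaled, 0.125, 0.450))    # recovers the original values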

The outputs from the neural network are those variables set as target in the data set page:

  1. Residuary resistance per unit weight of displacement, dimensionless.

The neural network for this example can be plotted as a graph as follows.

Neural network graph
Neural network graph.

This neural network defines a function of the form


resistance = function(center_of_buoyancy, prismatic_coefficient, length_displacement, beam_draught_ratio, length_beam_ratio, froude_number)

The function above is parameterized by all the biases and synaptic weights in the neural network, i.e., 49 parameters.

3. Loss index

The third step is to select an appropriate loss index, which defines what the neural network will learn. A general loss index for approximation is composed of two terms:

  1. An error term.
  2. A regularization term.

The following figure shows the loss index page in Neural Designer.

Loss index page screenshot
Loss index page.

The error term chosen for this problem is the normalized squared error between the outputs from the neural network and the target values in the data set. No regularization term will be used in this application.
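One common definition of this error, which may differ in small details from Neural Designer's exact implementation, divides the sum of squared errors by the sum of squared deviations of the targets from their mean:

import numpy as np

# Normalized squared error: sum of squared errors divided by the sum of
# squared deviations of the targets from their mean (one common definition).
def normalized_squared_error(outputs, targets):
    outputs = np.asarray(outputs, dtype=float)
    targets = np.asarray(targets, dtype=float)
    normalization = np.sum((targets - targets.mean())**2)
    return np.sum((outputs - targets)**2)/normalization

# A value of 1 roughly corresponds to predicting the mean of the targets;
# values close to 0 indicate a good fit.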

4. Training strategy

The fourth step is to edit the training strategy settings. The next screenshot shows the training strategy page for this example.

Training strategy page screenshot
Training strategy page.

The selected algorithm for solving the reduced function optimization problem is a quasi-Newton method, with the BFGS formula for the training direction and Brent's method for the training rate.
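For illustration only, the numpy sketch below shows the quasi-Newton idea: the BFGS inverse-Hessian update is standard, but the simple backtracking loop merely stands in for Brent's method, and none of this reflects Neural Designer's internal implementation.

import numpy as np

# Illustrative quasi-Newton (BFGS) minimizer for a generic loss and gradient;
# the backtracking loop is a crude stand-in for Brent's method training rate.
def quasi_newton(loss, gradient, parameters, iterations=1000, tolerance=1e-6):
    n = parameters.size
    inverse_hessian = np.eye(n)                  # initial inverse-Hessian approximation
    g = gradient(parameters)
    for _ in range(iterations):
        direction = -inverse_hessian @ g         # BFGS training direction
        rate = 1.0
        while loss(parameters + rate*direction) > loss(parameters) and rate > 1e-8:
            rate *= 0.5
        new_parameters = parameters + rate*direction
        new_g = gradient(new_parameters)
        s, y = new_parameters - parameters, new_g - g
        parameters, g = new_parameters, new_g
        if np.linalg.norm(g) < tolerance or s @ y <= 0:
            break
        rho = 1.0/(y @ s)                        # BFGS inverse-Hessian update
        I = np.eye(n)
        inverse_hessian = (I - rho*np.outer(s, y)) @ inverse_hessian @ (I - rho*np.outer(y, s)) \
                          + rho*np.outer(s, s)
    return parameters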

We need to run the "Perform training" task to train the neural network. The loss history is shown in the following figure.

Loss history
Loss history.

The next table shows the training results.

Training results
Training results.

5. Testing analysis

The last step is to test the generalization performance of the trained neural network. Here we compare the values provided by this technique to the actually observed values. Note that, since testing has no settings, there is no testing analysis page in Neural Designer.

It is convenient to explore the errors made by the neural network on single testing instances. The absolute error is the difference between some target and its corresponding output. The relative error is the absolute error divided by the range of the variable. The percentage error is the relative error multiplied by 100.
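These three definitions translate directly into a few lines of Python; in this sketch the range is assumed to be that of the target values in the testing subset.

import numpy as np

# Absolute, relative and percentage errors on single testing instances,
# following the definitions given above.
def testing_errors(targets, outputs):
    targets = np.asarray(targets, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    absolute = targets - outputs                            # difference between target and output
    relative = absolute/(targets.max() - targets.min())     # divided by the range of the variable
    percentage = 100.0*relative                             # relative error as a percentage
    return absolute, relative, percentage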

The error data statistics measure the minimums, maximums, means and standard deviations of the errors between the neural network and the testing instances in the data set. They provide a valuable tool for testing the quality of a model. The next figure shows the error data statistics for the resistance.

Error data statistics
Error data statistics.

The error data histograms show how the errors from the neural network on the testing instances are distributed. In general, a normal distribution is expected here for each output variable. The following chart depicts the error distribution for the resistance.

Error data histogram
Histogram of error data for the resistance.

A possible testing technique for the neural network model is to perform a linear regression analysis between the predicted residuary resistance values and the corresponding experimental ones, using an independent testing set. This analysis leads to a line y = a + bx with a correlation coefficient R². In this way, a perfect prediction would give a = 0, b = 1 and R² = 1.
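A sketch of this analysis with numpy is given below; the experimental targets play the role of x, the predictions the role of y, and R² is computed as the squared correlation coefficient.

import numpy as np

# Linear regression y = a + b*x between experimental targets (x) and
# predicted outputs (y) on the testing instances, plus R².
def linear_regression_analysis(targets, outputs):
    targets = np.asarray(targets, dtype=float)
    outputs = np.asarray(outputs, dtype=float)
    b, a = np.polyfit(targets, outputs, 1)        # slope b and intercept a
    r = np.corrcoef(targets, outputs)[0, 1]
    return a, b, r**2

# A perfect model would return a ≈ 0, b ≈ 1 and R² ≈ 1.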

The table below shows the three parameters given by this testing analysis.

Linear regression parameters
Linear regression parameters.

The following figure illustrates a graphical output provided by this testing analysis. The predicted residuary resistances are plotted versus the experimental ones as open circles. The solid line indicates the best linear fit. The dashed line with R² = 1 would indicate a perfect fit.

Linear regression analysis
Linear regression analysis.

From the information above we can see that the neural network predicts the entire range of residuary resistance data very well. Indeed, the a, b and R² values are very close to 0, 1 and 1, respectively. The neural network is now ready to estimate the residuary resistance of sailing yachts with satisfactory quality over the same range of data.

6. Model deployment

The neural network is now ready to predict resistances for geometries and velocities that it has never seen. For that, we can use some neural network tasks.

The "Calculate output" task calculates the output value for a given input value. This task opens a dialog to set the input values, see the next figure.

Inputs dialog
Inputs dialog.

It then writes to Neural Viewer a table with those inputs and their corresponding outputs, as shown in the following figure.

Inputs-outputs table
Inputs-outputs table.

The "Calculate directional output" task plots the resistance as a function of a given input, with all the others fixed to given values. The next figures show the directional inputs dialog and the corresponding directional output. Here we want to see how the resistance varies with the center of buoyancy, with the following variables fixed (a sketch of this sweep follows the list):

  1. Prismatic coefficient: 0.6
  2. Length displacement: 4.34
  3. Beam draught ratio: 4.23
  4. Length beam ratio: 2.73
  5. Froude number: 0.45
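A hypothetical sketch of this sweep is shown below; the center-of-buoyancy range is taken from the scaling limits in the explicit expression at the end of this tutorial, and predict is only a placeholder name for a routine that evaluates the trained network.

import numpy as np

# Sweep of the center of buoyancy with the remaining inputs fixed to the
# values listed above; "predict" is a placeholder for a routine that
# evaluates the trained neural network (e.g. the explicit expression below).
center_of_buoyancy = np.linspace(-5.0, 0.0, 101)

fixed_inputs = {"prismatic_coefficient": 0.6,
                "length_displacement": 4.34,
                "beam_draught_ratio": 4.23,
                "length_beam_ratio": 2.73,
                "froude_number": 0.45}

# resistance = [predict(x, **fixed_inputs) for x in center_of_buoyancy]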

Directional input data
Directional input data.

Directional output
Directional output.

The explicit expression for the residuary resistance model obtained by the neural network is listed below.


scaled_center_of_buoyancy = 2*(center_of_buoyancy+5)/(0+5)-1;
scaled_prismatic_coefficient = 2*(prismatic_coefficient-0.53)/(0.6-0.53)-1;
scaled_length_displacement = 2*(length_displacement-4.34)/(5.14-4.34)-1;
scaled_beam_draught_ratio = 2*(beam_draught_ratio-2.81)/(5.35-2.81)-1;
scaled_length_beam_ratio = 2*(length_beam_ratio-2.73)/(3.64-2.73)-1;
scaled_froude_number = 2*(froude_number-0.125)/(0.45-0.125)-1;

y_1_1 = tanh(-1.28025 + 0.394105*scaled_center_of_buoyancy + 0.200675*scaled_prismatic_coefficient + 0.037958*scaled_length_displacement - 0.397632*scaled_beam_draught_ratio + 0.00898606*scaled_length_beam_ratio + 0.613385*scaled_froude_number);
y_1_2 = tanh(-2.38037 + 0.000498914*scaled_center_of_buoyancy - 0.0959059*scaled_prismatic_coefficient + 0.102796*scaled_length_displacement - 0.202462*scaled_beam_draught_ratio - 0.160906*scaled_length_beam_ratio + 2.30188*scaled_froude_number);
y_1_3 = tanh(-0.00400935 - 0.320055*scaled_center_of_buoyancy + 0.25657*scaled_prismatic_coefficient + 0.136735*scaled_length_displacement + 0.187143*scaled_beam_draught_ratio - 0.143879*scaled_length_beam_ratio + 0.110821*scaled_froude_number);
y_1_4 = tanh(0.486307 + 0.348819*scaled_center_of_buoyancy - 0.279504*scaled_prismatic_coefficient - 0.192404*scaled_length_displacement - 0.372327*scaled_beam_draught_ratio + 0.103346*scaled_length_beam_ratio + 0.241193*scaled_froude_number);
y_1_5 = tanh(0.348358 - 0.319544*scaled_center_of_buoyancy + 0.248311*scaled_prismatic_coefficient + 0.301903*scaled_length_displacement + 0.335708*scaled_beam_draught_ratio - 0.226905*scaled_length_beam_ratio - 0.801859*scaled_froude_number);
y_1_6 = tanh(-0.0947738 - 0.142264*scaled_center_of_buoyancy + 0.197293*scaled_prismatic_coefficient + 0.274411*scaled_length_displacement + 0.247838*scaled_beam_draught_ratio - 0.13007*scaled_length_beam_ratio - 0.900108*scaled_froude_number);

scaled_resistance = 1.08323 + 0.468674*y_1_1 + 1.65863*y_1_2 - 0.422513*y_1_3 - 0.408006*y_1_4 + 0.517054*y_1_5 - 0.523758*y_1_6;

resistance = 0.5*(scaled_resistance+1.0)*(62.42-0.01)+0.01;