Airfoil self-noise prediction

By Roberto Lopez, Artelnics.

The noise generated by an aircraft is both an efficiency and an environmental concern for the aerospace industry. An important component of the total airframe noise is the airfoil self-noise, which is due to the interaction between an airfoil blade and the turbulence produced in its own boundary layer and near wake. The next figure illustrates the noise generated by an aircraft.

Aircraft landing close to the buildings
Aircraft noise.

Contents:

  1. Data set
  2. Neural network
  3. Loss index
  4. Training strategy
  5. Testing analysis
  6. Model deployment

1. Data set

The first step is to prepare the data set, which is the source of information for the approximation problem.

The self-noise data set used in this example was processed by NASA, and so it is referred to here as the NASA data set. It was obtained from a series of aerodynamic and acoustic tests of two- and three-dimensional airfoil blade sections conducted in an anechoic wind tunnel. The NASA data set comprises NACA 0012 airfoils of different sizes at various wind tunnel speeds and angles of attack. The span of the airfoil and the observer position were the same in all of the experiments.

The file airfoilselfnoise.dat contains the data for this example. The input to Neural Designer is a data set, which can have different formats (CSV, XLS, etc.). Decimal marks should be points, not commas. A preview of the contents of the airfoilselfnoise.dat file is shown below. Here the number of instances (rows) is 1503, and the number of variables (columns) is 6.

Airfoil self-noise dataset picture
Airfoil self-noise dataset.
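
As a rough sketch of how this file could be inspected outside Neural Designer (assuming whitespace-separated values with no header row, and using the variable names listed further below), it can be loaded with pandas:

    import pandas as pd

    # Variable names as described in this example; the .dat file is
    # assumed here to be whitespace-separated with no header row.
    columns = ["frequency", "angle_of_attack", "chord_length",
               "free_stream_velocity", "suction_side_displacement_thickness",
               "scaled_sound_pressure_level"]

    data = pd.read_csv("airfoilselfnoise.dat", sep=r"\s+", header=None, names=columns)

    print(data.shape)  # expected: (1503, 6)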

Within this step, we edit the data set information, which is composed of:

  • Data file.
  • Variables information.
  • Instances information.
  • Missing values information.

The following figure shows the data set page in Neural Designer.

Data set page screenshot
Data set page.

This problem therefore has the following variables:

  1. frequency, in hertz, used as input.
  2. angle_of_attack, in degrees, used as input.
  3. chord_length, in meters, used as input.
  4. free_stream_velocity, in meters per second, used as input.
  5. suction_side_displacement_thickness, in meters, used as input.
  6. scaled_sound_pressure_level, in decibels, used as target.

The NASA data set contains 1503 instances. The data is divided at random into training, validation and testing subsets: 753 samples (about 50%) are used here for training, 375 (about 25%) for validation and 375 (about 25%) for testing. Note that, as the number of instances is large (more than 1000), the instances information table is not shown in the data set page, since it would require too much space and memory.
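
As an illustration only (this is not Neural Designer's internal procedure), such a random split can be sketched in Python, reusing the data frame loaded in the previous snippet:

    import numpy as np

    # Shuffle the 1503 row indices and take the assumed subset sizes from the text.
    rng = np.random.default_rng(seed=0)
    indices = rng.permutation(len(data))

    training = data.iloc[indices[:753]]
    validation = data.iloc[indices[753:1128]]
    testing = data.iloc[indices[1128:]]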

Once all the data set information has been set, we are ready to run some related tasks to check the quality of the data.

The "Calculate data statistics" task calculates the minimums, maximums, mean and standard deviation of all variables.

Data statistics table
Data statistics.
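
A minimal sketch of the same calculation with pandas, reusing the data frame loaded earlier:

    # Minimum, maximum, mean and standard deviation of every variable,
    # analogous to the "Calculate data statistics" task.
    statistics = data.agg(["min", "max", "mean", "std"]).T
    print(statistics)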

On the other hand, the "Calculate data histograms" task shows how the data is distributed. The next figure depicts the histogram for the frequency. As we can see, most of the instances in the NASA data set correspond to low frequencies.

Frequency histogram
Frequency histogram.

The next figure is the histogram for the only target variable, the sound level. We can see that it is well distributed.

Sound level histogram
Sound level histogram.
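
A minimal sketch of equivalent histograms, assuming matplotlib is available:

    import matplotlib.pyplot as plt

    # Histograms analogous to the "Calculate data histograms" task.
    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    data["frequency"].hist(ax=axes[0], bins=10)
    axes[0].set_title("Frequency (Hz)")
    data["scaled_sound_pressure_level"].hist(ax=axes[1], bins=10)
    axes[1].set_title("Scaled sound pressure level (dB)")
    plt.show()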

2. Neural network

For this approximation example, the neural network page is composed of:

  • Inputs.
  • Scaling layer.
  • Perceptron layers.
  • Unscaling layer.
  • Outputs.

The next figure shows the neural network page in Neural Designer.

Neural network page screenshot
Neural network page.

The scaling layer section contains the statistics on the inputs calculated from the data file and the method for scaling the input variables. Here the minimum and maximum method has been set. Nevertheless, the mean and standard deviation method would produce very similar results.
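
For reference, the minimum and maximum method maps each input to the range [-1, 1] using its minimum and maximum, which is the form that appears in the expression listed in the model deployment section. A small sketch (the 2500 Hz value is an arbitrary example):

    def scale_minmax(x, minimum, maximum):
        """Scale a value to the range [-1, 1] from its minimum and maximum."""
        return 2.0 * (x - minimum) / (maximum - minimum) - 1.0

    # Example: frequency range taken from the expression in the deployment section.
    scaled_frequency = scale_minmax(2500.0, 200.0, 20000.0)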

Here a neural network with one hidden layer of hyperbolic tangent (sigmoid) neurons and a linear output layer is used. This neural network must have 5 inputs and 1 output neuron. The number of hidden neurons has been chosen to be 5. The resulting number of parameters in this neural network is 36.
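
The parameter count can be checked by hand: each of the 5 hidden neurons has 5 synaptic weights and 1 bias, and the output neuron has 5 synaptic weights and 1 bias.

    # 5 hidden neurons x (5 weights + 1 bias) + 1 output neuron x (5 weights + 1 bias)
    hidden_parameters = 5 * (5 + 1)   # 30
    output_parameters = 1 * (5 + 1)   # 6
    print(hidden_parameters + output_parameters)   # 36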

The unscaling layer contains the statistics on the outputs calculated from the data file and the method for unscaling the output variables. Here the minimum and maximum method will also be used.

The outputs from the neural network are those variables set as target in the data set page:

  1. Scaled sound pressure level, in decibels.

The neural network for this example can be plotted as a graph as follows.

Neural network graph
Neural network graph.

This neural network defines a function of the form

				sound_level = function(frequency, angle_of_attack, chord_length, free_stream_velocity, displacement_thickness).
				

That function is parameterized by the biases and synaptic weights of the neural network.

3. Loss index

The third step is to configure the loss index, which is composed of two terms:

  1. Error term.
  2. Regularization term.

The next figure shows the loss index page in Neural Designer.

Loss index page screenshot
Loss index page.

The error term chosen here is the normalized squared error. It divides the squared error between the outputs from the neural network and the targets in the data set by a normalization coefficient. If the normalized squared error has a value of 1, the neural network is predicting the data 'in the mean', while a value of 0 means a perfect prediction of the data. This error term does not have any parameters to set.

The neural parameters norm is used as the regularization term. It is applied to control the complexity of the neural network by reducing the value of the parameters. The weight of this regularization term in the loss index is 0.001.

The learning problem can then be stated as finding a neural network that minimizes the loss index, i.e., a neural network that fits the data set (error term) and does not oscillate (regularization term).
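
A minimal sketch of this loss index in Python, assuming the normalization coefficient is the sum of squared deviations of the targets from their mean and that the parameters norm is the L2 norm of the parameter vector:

    import numpy as np

    def loss_index(outputs, targets, parameters, regularization_weight=0.001):
        # Normalized squared error: 1 means predicting "in the mean", 0 means a perfect prediction.
        normalization = np.sum((targets - targets.mean()) ** 2)
        error = np.sum((outputs - targets) ** 2) / normalization
        # Regularization: weighted norm of the parameter vector (L2 norm assumed here).
        regularization = np.linalg.norm(parameters)
        return error + regularization_weight * regularization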

4. Training strategy

The fourth step is to edit the training settings. The next screenshot shows the training strategy page for this example.

Training strategy page screenshot
Training strategy page.

As we can see, the training strategy chosen is the quasi-Newton method.

The "Perform training" task trains the neural network. The following chart shows how the performance decreases with the iterations during the training process. The initial value is 9.47968, and the final value after 93 iterations is 0.330033.

Loss index history plot picture
Loss index history.
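
Neural Designer implements its own quasi-Newton method; as a loose analogue, a similar model can be trained with scikit-learn's L-BFGS solver (also a quasi-Newton method), reusing the training subset from the data set section:

    from sklearn.neural_network import MLPRegressor

    # Analogous model: 5 tanh hidden neurons, linear output, trained with
    # L-BFGS (a quasi-Newton method). Note that scikit-learn's alpha is a
    # squared-norm penalty, used here only as a stand-in for the
    # parameters norm regularization term.
    model = MLPRegressor(hidden_layer_sizes=(5,), activation="tanh",
                         solver="lbfgs", alpha=0.001, max_iter=1000)
    model.fit(training[columns[:5]], training["scaled_sound_pressure_level"])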

The next table shows the training results for this application. Here the final parameters norm is not very large, the final training and validation errors are small, and the final gradient norm is close to zero.

Training results table
Training results.

5. Testing analysis

The next step is to test the generalization performance of the trained neural network. Testing compares the values provided by this technique to the actually observed values.

A possible testing technique for the neural network model is to perform a linear regression analysis between the predicted sound pressure level values and the corresponding measured ones, using an independent testing subset. This analysis leads to a line y = a + bx with a correlation coefficient R2. For a perfect prediction, a = 0, b = 1 and R2 = 1. The "Perform linear regression analysis" task does exactly that. The following table shows the three parameters given by this testing analysis.

Linear regression parameters table
Linear regression parameters.
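
A minimal sketch of this analysis on the testing subset, using the scikit-learn stand-in model trained above:

    from scipy import stats

    # Regression line between measured (x) and predicted (y) sound levels.
    predicted = model.predict(testing[columns[:5]])
    observed = testing["scaled_sound_pressure_level"]

    regression = stats.linregress(observed, predicted)
    print("a =", regression.intercept)
    print("b =", regression.slope)
    print("R2 =", regression.rvalue ** 2)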

The next figure illustrates a graphical output provided by this testing analysis. The predicted sound levels are plotted versus the measured ones as open circles. The solid line indicates the best linear fit. The dashed line indicates a perfect fit (R2 = 1).

Linear regression plot
Linear regression plot.

From the table and the figure above we can see that the neural network predicts the entire range of sound level data well. Indeed, the a, b and R2 values are close to 0, 1 and 1, respectively.

6. Model deployment

The model is now ready to estimate the self-noise of airfoils with satisfactory quality over the same range of data.

The "Calculate output" task calculates the output value for a given set of input values.

The "Plot directional output" task plots the sound level as a function of a given input, for all other inputs fixed. The next plot shows the output Sound level as a function of the input Frequency, through the point (*\10\0.15\50\0.03). The x and y axes are defined by the range of the variables Frequency and Sound level, respectively. Note that some directional outputs fall outside the range of Sound level, and therefore they are not plotted.

Directional input dialog
Directional input dialog.

Directional output
Directional output.
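
A minimal sketch of such a directional output, sweeping the frequency while the other inputs stay at the reference point (again using the stand-in model trained above):

    import numpy as np
    import matplotlib.pyplot as plt

    # Sweep the frequency while keeping the other inputs fixed at the
    # reference point (angle of attack 10 deg, chord 0.15 m,
    # free-stream velocity 50 m/s, displacement thickness 0.03 m).
    frequencies = np.linspace(200.0, 20000.0, 101)
    inputs = np.column_stack([frequencies,
                              np.full_like(frequencies, 10.0),
                              np.full_like(frequencies, 0.15),
                              np.full_like(frequencies, 50.0),
                              np.full_like(frequencies, 0.03)])

    plt.plot(frequencies, model.predict(inputs))
    plt.xlabel("Frequency (Hz)")
    plt.ylabel("Scaled sound pressure level (dB)")
    plt.show()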

The explicit expression for the sound level model represented by the neural network is obtained with the "Write expression" task. It gives the following results:


    scaled_Frequency = 2*(Frequency-200)/(20000-200)-1;
    scaled_Angle of attack = 2*(Angle of attack-0)/(22.2-0)-1;
    scaled_Chord length = 2*(Chord length-0.0254)/(0.3048-0.0254)-1;
    scaled_Free-stream velocity = 2*(Free-stream velocity-31.7)/(71.3-31.7)-1;
    scaled_Displacement thickness = 2*(Displacement thickness-0.000400682)/(0.0584113-0.000400682)-1;

    y_1_1 = tanh(1.65706+0.767221*scaled_Frequency-1.17471*scaled_Angle of attack-0.763467*scaled_Chord length+0.0847516*scaled_Free-stream velocity+1.96769*scaled_Displacement thickness);
    y_1_2 = tanh(7.07135+5.87393*scaled_Frequency-0.24156*scaled_Angle of attack+0.198675*scaled_Chord length-0.0456785*scaled_Free-stream velocity+0.896567*scaled_Displacement thickness);
    y_1_3 = tanh(0.921597+0.43041*scaled_Frequency+1.03302*scaled_Angle of attack+0.78585*scaled_Chord length-0.144467*scaled_Free-stream velocity-0.232269*scaled_Displacement thickness);
    y_1_4 = tanh(-8.26269-8.27638*scaled_Frequency+0.612482*scaled_Angle of attack+0.0219524*scaled_Chord length+0.0159613*scaled_Free-stream velocity-1.06533*scaled_Displacement thickness);
    y_1_5 = tanh(5.47269+6.8802*scaled_Frequency-4.74494*scaled_Angle of attack-0.125744*scaled_Chord length-0.290428*scaled_Free-stream velocity+1.52628*scaled_Displacement thickness);

    scaled_Sound level = -2.92244-0.627827*y_1_1+4.7071*y_1_2-1.16459*y_1_3+1.0985*y_1_4-0.290131*y_1_5;

    Sound level = 0.5*(scaled_Sound level+1.0)*(140.987-103.38)+103.38;
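
As a deployment sketch, the expression above can be transcribed directly into a standalone Python function; the coefficients below are copied verbatim from that expression.

    from math import tanh

    def sound_level(frequency, angle_of_attack, chord_length,
                    free_stream_velocity, displacement_thickness):
        """Scaled sound pressure level (dB) from the expression written above."""
        # Scale each input to [-1, 1] with its minimum and maximum.
        f = 2*(frequency - 200)/(20000 - 200) - 1
        a = 2*(angle_of_attack - 0)/(22.2 - 0) - 1
        c = 2*(chord_length - 0.0254)/(0.3048 - 0.0254) - 1
        v = 2*(free_stream_velocity - 31.7)/(71.3 - 31.7) - 1
        d = 2*(displacement_thickness - 0.000400682)/(0.0584113 - 0.000400682) - 1

        # Hidden layer of 5 hyperbolic tangent neurons.
        y1 = tanh(1.65706 + 0.767221*f - 1.17471*a - 0.763467*c + 0.0847516*v + 1.96769*d)
        y2 = tanh(7.07135 + 5.87393*f - 0.24156*a + 0.198675*c - 0.0456785*v + 0.896567*d)
        y3 = tanh(0.921597 + 0.43041*f + 1.03302*a + 0.78585*c - 0.144467*v - 0.232269*d)
        y4 = tanh(-8.26269 - 8.27638*f + 0.612482*a + 0.0219524*c + 0.0159613*v - 1.06533*d)
        y5 = tanh(5.47269 + 6.8802*f - 4.74494*a - 0.125744*c - 0.290428*v + 1.52628*d)

        # Linear output neuron followed by unscaling to decibels.
        scaled = -2.92244 - 0.627827*y1 + 4.7071*y2 - 1.16459*y3 + 1.0985*y4 - 0.290131*y5
        return 0.5*(scaled + 1.0)*(140.987 - 103.38) + 103.38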