The objective of this study is to predict human wine taste preferences. Such a model can support the oenologist's wine tasting evaluations and help improve wine production. Furthermore, similar techniques can help in target marketing by modeling consumer tastes in niche markets.
Fixed Acidity is a fundamental property of wine, imparting sourness and resistance to microbial infection. Volatile acidity refers to the steam distillable acids present in wine, primarily acetic acid but also lactic, formic, butyric, and propionic acids.
The variables used in this proposal are not related to grape type, wine brand or selling price; they come only from physicochemical tests. The output of the model is a score between 0 and 10, which defines the wine quality.
The data file contains a total of 1599 rows and 12 columns. The first row contains the names of the variables, and the remaining rows represent the instances. The data is divided at random into training, selection and testing subsets, containing 60%, 20% and 20% of the instances, respectively. The following image contains a description of the variables obtained by using the task "Report data set".
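The random 60/20/20 split described above can be sketched in plain Python. This is an illustrative helper, not the tool's internal implementation; the function name `split_indices` and the fixed seed are assumptions for reproducibility.

```python
import random

def split_indices(n_instances, train_frac=0.6, selection_frac=0.2, seed=0):
    """Shuffle instance indices and split them into training,
    selection and testing subsets (60% / 20% / 20% by default)."""
    indices = list(range(n_instances))
    random.Random(seed).shuffle(indices)
    n_train = int(n_instances * train_frac)
    n_selection = int(n_instances * selection_frac)
    training = indices[:n_train]
    selection = indices[n_train:n_train + n_selection]
    testing = indices[n_train + n_selection:]
    return training, selection, testing

# 1599 instances, as in this data set
training, selection, testing = split_indices(1599)
print(len(training), len(selection), len(testing))  # 959 319 321
```

Every instance lands in exactly one subset, so the model is always tested on data it has never seen during training or selection.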
As we can see in the next figure, the data set is not well balanced: there is a large number of quality scores around 6 and far fewer near 0 or 10.
As we can see, our target variable does not follow a normal distribution. Therefore, we select the minimum-maximum method for unscaling.
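The minimum-maximum method maps values linearly between the variable's observed minimum and maximum, and its inverse recovers the original scale. A minimal sketch (the function names and the [0, 1] range are illustrative assumptions, not the tool's API):

```python
def minmax_scale(values, lo=0.0, hi=1.0):
    """Linearly map values to [lo, hi] using the min-max method.
    Returns the scaled values together with the observed min and max,
    which are needed later for unscaling."""
    vmin, vmax = min(values), max(values)
    scaled = [lo + (v - vmin) * (hi - lo) / (vmax - vmin) for v in values]
    return scaled, vmin, vmax

def minmax_unscale(scaled, vmin, vmax, lo=0.0, hi=1.0):
    """Inverse transform: recover the original scale of the values."""
    return [vmin + (s - lo) * (vmax - vmin) / (hi - lo) for s in scaled]

# Example with a few quality scores
scaled, vmin, vmax = minmax_scale([3, 4, 5, 6, 8])
print(scaled[0], scaled[-1])  # 0.0 1.0
```

Unlike standardization, this method makes no normality assumption, which is why it suits a target that is not normally distributed.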
The next step is to set the model selection mode. The loss index and training pages are not used in this example because model selection configures them automatically.
Some data sets have redundant inputs, which degrade the performance of the neural network. Inputs selection is used to find the optimal subset of inputs that gives the best performance of the model.
In this example, the chosen inputs selection algorithm is the genetic algorithm, with a population size of 100 individuals in each generation. The remaining parameters take their default values.
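The idea behind genetic inputs selection can be sketched as follows: each individual is a binary mask over the inputs, and the population evolves by selection, crossover and mutation toward masks with lower selection loss. This is a simplified illustration, not the tool's algorithm; the `toy_loss` fitness is a hypothetical stand-in for training a network and measuring its selection loss.

```python
import random

def genetic_input_selection(n_inputs, fitness, population_size=100,
                            generations=20, mutation_rate=0.05, seed=0):
    """Evolve binary input masks; lower fitness (selection loss) is better."""
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n_inputs)]
                  for _ in range(population_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness)
        survivors = ranked[:population_size // 2]   # keep the better half
        children = []
        while len(survivors) + len(children) < population_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_inputs)        # one-point crossover
            child = a[:cut] + b[cut:]
            # flip each bit with a small probability (mutation)
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        population = survivors + children
    return min(population, key=fitness)

# Toy fitness: pretend only the first 5 of 11 inputs are informative,
# and penalize each extra input (a stand-in for the selection loss).
def toy_loss(mask):
    return -sum(mask[:5]) + 0.1 * sum(mask[5:])

best = genetic_input_selection(11, toy_loss)
print(best)
```

Because the better half of each generation survives unchanged, the best mask found can only improve over the generations.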
The next chart shows the loss history for the different subsets during the genetic algorithm inputs selection process. The blue line represents the training loss, which starts at 0.580421 and ends at 0.548342 after 100 generations. The red line represents the selection loss, which starts at 0.633424 and ends at 0.632917 after 100 generations.
The next chart shows the history of the mean selection loss in each generation during the genetic algorithm inputs selection process. The initial value is 0.76545, and the final value after 100 generations is 0.634.
Finally, Neural Designer shows the final architecture of the neural network, see the next figure.
A standard method for testing the prediction capabilities of a model is to compare its outputs against an independent set of data. The linear regression analysis, performed by the task "Perform linear regression analysis", yields three parameters for each output: intercept, slope and correlation.
For a perfect prediction, the intercept would be 0 and the slope would be 1. If the correlation equals 1, the outputs from the neural network match the targets in the testing subset exactly. In this case, the parameters show good results.
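The three parameters can be computed from the testing targets and the network outputs with ordinary least squares. A minimal sketch in plain Python with illustrative data (the function name is an assumption, not part of the tool):

```python
import math

def linear_regression(targets, outputs):
    """Fit outputs = intercept + slope * targets and return
    (intercept, slope, correlation)."""
    n = len(targets)
    mean_t = sum(targets) / n
    mean_o = sum(outputs) / n
    cov = sum((t - mean_t) * (o - mean_o)
              for t, o in zip(targets, outputs)) / n
    var_t = sum((t - mean_t) ** 2 for t in targets) / n
    var_o = sum((o - mean_o) ** 2 for o in outputs) / n
    slope = cov / var_t
    intercept = mean_o - slope * mean_t
    correlation = cov / math.sqrt(var_t * var_o)
    return intercept, slope, correlation

# Perfect prediction: intercept 0, slope 1, correlation 1
print(linear_regression([3, 5, 6, 7], [3, 5, 6, 7]))  # (0.0, 1.0, 1.0)
```

With real testing data the outputs never match the targets exactly, so the parameters are judged by how close they come to these ideal values.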
Once the model is obtained, Neural Designer provides the user with its mathematical expression, shown in the next listing.
The formula below can be exported to whichever software tool the customer requires.