A Chinese automobile company aspires to enter the US market by setting up a manufacturing unit there and producing cars locally to compete with its US and European counterparts.
They want to understand the factors affecting the pricing of cars in the American market, since those may be very different from the Chinese market. The company wants to know which variables are significant in predicting the price of a car and how well those variables describe the price based on various market surveys.
Machine learning can be applied to understand the behavior of the American car market.
For this study, we have gathered a large data set covering different types of cars across the American market.
We are required to model the price of cars with the available independent variables. The company's management will use the model to understand exactly how prices vary with the independent variables, so they can adjust the design of the cars, the business strategy, etc., to meet certain price levels. Further, the model will be a good way for management to understand the pricing dynamics of a new market.
This example is solved with Neural Designer. To follow it step by step, you can use the free trial.
This is an approximation project since the variable to be predicted is continuous (car price).
The fundamental goal here is to model the pricing of cars as a function of several car features and different types of engines.
The first step is to prepare the data set, which is the source of information for the approximation problem. It is composed of:
The file car_price_assignment.csv contains the data for this example. Here the number of variables (columns) is 26, and the number of instances (rows) is 205.
Thus, this problem has the following 25 variables:
All the variables in the study are inputs, except for 'fuel_system', 'car_brand', and 'car_name', which are set as unused, and 'price', which is the output we want to predict in this machine learning study. Moreover, Neural Designer leaves the first variable, 'car_id', out of the analysis because it carries no useful information for this study.
The instances are divided randomly into training, selection, and testing subsets, containing 60%, 20%, and 20% of the instances, respectively. More specifically, 123 samples are used here for training, 41 for selection, and 41 for testing.
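The 60/20/20 random split described above can be sketched as follows. This is a minimal illustration with NumPy, not Neural Designer's internal routine; the instance count of 205 comes from the data set.

```python
# Minimal sketch of a 60/20/20 random split of 205 instances into
# training, selection, and testing subsets.
import numpy as np

def split_indices(n_instances, seed=0):
    """Shuffle row indices and cut them into 60/20/20 subsets."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_instances)
    n_train = int(0.6 * n_instances)
    n_select = int(0.2 * n_instances)
    train = indices[:n_train]
    selection = indices[n_train:n_train + n_select]
    testing = indices[n_train + n_select:]
    return train, selection, testing

train, selection, testing = split_indices(205)
print(len(train), len(selection), len(testing))  # 123 41 41
```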
Once all the data set information has been set, we will perform some analytics to check the quality of the data.
For instance, we can calculate the data distribution. The next figure depicts the histogram for the target variable.
As we can see in the diagram, the car prices are concentrated in the low-to-medium range, with a long tail toward expensive cars: we expect most American customers to buy cars at low-to-medium prices, while only a small fraction of the population can afford expensive cars, since the median personal income in the US is not extremely high.
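A histogram like the one described can be computed directly with NumPy. The values below are synthetic toy prices, not the study data:

```python
# Sketch of computing the target-variable histogram with NumPy,
# using synthetic toy price values rather than the actual study data.
import numpy as np

rng = np.random.default_rng(0)
prices = rng.lognormal(mean=9.5, sigma=0.4, size=205)  # 205 toy prices

counts, edges = np.histogram(prices, bins=10)
print(counts.sum())  # 205: every instance falls in exactly one bin
```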
The next figure depicts inputs-targets correlations. This might help us to see the influence of the different inputs on the final price.
We notice that certain variables have a very low correlation with our final target. To obtain more conclusive results, we can exclude some of these variables from the study by clicking 'Unuse uncorrelated variables' in the Task Manager window and entering a minimum correlation value, for example 0.01 (the lowest value that can be entered).
The above chart shows that a few variables have a strong dependency on the car price. As we can see, curb weight, engine size, and horsepower correlate positively with the price: the bigger the engine, the more expensive the car, for example. On the other hand, some variables (city and highway miles per gallon) have a strong negative dependency on the price: the more miles per gallon a car achieves, the lower its price tends to be.
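Input-target correlations of this kind can be computed with pandas. The figures below are illustrative toy values, and the column names are assumptions based on the variables discussed above:

```python
# Sketch: Pearson input-target correlations with pandas.
# Toy data; column names are assumed based on the variables discussed.
import pandas as pd

df = pd.DataFrame({
    "curbweight": [2000, 2500, 3000, 3500],
    "enginesize": [90, 120, 180, 250],
    "citympg":    [35, 28, 22, 16],
    "price":      [8000, 12000, 20000, 35000],
})

# Correlate every input column with the target and drop the trivial
# price-price entry.
correlations = df.corr()["price"].drop("price")
print(correlations.sort_values(ascending=False))
```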
We can also plot a scatter chart of the price versus the horsepower.
In general, the more horsepower, the higher the price. However, the price depends on all the inputs at the same time.
The neural network will output the price as a function of all the different car features shown previously.
For this approximation example, the neural network is composed of:
The scaling layer transforms the original inputs to normalized values. Here the mean and standard deviation scaling method is set so that the input values have a mean of 0 and a standard deviation of 1.
Here two perceptron layers are added to the neural network. This number of layers is enough for most applications. The first layer has 15 inputs and 3 neurons. The second layer has 3 inputs and 1 neuron.
The unscaling layer transforms the normalized values from the neural network into the original outputs. Here the mean and standard deviation unscaling method will also be used.
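The scaling and unscaling transformations described above can be sketched as a pair of inverse functions. This is a minimal illustration of mean/standard-deviation (z-score) scaling, not Neural Designer's implementation:

```python
# Minimal sketch of mean/standard-deviation scaling and unscaling,
# as performed by the scaling and unscaling layers.
import numpy as np

def scale(x, mean, std):
    """Transform original values to zero mean and unit standard deviation."""
    return (x - mean) / std

def unscale(z, mean, std):
    """Invert the transformation to recover the original units."""
    return z * std + mean

prices = np.array([8000.0, 12000.0, 20000.0, 35000.0])
mean, std = prices.mean(), prices.std()
z = scale(prices, mean, std)
recovered = unscale(z, mean, std)
print(np.allclose(recovered, prices))  # True
```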
The next figure shows the resulting network architecture.
The next step is to select an appropriate training strategy, which defines what the neural network will learn. A general training strategy is composed of two concepts:
The loss index chosen is the normalized squared error with L1 regularization. Although the default loss index for approximation problems includes L2 regularization, in this case we obtain a lower selection error with L1 regularization.
The optimization algorithm chosen is the quasi-Newton method. This optimization algorithm is the default for medium-sized applications like this one.
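To give a flavor of quasi-Newton optimization, the sketch below fits a tiny model with BFGS (a quasi-Newton method) via `scipy.optimize.minimize` on synthetic data. Neural Designer's own quasi-Newton implementation is not shown in this article, so this is only an illustration of the optimizer family:

```python
# Illustrative sketch: fitting a tiny linear model with a quasi-Newton
# optimizer (BFGS) on synthetic data. Not Neural Designer's implementation.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 50)
y_true = 2.0 * x + 1.0 + rng.normal(0, 0.05, 50)  # noisy line y = 2x + 1

def loss(params):
    """Mean squared error of a linear model with weight w and bias b."""
    w, b = params
    return np.mean((w * x + b - y_true) ** 2)

result = minimize(loss, x0=[0.0, 0.0], method="BFGS")
w, b = result.x
print(round(w, 1), round(b, 1))  # close to the true values 2.0 and 1.0
```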
Once the strategy has been set, we can train the neural network. The following chart shows how the training (blue) and selection (orange) errors decrease with the training epoch during the training process.
The most important training result is the final selection error. Indeed, this is a measure of the generalization capabilities of the neural network. Here the final selection error is: Selection error = 0.109 NSE.
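The normalized squared error used here can be sketched as the sum of squared errors divided by a normalization coefficient; taking that coefficient as the sum of squared deviations of the targets from their mean is an assumption for illustration:

```python
# Sketch of a normalized squared error (NSE): sum of squared errors
# divided by a normalization coefficient. Using the targets' sum of
# squared deviations from their mean as that coefficient is an assumption.
import numpy as np

def normalized_squared_error(y_true, y_pred):
    sse = np.sum((y_pred - y_true) ** 2)
    normalization = np.sum((y_true - y_true.mean()) ** 2)
    return sse / normalization

y_true = np.array([10.0, 12.0, 15.0, 20.0])
y_pred = np.array([10.5, 11.5, 15.5, 19.0])
nse = normalized_squared_error(y_true, y_pred)
print(nse)  # small positive value; 0 would mean a perfect fit
```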
The objective of model selection is to find the network architecture with the best generalization properties. That is, we want to improve on the final selection error obtained before (0.109 NSE).
The best selection error is achieved by using a model whose complexity is the most appropriate to produce an adequate fit of the data. Order selection algorithms are responsible for finding the optimal number of perceptrons in the neural network.
As we can see, the final training error always decreases with the number of neurons. However, the final selection error takes a minimum value at some point. Here, the optimal number of neurons is 8, which corresponds to a selection error of 0.0974.
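The order-selection idea can be sketched with a toy example: grow model complexity step by step, track the selection error on held-out data, and keep the complexity that minimizes it. Polynomial degree stands in for the number of neurons here:

```python
# Sketch of order (model) selection: increase complexity, monitor the
# selection error on held-out data, and keep the complexity with the
# minimum selection error. Polynomial degree plays the role of neurons.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 60)
y = np.sin(3 * x) + rng.normal(0, 0.1, 60)
x_train, y_train = x[:40], y[:40]     # training subset
x_sel, y_sel = x[40:], y[40:]         # selection subset

best_order, best_error = None, np.inf
for order in range(1, 10):
    coeffs = np.polyfit(x_train, y_train, order)
    sel_error = np.mean((np.polyval(coeffs, x_sel) - y_sel) ** 2)
    if sel_error < best_error:
        best_order, best_error = order, sel_error

print(best_order, best_error)
```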
The following figure shows the optimal network architecture for this application.
The objective of the testing analysis is to validate the generalization performance of the trained neural network. The testing compares the values provided by this technique to the observed values.
A standard testing technique in approximation problems is to perform a linear regression analysis between the predicted and the real values, using an independent testing set. The next figure illustrates a graphical output provided by this testing analysis.
From the above chart, we can see that the neural network predicts well across the entire range of car prices. The coefficient of determination is R2 = 0.937, which indicates that the model has a reliable prediction capability.
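The coefficient of determination used in this testing analysis can be computed directly from predicted and observed values. The numbers below are synthetic, not the actual test-set predictions:

```python
# Sketch of the testing analysis metric: coefficient of determination
# (R^2) between observed and predicted values, on synthetic numbers.
import numpy as np

def r_squared(y_true, y_pred):
    ss_res = np.sum((y_true - y_pred) ** 2)   # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

y_true = np.array([9000.0, 14000.0, 21000.0, 30000.0])
y_pred = np.array([9500.0, 13500.0, 22000.0, 29000.0])
r2 = r_squared(y_true, y_pred)
print(round(r2, 3))  # close to 1 for a good fit
```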
The model is now ready to estimate the price of a certain car with satisfactory quality over the same data range.
We can plot a directional output of the neural network to see how the price varies with a given input, for all other inputs fixed. The next plot shows the car price as a function of the engine size, through the following point:
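A directional output can be sketched as follows: sweep one input over its range while holding the others fixed at a reference point, and evaluate the model at each step. The `predict` function below is a hypothetical stand-in for the trained network, not the actual model:

```python
# Sketch of a directional output: vary engine size while holding all
# other inputs fixed. `predict` is a hypothetical surrogate model.
import numpy as np

def predict(inputs):
    # Hypothetical stand-in for the trained network: price grows with
    # engine size (index 0) around a fixed baseline for the other inputs.
    return 5000.0 + 120.0 * inputs[0]

reference = np.array([130.0, 2500.0, 100.0])  # engine size, curb weight, hp
engine_sizes = np.linspace(60, 320, 5)

prices = []
for size in engine_sizes:
    point = reference.copy()
    point[0] = size                           # sweep only the engine size
    prices.append(predict(point))

print(prices[0] < prices[-1])  # True: price increases with engine size
```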
The file car_price.py contains the Python code for the car price model.