Electric vehicles and the technology behind them have advanced rapidly in recent years. Manufacturers want their electric motors to lead the market in reliability, driving range, and durability.
Therefore, these companies run their tests digitally, both to avoid damaging the actual product and to find the temperature range in which the motors work best.
Machine learning can be applied to model and understand the behavior of electric motors.
For this study, we have gathered a large data set of sensor readings collected from a permanent magnet synchronous motor (PMSM) deployed on a test bench.
Strong estimators for the rotor and stator temperatures help the automotive industry improve its motors, reducing power losses and, ultimately, heat build-up.
This example is solved with Neural Designer. To follow it step by step, you can use the free trial.
This is an approximation project since the variable to be predicted is continuous (engine temperature).
The fundamental goal is to understand how voltage and current affect the temperature of the different parts of an electric motor.
The first step is to prepare the data set, which is the source of information for the approximation problem. It is composed of:
The file pmsm_motor_temperature.csv contains the data for this example. Here the number of variables (columns) is 14, and the number of instances (rows) is 107.
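A first inspection of the data can be sketched with pandas. The two rows below are hypothetical values standing in for the real file, used only so the snippet is self-contained; in practice you would read pmsm_motor_temperature.csv directly. The column names follow the variables listed in this example.

```python
import io
import pandas as pd

# Hypothetical two-row excerpt standing in for pmsm_motor_temperature.csv:
csv_text = """ambient,coolant,u_d,u_q,i_d,i_q,u_module,i_module,motor_speed,torque,stator_yoke,stator_tooth,stator_winding,profile_id
24.5,18.8,-0.3,0.1,0.0,0.0,0.32,0.00,0.002,0.18,18.3,18.2,19.1,4
24.6,18.9,-0.2,0.2,-0.1,0.1,0.28,0.14,0.004,0.21,18.4,18.3,19.2,4
"""
# In practice: data = pd.read_csv("pmsm_motor_temperature.csv")
data = pd.read_csv(io.StringIO(csv_text))
print(data.shape)  # (2, 14): 14 variables, as described above
```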
In that way, this problem has the following variables:
The study uses the following variables: 'profile_id' is set to "unused" because it merely identifies the measurement session. In contrast, 'ambient', 'coolant', 'u_d', 'u_q', 'i_d', 'i_q', 'u_module', and 'i_module' are inputs. Finally, 'motor_speed', 'torque', 'stator_yoke', 'stator_tooth', and 'stator_winding' are the targets of this study. Indeed, our main goal is to describe the behavior of the electric motor to prevent overheating, which is why these target variables measure the temperature of the motor's internal parts.
They are divided at random into training, selection, and testing subsets, containing 60%, 20%, and 20% of the instances, respectively. More specifically, 65 samples are used here for training, 21 for selection, and 21 for testing.
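A random 60/20/20 split like the one above can be sketched as follows. The seed and index logic are illustrative, not the tool's internal procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 107                               # number of instances in this example
idx = rng.permutation(n)              # shuffle the row indices

n_sel = n_test = round(0.2 * n)       # 21 instances each for selection and testing
# Remaining 65 instances go to training:
train, sel, test = np.split(idx, [n - n_sel - n_test, n - n_test])
print(len(train), len(sel), len(test))  # 65 21 21
```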
Once all the data set information has been set, we will perform some analytics to check the quality of the data.
For instance, we can calculate the data distribution. The next figure depicts the histogram for one of the target variables.
In this chart, the stator tooth temperature, one component of the stator temperature, follows an approximately normal distribution. This is expected: the output depends on many input variables at once, and since the inputs varied continuously throughout the experiment, the targets tend toward this type of distribution.
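The histogram behind such a figure can be computed directly. The values below are randomly generated stand-ins for the stator tooth temperature column; only the binning mechanics are the point here.

```python
import numpy as np

# Hypothetical stator tooth temperatures (normally distributed stand-in data):
rng = np.random.default_rng(2)
stator_tooth = rng.normal(loc=57.0, scale=12.0, size=1000)

# Bin the target into 10 intervals, as a histogram chart would:
counts, edges = np.histogram(stator_tooth, bins=10)
print(counts.sum())  # 1000: every sample falls into exactly one bin
```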
The next figure depicts inputs-targets correlations. This might help us to see the influence of the different inputs on the motor temperature.
As this machine learning study has several target variables, we show the correlation chart for one of them.
The above chart shows that a few inputs have a strong influence on the variable 'torque'. In particular, the input 'i_q' is highly correlated with this target. This behavior could have been anticipated simply by looking at the data set: torque is induced by the current, in this case by its quadrature component.
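A linear correlation of this kind is easy to verify numerically. The snippet below uses synthetic data built on the stated physical relationship (torque roughly proportional to 'i_q'); the proportionality constant and noise level are arbitrary.

```python
import numpy as np
import pandas as pd

# Synthetic illustration: torque proportional to i_q, plus sensor noise.
rng = np.random.default_rng(1)
i_q = rng.normal(size=500)
torque = 1.8 * i_q + rng.normal(scale=0.1, size=500)

# Pearson correlation between the input and the target:
r = pd.Series(torque).corr(pd.Series(i_q))
print(round(r, 3))  # close to 1, i.e. highly correlated
```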
We can also plot a scatter chart of the stator winding temperature versus the ambient temperature.
In general, the higher the ambient temperature, the higher the stator winding temperature, as one would expect.
The neural network will output the different motor temperatures as a function of the current, voltage, coolant temperature and ambient temperature.
For this approximation example, the neural network is composed of:
The scaling layer transforms the original inputs to normalized values. Here the mean and standard deviation scaling method is set so that the input values have a mean of 0 and a standard deviation of 1.
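The mean and standard deviation scaling described above can be sketched as a simple function:

```python
import numpy as np

def scale(x, mean, std):
    """Mean/standard-deviation scaling, as performed by the scaling layer."""
    return (x - mean) / std

# Example: scale two hypothetical input columns (e.g. ambient, coolant):
X = np.array([[24.0, 18.0],
              [26.0, 22.0],
              [25.0, 20.0]])
Xs = scale(X, X.mean(axis=0), X.std(axis=0))
# After scaling, each column has mean 0 and standard deviation 1.
```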
Here, two perceptron layers are added to the neural network. This number of layers is enough for most applications. The first layer has 8 inputs and 3 neurons. The second layer has 3 inputs and 5 neurons.
The unscaling layer transforms the normalized values from the neural network into the original outputs. Here the mean and standard deviation unscaling method will also be used.
The next figure shows the resulting network architecture.
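A minimal forward pass through this architecture (8 inputs, 3 hidden neurons, 5 outputs) can be sketched in NumPy. The random weights and the tanh hidden activation are assumptions for illustration; a trained network would use learned parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes from this example: 8 inputs -> 3 hidden neurons -> 5 outputs.
W1, b1 = rng.normal(size=(8, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 5)), np.zeros(5)

def forward(x):
    h = np.tanh(x @ W1 + b1)   # first perceptron layer (tanh activation assumed)
    return h @ W2 + b2         # second layer, linear outputs for regression

# Ten hypothetical scaled input rows produce ten 5-dimensional predictions:
y = forward(rng.normal(size=(10, 8)))
print(y.shape)  # (10, 5)
```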
The next step is to select an appropriate training strategy, which defines what the neural network will learn. A general training strategy is composed of two concepts:
The loss index chosen is the normalized squared error with L2 regularization. This loss index is the default in approximation applications.
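The normalized squared error divides the sum of squared errors by the sum of squared deviations of the targets from their mean, so a model that always predicts the mean scores 1 and a perfect model scores 0. A sketch, with the L2 regularization weight as an assumed hyperparameter:

```python
import numpy as np

def normalized_squared_error(y_true, y_pred):
    """Sum of squared errors, normalized by the targets' squared deviation from their mean."""
    return np.sum((y_pred - y_true) ** 2) / np.sum((y_true - y_true.mean()) ** 2)

def loss(y_true, y_pred, weights, l2=1e-3):
    """NSE plus an L2 penalty on the network weights (l2 value is illustrative)."""
    return normalized_squared_error(y_true, y_pred) + l2 * np.sum(weights ** 2)

y = np.array([1.0, 2.0, 3.0])
print(normalized_squared_error(y, y))                    # 0.0: perfect fit
print(normalized_squared_error(y, np.full(3, y.mean()))) # 1.0: mean predictor
```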
The optimization algorithm chosen is the quasi-Newton method. This optimization algorithm is the default for medium-sized applications like this one.
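Quasi-Newton methods build an approximation of the Hessian from successive gradients, which is why they suit medium-sized problems like this one. As a stand-in for the trainer's internal loop, SciPy's BFGS implementation (a standard quasi-Newton method) minimizes a toy quadratic loss:

```python
import numpy as np
from scipy.optimize import minimize

# Toy loss with minimum at w = (3, -1), standing in for the network's loss index:
def f(w):
    return (w[0] - 3.0) ** 2 + 10.0 * (w[1] + 1.0) ** 2

# BFGS is a classic quasi-Newton method: no explicit Hessian required.
res = minimize(f, x0=np.zeros(2), method="BFGS")
print(res.x)  # approaches [3, -1]
```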
Once the strategy has been set, we can train the neural network. The following chart shows how the training (blue) and selection (orange) errors decrease with the training epoch during the training process.
The most important training result is the final selection error. Indeed, this is a measure of the generalization capabilities of the neural network. Here, the final selection error is 0.083 (NSE).
The objective of model selection is to find the network architecture with the best generalization properties. That is, we want to improve the final selection error obtained before (0.083 NSE).
The best selection error is achieved by using a model whose complexity is the most appropriate to produce an adequate fit of the data. Order selection algorithms are responsible for finding the optimal number of perceptrons in the neural network.
The final training error always decreases with the number of neurons. However, the final selection error takes a minimum value at some point. Here, the optimal number of neurons is 9, which corresponds to a selection error of 0.0432.
The following figure shows the optimal network architecture for this application.
The objective of the testing analysis is to validate the generalization performance of the trained neural network. The testing compares the values provided by this technique to the observed values.
A standard testing technique in approximation problems is to perform a linear regression analysis between the predicted and the real values, using an independent testing set. The next figure illustrates a graphical output provided by this testing analysis.
From the above chart, we can see that the neural network predicts the entire range of temperature data well. The correlation value is R2 = 0.990, which indicates that the model has reliable prediction capability.
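The coefficient of determination behind such a chart can be computed as follows. The observed and predicted values below are hypothetical, used only to exercise the formula:

```python
import numpy as np

def r_squared(y_true, y_pred):
    """Coefficient of determination between observed and predicted values."""
    ss_res = np.sum((y_true - y_pred) ** 2)   # residual sum of squares
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)  # total sum of squares
    return 1.0 - ss_res / ss_tot

# Hypothetical testing-set temperatures, for illustration only:
observed = np.array([40.0, 55.0, 60.0, 72.0, 85.0])
predicted = np.array([41.0, 54.0, 61.0, 71.5, 84.0])
print(round(r_squared(observed, predicted), 3))  # 0.996
```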
The model is now ready to estimate the temperature of the engine components with satisfactory quality over the same data range.
We can plot a directional output of the neural network to see how a target varies with a given input while all other inputs are held fixed. The next plot shows one of the predicted temperatures as a function of a selected input, through the following point:
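A directional output sweeps one input across a range while pinning the rest at a reference point. The sketch below uses a hypothetical linear model in place of the trained network, and the reference point and input range are illustrative:

```python
import numpy as np

def directional_output(model, x_ref, input_index, values):
    """Vary one input over `values` while holding the others at x_ref."""
    X = np.tile(x_ref, (len(values), 1))
    X[:, input_index] = values
    return model(X)

# Hypothetical linear stand-in for the trained network (two inputs):
model = lambda X: X @ np.array([0.5, 1.2])
x_ref = np.array([25.0, 20.0])            # reference point, e.g. (ambient, coolant)

# Sweep the first input from 20 to 30 while the second stays at 20:
curve = directional_output(model, x_ref, 0, np.linspace(20.0, 30.0, 5))
print(curve.shape)  # (5,): one prediction per swept value
```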
The file electric_motor.py contains the Python code for the electric motor temperature neural network.