# Predicting superconductors' critical temperature using Neural Designer

Superconductivity has been the focus of enormous research effort since its discovery more than a century ago.

Yet, some features of this unique phenomenon remain poorly understood; prime among these is the connection between superconductivity and chemical/structural properties of materials.

To bridge this gap, several machine learning schemes are developed herein to model the critical temperature (Tc) of superconductors.

Performance optimization can be applied to understand the curious behavior of these materials.

For this study we have gathered a large data set of the chemical properties of 21263 superconductors.

We will predict the critical temperature of superconductors with different independent variables.

### Contents:

1. Application type.
2. Data set.
3. Neural network.
4. Training strategy.
5. Model selection.
6. Testing analysis.
7. Model deployment.

This example is solved with Neural Designer. To follow it step by step, you can use the free trial.

## 1. Application type

This is an approximation project since the variable to be predicted is continuous (Critical Temperature).

In this study, we take an entirely data-driven approach to create a statistical model that predicts Tc from a material's chemical formula.

## 2. Data set

The first step is to prepare the data set, which is the source of information for the approximation problem. It is composed of:

• Data source.
• Variables.
• Instances.

The file superconductor.csv contains the data for this example. Here the number of variables (columns) is 82, and the number of instances (rows) is 21263.

The variables in this problem are derived from the following elemental properties:

• atomic_mass, total proton and neutron rest mass, in atomic mass units (AMU).
• fie, first ionization energy, the energy required to remove a valence electron, in kilojoules per mole (kJ/mol).
• atomic_radius, calculated atomic radius, in picometers (pm).
• density, density at standard temperature and pressure, in kilograms per cubic meter (kg/m³).
• electron_affinity, the energy released when an electron is added to a neutral atom, in kilojoules per mole (kJ/mol).
• fusion_heat, the energy required to change from solid to liquid without a temperature change, in kilojoules per mole (kJ/mol).
• thermal_conductivity, thermal conductivity coefficient k, in watts per meter-kelvin (W/(m·K)).
• valence, typical number of chemical bonds formed by the element (unitless).
• critical_temp, superconductor critical temperature, in kelvin (K).

These are the main variables of this study. They correspond to the chemical properties of each compound in the dataset chemical_compounds.csv.

Statistics are included for each variable: mean, weighted mean, geometric mean, weighted geometric mean, entropy, weighted entropy, standard deviation, weighted standard deviation, range, and weighted range.

The ratios of the elements in the material are used to define features:

$$p_{i}=\frac{j_{i}}{\sum_{i=1}^{n} j_{i}}$$

Where $j_{i}$ is the quantity of element $i$ in the compound.

The fractions of total thermal conductivities are used as well:

$$w_{i}=\frac{t_{i}}{\sum_{i=1}^{n} t_{i}}$$

Where $t_{i}$ are the thermal conductivity coefficients.

We will also need intermediate values for calculating features:

$$A_{i}=\frac{p_{i}w_{i}}{\sum_{i=1}^{n} p_{i}w_{i}}$$

The next table summarizes the procedure for feature extraction from a material's chemical formula.

| Feature | Formula |
| --- | --- |
| Mean | $\mu=\frac{1}{n}\sum_{i=1}^{n} t_{i}$ |
| Weighted mean | $\nu=\sum_{i=1}^{n} p_{i} t_{i}$ |
| Geometric mean | $\left(\prod_{i=1}^{n} t_{i}\right)^{\frac{1}{n}}$ |
| Weighted geometric mean | $\prod_{i=1}^{n} t_{i}^{p_{i}}$ |
| Entropy | $-\sum_{i=1}^{n} w_{i}\ln w_{i}$ |
| Weighted entropy | $-\sum_{i=1}^{n} A_{i}\ln A_{i}$ |
| Range | $t_{\max} - t_{\min}$ |
| Weighted range | $p(t_{\max})\,t_{\max} - p(t_{\min})\,t_{\min}$ |
| Standard deviation | $\left[\frac{1}{n}\sum_{i=1}^{n} (t_{i}-\mu)^{2}\right]^{\frac{1}{2}}$ |
| Weighted standard deviation | $\left[\sum_{i=1}^{n} p_{i}(t_{i}-\nu)^{2}\right]^{\frac{1}{2}}$ |

For instance, consider the compound Re7Zr1, where rhenium and zirconium have thermal conductivity coefficients $t_{1} = 48\ W/(m\,K)$ and $t_{2} = 23\ W/(m\,K)$, respectively.

The element fractions are $p_{1} = 7/8$ and $p_{2} = 1/8$, so a feature like the weighted geometric mean evaluates to $48^{7/8} \times 23^{1/8} \approx 43.8$.
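The feature definitions above can be sketched in plain Python. This is an illustrative sketch, not Neural Designer's implementation; the element ratios and conductivity coefficients are those of the Re7Zr1 example.

```python
import math

# Re7Zr1: element ratios and thermal conductivity coefficients, in W/(m K)
ratios = [7, 1]
t = [48.0, 23.0]

# Element fractions p_i
p = [r / sum(ratios) for r in ratios]

# Thermal conductivity fractions w_i
w = [ti / sum(t) for ti in t]

# Intermediate values A_i = p_i w_i / sum(p_i w_i)
pw = [pi * wi for pi, wi in zip(p, w)]
A = [x / sum(pw) for x in pw]

mean = sum(t) / len(t)
weighted_mean = sum(pi * ti for pi, ti in zip(p, t))
geometric_mean = math.prod(t) ** (1 / len(t))
weighted_geometric_mean = math.prod(ti ** pi for pi, ti in zip(p, t))
entropy = -sum(wi * math.log(wi) for wi in w)
weighted_entropy = -sum(ai * math.log(ai) for ai in A)

print(round(weighted_mean, 3))            # 44.875
print(round(weighted_geometric_mean, 2))  # 43.78
```

The same loop, applied per elemental property (atomic mass, density, and so on), yields the 81 input columns of the data set.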

The instances are divided at random into training, selection, and testing subsets, containing 60%, 20%, and 20% of the instances, respectively. More specifically, 12759 samples are used for training, 4252 for selection, and 4252 for testing.
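Neural Designer performs this split internally; a minimal sketch of the idea in plain Python follows (the exact per-subset counts depend on rounding, so they may differ by one or two from the figures above):

```python
import random

def split_instances(n, seed=0):
    """Randomly split n row indices into training/selection/testing (60/20/20)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)   # fixed seed only for reproducibility here
    n_train = round(0.6 * n)
    n_sel = round(0.2 * n)
    return idx[:n_train], idx[n_train:n_train + n_sel], idx[n_train + n_sel:]

train, sel, test = split_instances(21263)
print(len(train), len(sel), len(test))
```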

Once all the data set information has been set, we will perform some analytics to check the quality of the data.

For instance, we can calculate the data distribution. The next figure depicts the histogram for the target variable.

As the histogram shows, most chemical compounds have a low critical temperature.

This is to be expected, since superconductors with relatively high critical temperatures are hard to find. To observe superconducting properties, such as conduction with zero electrical resistance, the material usually has to be cooled to very low temperatures.

The next figure depicts the inputs-targets correlations, which help us see the influence of the different inputs on the critical temperature.

As there are so many input variables, the chart shows the top 20.

We can also plot a scatter chart with the critical temperature versus the weighted mean valence.

As we can see, the critical temperature decreases logarithmically as the weighted mean valence increases.

## 3. Neural network

The neural network will output the critical temperature as a function of different chemical properties.

For this approximation example, the neural network is composed of:

• Scaling layer.
• Perceptron layers.
• Unscaling layer.

The scaling layer transforms the original inputs into normalized values. Here the minimum-maximum scaling method is set so that the input values lie between -1 and +1.
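A minimal sketch of this scaling, assuming the standard minimum-maximum formula:

```python
def minmax_scale(x, x_min, x_max):
    """Scale a raw input value into [-1, 1] using the minimum-maximum method."""
    return 2.0 * (x - x_min) / (x_max - x_min) - 1.0

# A value at the midpoint of its range maps to 0
print(minmax_scale(50.0, 0.0, 100.0))  # 0.0
```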

Here two perceptron layers are added to the neural network. This number of layers is enough for most applications. The first layer has 81 inputs and 3 neurons. The second layer has 3 inputs and 1 neuron.

The unscaling layer transforms the normalized values from the neural network into the original outputs. Here the minimum-maximum method is also used for unscaling.
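The overall architecture can be sketched in NumPy. The weights below are random stand-ins, not the trained model; the layer sizes match the text (81 inputs, 3 hidden perceptrons, 1 output).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 81-3-1 architecture with placeholder weights
W1, b1 = rng.normal(size=(3, 81)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

def forward(x_scaled):
    """Scaled inputs -> hidden perceptron layer (tanh) -> linear output."""
    h = np.tanh(W1 @ x_scaled + b1)
    return W2 @ h + b2

x = rng.uniform(-1.0, 1.0, size=81)   # inputs already scaled to [-1, 1]
y_scaled = forward(x)
print(y_scaled.shape)  # (1,)
```

The unscaling step would then map `y_scaled` back to kelvin using the target's minimum and maximum.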

## 4. Training strategy

The next step is to select an appropriate training strategy, which defines what the neural network will learn. A general training strategy is composed of two concepts:

• A loss index.
• An optimization algorithm.

The loss index chosen is the normalized squared error with L2 regularization. This loss index is the default in approximation applications.

The optimization algorithm chosen is the quasi-Newton method. This optimization algorithm is the default for medium-sized applications like this one.
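The normalized squared error can be sketched as follows: squared prediction errors are normalized by the targets' squared deviation from their mean, so a model that always predicts the target mean scores exactly 1 (a plain-Python sketch of the standard definition, without the L2 term).

```python
def normalized_squared_error(y_pred, y_true):
    """NSE: sum of squared errors divided by the targets' squared deviation from their mean."""
    mean_t = sum(y_true) / len(y_true)
    sse = sum((p - t) ** 2 for p, t in zip(y_pred, y_true))
    norm = sum((t - mean_t) ** 2 for t in y_true)
    return sse / norm

# Predicting the target mean for every instance gives NSE = 1
targets = [1.0, 2.0, 3.0, 4.0]
print(normalized_squared_error([2.5] * 4, targets))  # 1.0
```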

Once the strategy has been set, we can train the neural network. The following chart shows how the training (blue) and selection (orange) errors decrease with the training epoch during the training process.

The most important training result is the final selection error. Indeed, this is a measure of the generalization capability of the neural network. Here the final selection error is 0.178 (NSE).

## 5. Model selection

The objective of model selection is to find the network architecture with the best generalization properties. That is, we want to improve the final selection error obtained before (0.178 NSE).

The best selection error is achieved by using a model whose complexity is the most appropriate to produce an adequate fit of the data. Order selection algorithms are responsible for finding the optimal number of perceptrons in the neural network.

The final selection error takes a minimum value at some point. Here, the optimal number of neurons is 10, which corresponds to a selection error of 0.164.

The above chart shows the error history for the different subsets during the growing neurons selection process. The blue line represents the training error, and the yellow line represents the selection error.
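The growing-neurons procedure can be sketched as a simple loop that trains increasingly complex models and keeps the one with the lowest selection error. The selection-error curve below is illustrative, not real training output.

```python
def growing_neurons(train_and_evaluate, max_neurons=10):
    """Try 1..max_neurons hidden neurons; keep the count with the lowest selection error."""
    best_n, best_error = None, float("inf")
    for n in range(1, max_neurons + 1):
        error = train_and_evaluate(n)   # returns the selection error for n neurons
        if error < best_error:
            best_n, best_error = n, error
    return best_n, best_error

# Toy selection-error curve with its minimum at 10 neurons (illustrative numbers)
curve = {n: 0.164 + 0.01 * abs(10 - n) for n in range(1, 11)}
print(growing_neurons(curve.get))  # (10, 0.164)
```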

## 6. Testing analysis

The objective of the testing analysis is to validate the generalization performance of the trained neural network. The testing analysis compares the values predicted by the neural network to the observed values.

A standard testing technique in approximation problems is to perform a linear regression analysis between the predicted and the real values, using an independent testing set. The next figure illustrates a graphical output provided by this testing analysis.

From the above chart, we can see that the neural network predicts well across the entire range of the critical temperature data. The correlation value is R² = 0.911, which is close to 1.
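The regression analysis between predictions and targets can be sketched in plain Python: fit a least-squares line and report its slope, intercept, and R².

```python
def linear_regression(x, y):
    """Least-squares slope, intercept, and R^2 between predictions x and targets y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    syy = sum((yi - my) ** 2 for yi in y)
    slope = sxy / sxx
    intercept = my - slope * mx
    r2 = sxy ** 2 / (sxx * syy)
    return slope, intercept, r2

# A perfect predictor gives slope 1, intercept 0, and R^2 = 1
print(linear_regression([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # (1.0, 0.0, 1.0)
```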

## 7. Model deployment

The model is now ready to estimate the critical temperature of a certain chemical compound.

We can plot a directional output of the neural network to see how the critical temperature varies with a given input while all other inputs are fixed. The next plot shows the critical temperature as a function of the geometric mean valence through a reference point.
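A directional output can be sketched as follows. The stand-in model, reference point, and input index are hypothetical, for illustration only.

```python
def directional_output(model, reference, index, values):
    """Evaluate the model along one input, holding all other inputs at reference values."""
    outputs = []
    for v in values:
        x = list(reference)
        x[index] = v           # vary only the chosen input
        outputs.append(model(x))
    return outputs

# Toy model standing in for the trained network (assumption, for illustration only)
model = lambda x: sum(x)
reference = [0.0] * 81
print(directional_output(model, reference, 5, [1.0, 2.0, 3.0]))  # [1.0, 2.0, 3.0]
```

Plotting `values` against the returned outputs reproduces the directional-output chart.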

Another task of the Model Deployment tool worth mentioning for this study is Calculate Outputs.

If we wanted to create a superconductor with specific chemical quantities, this tool would let us set the inputs accordingly and calculate the predicted critical temperature for our purpose.

The file superconductor.py contains the Python code to calculate a compound's critical temperature.