In this post, we compare the load capacity of three machine learning platforms (TensorFlow, PyTorch, and Neural Designer) on an approximation benchmark.

In data science and machine learning platforms, capacity refers to the maximum amount of data that a program can analyze. These platforms are developed by Google, Facebook, and Artelnics, respectively.

[Figure: data capacity of TensorFlow, PyTorch, and Neural Designer. Neural Designer loads a dataset 1.8 times larger than the other two platforms.]

As we will see, Neural Designer can load a dataset 1.8 times larger than TensorFlow and PyTorch.

In this article, we provide all the steps that you need to reproduce the results using the free trial of Neural Designer.

Contents:
  • Introduction
  • Benchmark application
  • Reference computer
  • Results
  • Conclusions

Introduction

The maximum amount of data a tool can analyze depends on different factors.

Some of the most important are the programming language in which the tool is written and the way it manages memory internally.

The following table summarizes the technical features of these tools that might impact their memory usage.

                TensorFlow          PyTorch             Neural Designer
Written in      C++, CUDA, Python   C++, CUDA, Python   C++, CUDA
Interface       Python              Python              Graphical User Interface

Although C++ is at the core of the three platforms, their interfaces differ.

The most common use of TensorFlow and PyTorch is through a Python API.

Neural Designer, on the other hand, is written entirely in C++ and is operated through a graphical user interface.

As we will see, a Python interface results in higher memory consumption, and therefore a lower capacity to load data.

Benchmark application

To test the capacity of the three platforms, we attempt to load Rosenbrock data files of increasing size, keeping the number of variables constant and varying the number of samples.

The following table shows the correspondence between the number of Rosenbrock samples and the resulting file size.

Filename                                          Floating-point numbers (×10⁹)   Samples        Size (GB)
Rosenbrock_1000_variables_1000000_samples.csv      1                               1,000,000      22
Rosenbrock_1000_variables_2000000_samples.csv      2                               2,000,000      44
Rosenbrock_1000_variables_3000000_samples.csv      3                               3,000,000      65
Rosenbrock_1000_variables_4000000_samples.csv      4                               4,000,000      86
Rosenbrock_1000_variables_5000000_samples.csv      5                               5,000,000     107
Rosenbrock_1000_variables_6000000_samples.csv      6                               6,000,000     128
Rosenbrock_1000_variables_7000000_samples.csv      7                               7,000,000     149
Rosenbrock_1000_variables_8000000_samples.csv      8                               8,000,000     171
Rosenbrock_1000_variables_9000000_samples.csv      9                               9,000,000     192
Rosenbrock_1000_variables_10000000_samples.csv    10                              10,000,000     213

To create these files, check this article: The Rosenbrock Dataset Suite for benchmarking approximation algorithms and platforms.

The number of samples is the only parameter we change across the comparison tests; all other parameters are held constant. The next picture shows the benchmark setup.

Data set
  • Benchmark: Rosenbrock
  • Inputs number: 1000
  • Targets number: 1
  • Samples number: see the table above
Neural network
  • Layers number: 2
  • Layer 1:
    • Type: Perceptron (Dense)
    • Inputs number: 1000
    • Neurons number: 1000
    • Activation function: Hyperbolic tangent (tanh)
  • Layer 2:
    • Type: Perceptron (Dense)
    • Inputs number: 1000
    • Neurons number: 1
    • Activation function: Linear
  • Initialization: Random uniform [-1,1]
Training strategy
  • Loss index:
    • Error: Mean Squared Error (MSE)
    • Regularization: None
  • Optimization algorithm:
    • Algorithm: Adaptive Moment Estimation (Adam)
    • Batch size: 1000
    • Maximum epochs: 1000

We run the above benchmark for each platform (TensorFlow, PyTorch, and Neural Designer), increasing the sample size until the memory is exhausted.

We consider a test successful if the platform can both load the CSV file and train the neural network.

Reference computer

The next step involves selecting a computer to train neural networks using TensorFlow, PyTorch, and Neural Designer.

For a capacity test, the most crucial feature of the computer is its memory.

We have performed all calculations on an Amazon Web Services (AWS) instance.

In particular, we have chosen an r5.large instance so that you can easily reproduce the results.

The following table provides basic information about the computer used in this setting.

Operating system          Windows 10 Enterprise
Processor                 Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz
Installed memory (RAM)    16.0 GB
System type               64-bit operating system, x64-based processor

Once the computer is chosen, we install TensorFlow (version 2.1.0), PyTorch (version 1.7.0), and Neural Designer (version 5.0.0) on it.

Results

The last step is to run the benchmark application using TensorFlow, PyTorch, and Neural Designer.

Then, we compare the capacity results provided by those platforms.

The following table indicates whether each platform can load the various data files.

A check mark (✓) means that the platform can load and train on the file; a cross (✗) means that it cannot.

Floating-point numbers (×10⁹)   TensorFlow   PyTorch   Neural Designer
 1                              ✓            ✓         ✓
 2                              ✓            ✓         ✓
 3                              ✓            ✓         ✓
 4                              ✓            ✓         ✓
 5                              ✓            ✓         ✓
 6                              ✗            ✗         ✓
 7                              ✗            ✗         ✓
 8                              ✗            ✗         ✓
 9                              ✗            ✗         ✓
10                              ✗            ✗         ✗

As we can see, the maximum capacity of both TensorFlow and PyTorch is 5 × 10⁹ data points, while the maximum capacity of Neural Designer is 9 × 10⁹ data points.

These results can also be depicted graphically.

From these results, we can conclude that Neural Designer can load a dataset 1.8 times larger than TensorFlow and PyTorch.

The following picture shows that Neural Designer can train a neural network with 9 billion data points on a computer with 16 GB of RAM.

The following picture shows how TensorFlow runs out of memory when trying to load a data file containing 6 billion data points.

As we can see, when pandas, the external Python module that TensorFlow relies on here to read the CSV file, tries to load 6 billion data points, the platform crashes due to lack of RAM. TensorFlow's maximum capacity is therefore 5 billion data points.

The following picture shows how PyTorch also runs out of memory when loading a data file containing 6 billion data points.

Again, when pandas attempts to load 6 billion data points for PyTorch, the platform crashes. PyTorch's maximum capacity is therefore 5 billion data points.

Conclusions

The maximum capacity of both TensorFlow and PyTorch is 5 billion data points, while the maximum capacity of Neural Designer is 9 billion.

This difference arises because TensorFlow and PyTorch rely on an external module, Python's pandas, to load data. In contrast, Neural Designer uses its own built-in loading routines, which gives it an advantage.

Python is a high-level programming language, which makes it convenient to work with but adds memory overhead; this is what limits the capacity of Python-based tools to load data.

To reproduce these results, download the free trial of Neural Designer and follow the steps described in this article.
