{"id":3421,"date":"2025-10-14T10:59:21","date_gmt":"2025-10-14T08:59:21","guid":{"rendered":"https:\/\/neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/"},"modified":"2025-11-28T11:00:35","modified_gmt":"2025-11-28T10:00:35","slug":"the-rosenbrock-benchmark-for-machine-learning","status":"publish","type":"blog","link":"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/","title":{"rendered":"The Rosenbrock benchmark suite for machine learning"},"content":{"rendered":"<p>There are numerous repositories with a large number of datasets for machine learning.<\/p>\n<section>Some of the most important ones are the <a href=\"https:\/\/archive.ics.uci.edu\/ml\/index.php\">UCI Machine Learning Repository<\/a>\u00a0or <a href=\"https:\/\/www.kaggle.com\/\">Kaggle<\/a>. However, using those datasets for performance benchmarking can be difficult. Indeed, they lack the consistency required to measure key performance indicators such as data capacity, training speed, model accuracy, and inference speed.<\/p>\n<p>This post introduces a family of datasets known as the Rosenbrock Dataset Suite. 
The objective is to facilitate benchmarking of machine learning platforms.<\/p>\n<h3>Contents<\/h3>\n<ul>\n<li><a href=\"#Introduction\">Introduction<\/a>.<\/li>\n<li><a href=\"#RosenbrockFunction\">Rosenbrock function<\/a>.<\/li>\n<li><a href=\"#CppCode\">C++ code<\/a>.<\/li>\n<li><a href=\"#PythonCode\">Python code<\/a>.<\/li>\n<li><a href=\"#DatasetsDownload\">Datasets download<\/a>.<\/li>\n<li><a href=\"#Conclusions\">Conclusions<\/a>.<\/li>\n<\/ul>\n<\/section>\n<section id=\"Introduction\">\n<h2>Introduction<\/h2>\n<p><img decoding=\"async\" src=\"https:\/\/www.neuraldesigner.com\/images\/data-capacity.svg\" \/><\/p>\n<h3>Data capacity tests<\/h3>\n<p>The data capacity of a machine learning platform can be defined as the largest dataset that it can process.<\/p>\n<p>That is, the platform should be able to perform all the essential tasks with that dataset.<\/p>\n<p>Data capacity can be measured as the number of samples that a machine learning platform can process for a given number of variables.<\/p>\n<p>The most significant drawback of repository datasets is that they usually have a fixed number of variables and samples. This makes it difficult to test how a machine learning platform behaves with different dataset sizes.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.neuraldesigner.com\/images\/training-speed.svg\" \/><\/p>\n<h3>Training speed tests<\/h3>\n<p>Training speed is defined as the number of samples per second that a machine learning platform processes during training.<\/p>\n<p>The training speed depends very much on the dataset size. 
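<\/p>\n<p>The data capacity and training speed tests described above require datasets of arbitrary size. As a minimal sketch (assuming NumPy; the function names are illustrative, not part of the original post), a parameterized generator based on the Rosenbrock function defined later in this post could look like this:<\/p>

```python
import numpy as np

def rosenbrock(x):
    # Row-wise Rosenbrock value for a (samples, inputs) array
    return np.sum(100.0 * (x[:, 1:] - x[:, :-1] ** 2) ** 2
                  + (1.0 - x[:, :-1]) ** 2, axis=1)

def make_dataset(samples_number, inputs_number, seed=0):
    # Inputs drawn uniformly from [-1, 1]; targets are the Rosenbrock values
    rng = np.random.default_rng(seed)
    inputs = rng.uniform(-1.0, 1.0, size=(samples_number, inputs_number))
    return inputs, rosenbrock(inputs)

# Example: a 10,000-sample, 10-input dataset for a training-speed test
x, y = make_dataset(10000, 10)
```

<p>Calling make_dataset with different values of samples_number and inputs_number yields the whole grid of dataset sizes that these tests need.<\/p>\n<p>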
For instance, CPUs might provide faster training than GPUs for small datasets and slower training for big datasets.<\/p>\n<p>Therefore, we need to generate datasets with arbitrary numbers of variables and samples to see how these sizes affect training performance.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.neuraldesigner.com\/images\/model-precision.svg\" \/><\/p>\n<h3>Model precision tests<\/h3>\n<p>Precision can be defined as the mean error of a model against a testing dataset.<\/p>\n<p>Most real datasets are noisy. This means that the full fit of the model to the data cannot be verified.<\/p>\n<p>Therefore, it is desirable to have datasets with which we can potentially build models with zero error.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.neuraldesigner.com\/images\/inference-speed.svg\" \/><\/p>\n<h3>Inference speed tests<\/h3>\n<p>The inference speed is the time to calculate the outputs as a function of the inputs. Inference speed is measured as the number of samples per second.<\/p>\n<p>As before, we need to generate datasets with an arbitrary number of variables and samples to see how these sizes affect inference performance.<\/p>\n<\/section>\n<section id=\"RosenbrockFunction\">\n<h2>Rosenbrock function<\/h2>\n<p>The Rosenbrock function is a non-convex function, introduced by Howard H. Rosenbrock in 1960. It is also known as Rosenbrock&#8217;s valley or Rosenbrock&#8217;s banana function.<\/p>\n<p>It is used as a performance test problem for optimization algorithms.<\/p>\n<p>In the formulas below, n is the number of samples, indexed by j, and m is the number of input variables, indexed by i.<\/p>\n<p>$$x_{i,j} = \\text{rand}(-1,+1)$$<\/p>\n<p>$$y_{j} = \\sum_{i=1}^{m-1}\\left[ 100\\left(x_{i+1,j}-x_{i,j}^{2} \\right)^{2}+\\left(1-x_{i,j} \\right)^{2} \\right]$$<\/p>\n<p><!-- The next chart is a plot of the Rosenbrock function in two variables. 
--><\/p>\n<p>As the outputs from the Rosenbrock function are real values, this dataset suite is suitable for approximation problems.<br \/>\nTherefore, we cannot test the performance of classification or forecasting applications with it.<\/p>\n<p>The Rosenbrock dataset suite allows the creation of datasets with any number of variables and samples. Thus, this suite is perfect for performing data capacity, training speed, and inference speed tests.<\/p>\n<p>The Rosenbrock data is extracted from a deterministic function with a highly complex shape. It should be possible to build a machine learning model of that function with any desired degree of accuracy. Therefore, Rosenbrock datasets are ideal for model precision tests.<\/p>\n<\/section>\n<section id=\"CppCode\">\n<h2>C++ code<\/h2>\n<p>The following code shows how to generate a Rosenbrock dataset using C++.<\/p>\n<pre style=\"margin: 0; line-height: 125%;\">\/\/ System includes\r\n#include &lt;iostream&gt;\r\n#include &lt;fstream&gt;\r\n#include &lt;string&gt;\r\n#include &lt;random&gt;\r\n\r\nusing namespace std;\r\n\r\nint main()\r\n{\r\n    cout &lt;&lt; \"Rosenbrock Dataset Generator.\" &lt;&lt; endl;\r\n\r\n    const int inputs_number  = 2;\r\n    const int samples_number = 10000;\r\n\r\n    const string filename = \"G:\/R__\" + to_string(samples_number) + \"_samples_\" + to_string(inputs_number) + \"_inputs.csv\";\r\n\r\n    float inputs[inputs_number];\r\n\r\n    default_random_engine generator;\r\n    uniform_real_distribution&lt;float&gt; distribution(-1.0, 1.0);\r\n\r\n    ofstream file(filename);\r\n\r\n    for(int j = 0; j &lt; samples_number; j++)\r\n    {\r\n        float rosenbrock = 0.0;\r\n\r\n        \/\/ Sample the inputs uniformly in [-1, 1] and write them to the file\r\n        for(int i = 0; i &lt; inputs_number; i++)\r\n        {\r\n            inputs[i] = distribution(generator);\r\n            file &lt;&lt; inputs[i] &lt;&lt; \",\";\r\n        }\r\n\r\n        \/\/ Evaluate the Rosenbrock function on the sampled inputs\r\n        for(int i = 0; i &lt; inputs_number - 1; i++)\r\n        {\r\n            rosenbrock += (1 - inputs[i])*(1 - inputs[i])\r\n                        + 100*(inputs[i+1] - inputs[i]*inputs[i])*(inputs[i+1] - inputs[i]*inputs[i]);\r\n        }\r\n\r\n        file &lt;&lt; rosenbrock &lt;&lt; endl;\r\n    }\r\n\r\n    file.close();\r\n\r\n    return 0;\r\n}\r\n<\/pre>\n<\/section>\n<section id=\"PythonCode\">\n<h2>Python code<\/h2>\n<p>You can also generate a Rosenbrock dataset with the following Python code.<\/p>\n<pre style=\"margin: 0; line-height: 125%;\">import numpy as np\r\nimport pandas as pd\r\n\r\nsamples_number = 10000\r\ninputs_number = 2\r\n\r\n# Sample the inputs uniformly in [-1, 1]\r\ninputs = np.random.uniform(-1.0, 1.0, size=(samples_number, inputs_number))\r\n\r\n# Evaluate the Rosenbrock function for each sample\r\nrosenbrock = []\r\n\r\nfor j in range(samples_number):\r\n    r = 0\r\n    for i in range(inputs_number - 1):\r\n        r += (1.0 - inputs[j][i])**2 + 100.0*(inputs[j][i+1] - inputs[j][i]**2)**2\r\n    rosenbrock.append(r)\r\n\r\ndata = pd.concat([pd.DataFrame(inputs), pd.DataFrame(rosenbrock)], axis=1)\r\n\r\nfilename = \"G:\/R_\" + str(samples_number) + \"_samples_\" + str(inputs_number) + \"_variables_python.csv\"\r\ndata.to_csv(filename, index=False, sep=\",\")\r\n<\/pre>\n<p>Notice that the input values are drawn uniformly from the interval [-1, 1].<\/p>\n<\/section>\n<section id=\"DatasetsDownload\">\n<h2>Datasets download<\/h2>\n<p>We provide the following datasets:<\/p>\n<table>\n<tbody>\n<tr>\n<th>Rows \/ Columns<\/th>\n<th style=\"text-align: right;\">10<\/th>\n<th style=\"text-align: right;\">100<\/th>\n<th style=\"text-align: right;\">1000<\/th>\n<\/tr>\n<tr>\n<th style=\"text-align: right;\">1000<\/th>\n<td><a href=\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2025\/07\/rosenbrok_1000_10.csv\">rosenbrok_10<sup>3<\/sup>_10.csv<\/a><\/td>\n<td><a 
href=\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2025\/07\/rosenbrok_1000_100.csv\">rosenbrok_10<sup>3<\/sup>_10<sup>2<\/sup>.csv<\/a><\/td>\n<td><a href=\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2025\/07\/rosenbrok_1000_1000.zip\">rosenbrok_10<sup>3<\/sup>_10<sup>3<\/sup>.csv<\/a><\/td>\n<\/tr>\n<tr>\n<th style=\"text-align: right;\">10000<\/th>\n<td>rosenbrok_10<sup>4<\/sup>_10.csv<\/td>\n<td>rosenbrok_10<sup>4<\/sup>_10<sup>2<\/sup>.csv<\/td>\n<td>rosenbrok_10<sup>4<\/sup>_10<sup>3<\/sup>.csv<\/td>\n<\/tr>\n<tr>\n<th style=\"text-align: right;\">100000<\/th>\n<td>rosenbrok_10<sup>5<\/sup>_10.csv<\/td>\n<td>rosenbrok_10<sup>5<\/sup>_10<sup>2<\/sup>.csv<\/td>\n<td>rosenbrok_10<sup>5<\/sup>_10<sup>3<\/sup>.csv<\/td>\n<\/tr>\n<tr>\n<th style=\"text-align: right;\">1000000<\/th>\n<td>rosenbrok_10<sup>6<\/sup>_10.csv<\/td>\n<td>rosenbrok_10<sup>6<\/sup>_10<sup>2<\/sup>.csv<\/td>\n<td>rosenbrok_10<sup>6<\/sup>_10<sup>3<\/sup>.csv<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/section>\n<section id=\"Conclusions\">\n<h2>Conclusions<\/h2>\n<p>This blog introduces a function to measure machine learning platforms&#8217; data capacity, training speed, model accuracy, and inference speed.<\/p>\n<p>Rosenbrock datasets have a firm consistency and do not have noise. 
For this reason, it is a powerful alternative to datasets from popular repositories for benchmarking.<\/p>\n<p>The data science and machine learning platform <a href=\"https:\/\/www.neuraldesigner.com\/\">Neural Designer<\/a>\u00a0contains many utilities to perform descriptive, diagnostic, predictive, and prescriptive analytics easily.<\/p>\n<p>You can <a href=\"https:\/\/www.neuraldesigner.com\/free-trial\">download<\/a> Neural Designer now and try it for free.<\/p>\n<\/section>\n<section>\n<h2>Related posts<\/h2>\n<\/section>\n<style><![CDATA[ @media all and (max-width: 1000000px) { .content .x700 { display: block; } .content .x490 { display: none; } .content .x290 { display: none; } } @media all and (max-width: 800px) { .content .x700 { display: none; } .content .x490 { display: block; } .content .x290 { display: none; } } ]]><\/style>\n","protected":false},"author":23,"featured_media":2731,"template":"","categories":[35,30],"tags":[37],"class_list":["post-3421","blog","type-blog","status-publish","has-post-thumbnail","hentry","category-industry","category-tutorials","tag-platforms"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.4 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>The Rosenbrock benchmark suite for machine learning - Neural Designer<\/title>\n<meta name=\"description\" content=\"This post introduces a family of datasets known as the Rosenbrock Dataset Suite. 
The objective is to facilitate benchmarking of machine learning platforms.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"The Rosenbrock benchmark suite for machine learning - Neural Designer\" \/>\n<meta property=\"og:description\" content=\"This post introduces a family of datasets known as the Rosenbrock Dataset Suite. The objective is to facilitate benchmarking of machine learning platforms.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/\" \/>\n<meta property=\"og:site_name\" content=\"Neural Designer\" \/>\n<meta property=\"article:modified_time\" content=\"2025-11-28T10:00:35+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/Rosenbrock_gif.gif\" \/>\n\t<meta property=\"og:image:width\" content=\"520\" \/>\n\t<meta property=\"og:image:height\" content=\"320\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/gif\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@NeuralDesigner\" \/>\n<meta name=\"twitter:label1\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"5 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/\",\"url\":\"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/\",\"name\":\"The Rosenbrock benchmark suite for machine learning - Neural Designer\",\"isPartOf\":{\"@id\":\"https:\/\/www.neuraldesigner.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/Rosenbrock_gif.gif\",\"datePublished\":\"2025-10-14T08:59:21+00:00\",\"dateModified\":\"2025-11-28T10:00:35+00:00\",\"description\":\"This post introduces a family of datasets known as the Rosenbrock Dataset Suite. 
The objective is to facilitate benchmarking of machine learning platforms.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/#primaryimage\",\"url\":\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/Rosenbrock_gif.gif\",\"contentUrl\":\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/Rosenbrock_gif.gif\",\"width\":520,\"height\":320},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.neuraldesigner.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Blog\",\"item\":\"https:\/\/www.neuraldesigner.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"The Rosenbrock benchmark suite for machine learning\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.neuraldesigner.com\/#website\",\"url\":\"https:\/\/www.neuraldesigner.com\/\",\"name\":\"Neural Designer\",\"description\":\"Explanable AI Platform\",\"publisher\":{\"@id\":\"https:\/\/www.neuraldesigner.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.neuraldesigner.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.neuraldesigner.com\/#organization\",\"name\":\"Neural 
Designer\",\"url\":\"https:\/\/www.neuraldesigner.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.neuraldesigner.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/05\/logo-neural-1.png\",\"contentUrl\":\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/05\/logo-neural-1.png\",\"width\":1024,\"height\":223,\"caption\":\"Neural Designer\"},\"image\":{\"@id\":\"https:\/\/www.neuraldesigner.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/NeuralDesigner\",\"https:\/\/es.linkedin.com\/showcase\/neuraldesigner\/\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"The Rosenbrock benchmark suite for machine learning - Neural Designer","description":"This post introduces a family of datasets known as the Rosenbrock Dataset Suite. The objective is to facilitate benchmarking of machine learning platforms.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/","og_locale":"en_US","og_type":"article","og_title":"The Rosenbrock benchmark suite for machine learning - Neural Designer","og_description":"This post introduces a family of datasets known as the Rosenbrock Dataset Suite. The objective is to facilitate benchmarking of machine learning platforms.","og_url":"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/","og_site_name":"Neural Designer","article_modified_time":"2025-11-28T10:00:35+00:00","og_image":[{"url":"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/Rosenbrock_gif.gif","width":520,"height":320,"type":"image\/gif"}],"twitter_card":"summary_large_image","twitter_site":"@NeuralDesigner","twitter_misc":{"Est. 
reading time":"5 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/","url":"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/","name":"The Rosenbrock benchmark suite for machine learning - Neural Designer","isPartOf":{"@id":"https:\/\/www.neuraldesigner.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/#primaryimage"},"image":{"@id":"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/#primaryimage"},"thumbnailUrl":"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/Rosenbrock_gif.gif","datePublished":"2025-10-14T08:59:21+00:00","dateModified":"2025-11-28T10:00:35+00:00","description":"This post introduces a family of datasets known as the Rosenbrock Dataset Suite. The objective is to facilitate benchmarking of machine learning 
platforms.","breadcrumb":{"@id":"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/#primaryimage","url":"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/Rosenbrock_gif.gif","contentUrl":"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/Rosenbrock_gif.gif","width":520,"height":320},{"@type":"BreadcrumbList","@id":"https:\/\/www.neuraldesigner.com\/blog\/the-rosenbrock-benchmark-for-machine-learning\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.neuraldesigner.com\/"},{"@type":"ListItem","position":2,"name":"Blog","item":"https:\/\/www.neuraldesigner.com\/blog\/"},{"@type":"ListItem","position":3,"name":"The Rosenbrock benchmark suite for machine learning"}]},{"@type":"WebSite","@id":"https:\/\/www.neuraldesigner.com\/#website","url":"https:\/\/www.neuraldesigner.com\/","name":"Neural Designer","description":"Explanable AI Platform","publisher":{"@id":"https:\/\/www.neuraldesigner.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.neuraldesigner.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.neuraldesigner.com\/#organization","name":"Neural 
Designer","url":"https:\/\/www.neuraldesigner.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.neuraldesigner.com\/#\/schema\/logo\/image\/","url":"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/05\/logo-neural-1.png","contentUrl":"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/05\/logo-neural-1.png","width":1024,"height":223,"caption":"Neural Designer"},"image":{"@id":"https:\/\/www.neuraldesigner.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/NeuralDesigner","https:\/\/es.linkedin.com\/showcase\/neuraldesigner\/"]}]}},"_links":{"self":[{"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/blog\/3421","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/blog"}],"about":[{"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/types\/blog"}],"author":[{"embeddable":true,"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/users\/23"}],"version-history":[{"count":1,"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/blog\/3421\/revisions"}],"predecessor-version":[{"id":21449,"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/blog\/3421\/revisions\/21449"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/media\/2731"}],"wp:attachment":[{"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/media?parent=3421"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/categories?post=3421"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/tags?post=3421"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}