{"id":3411,"date":"2025-07-03T10:59:21","date_gmt":"2025-07-03T08:59:21","guid":{"rendered":"https:\/\/neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/"},"modified":"2025-11-27T15:03:52","modified_gmt":"2025-11-27T14:03:52","slug":"perceptron-the-main-component-of-neural-networks","status":"publish","type":"blog","link":"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/","title":{"rendered":"Mathematics of the perceptron neuron model"},"content":{"rendered":"\t\t<div data-elementor-type=\"wp-post\" data-elementor-id=\"3411\" class=\"elementor elementor-3411\" data-elementor-post-type=\"blog\">\n\t\t\t\t\t\t<section class=\"elementor-section elementor-top-section elementor-element elementor-element-42b7242 elementor-section-boxed elementor-section-height-default elementor-section-height-default\" data-id=\"42b7242\" data-element_type=\"section\">\n\t\t\t\t\t\t<div class=\"elementor-container elementor-column-gap-default\">\n\t\t\t\t\t<div class=\"elementor-column elementor-col-100 elementor-top-column elementor-element elementor-element-5377ed24\" data-id=\"5377ed24\" data-element_type=\"column\">\n\t\t\t<div class=\"elementor-widget-wrap elementor-element-populated\">\n\t\t\t\t\t\t<div class=\"elementor-element elementor-element-48bb922b elementor-widget elementor-widget-text-editor\" data-id=\"48bb922b\" data-element_type=\"widget\" data-widget_type=\"text-editor.default\">\n\t\t\t\t\t\t\t\t\t<section>One of the hottest topics of artificial intelligence and machine learning are&nbsp;<a href=\"https:\/\/www.neuraldesigner.com\/learning\/neural-networks-tutorial\">neural networks<\/a>. These are computational models based on the brain&#8217;s structure, whose most significant property is their ability to learn from data.<p><\/p>\n<p><a href=\"https:\/\/www.neuraldesigner.com\/learning\/neural-networks-tutorial\">Neural networks<\/a>&nbsp;are usually arranged as sequences of layers. 
In turn, layers are made up of individual neurons. Therefore, neurons are the basic information processing units in neural networks.<\/p>\n<p>The most widely used neuron model is the perceptron. This is the neuron model behind&nbsp;<a href=\"https:\/\/www.neuraldesigner.com\/learning\/tutorials\/neural-network#PerceptronLayer\">perceptron layers<\/a> (also called dense layers), which are present in the majority of&nbsp;<a href=\"https:\/\/www.neuraldesigner.com\/learning\/tutorials\/neural-network\">neural networks<\/a>.<\/p>\n<p>In this post, we explain the mathematics of the perceptron neuron model:<\/p>\n<ol>\n<li><a href=\"#PerceptronElements\">Perceptron elements<\/a>.<\/li>\n<li><a href=\"#NeuronParameters\">Neuron parameters<\/a>.<\/li>\n<li><a href=\"#CombinationFunction\">Combination function<\/a>.<\/li>\n<li><a href=\"#ActivationFunction\">Activation function<\/a>.<\/li>\n<li><a href=\"#OutputFunction\">Output function<\/a>.<\/li>\n<li><a href=\"#Conclusions\">Conclusions<\/a>.<\/li>\n<\/ol>\n<\/section>\n<section id=\"PerceptronElements\">\n<h2>1. 
Perceptron elements<\/h2>\n<p>The following figure is a graphical representation of a perceptron.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.neuraldesigner.com\/images\/neuron_model.webp\" alt=\"Neuron model\"><\/p>\n<p>In the above neuron, we can see the following elements:<\/p>\n<ul>\n<li>The inputs \\( \\mathbf{x}=(x_1, \\ldots, x_n) \\).<\/li>\n<li>The bias \\( b \\) and the synaptic weights \\( \\mathbf{w}=(w_1, \\ldots, w_n) \\).<\/li>\n<li>The combination function \\( c(\\cdot) \\).<\/li>\n<li>The activation function \\( a(\\cdot) \\).<\/li>\n<li>The output \\( y \\).<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p> For example, consider the neuron in the following figure, with three inputs.<br>            It transforms the inputs \\( \\mathbf{x}=(x_1,x_2,x_3) \\) into a single output \\( y \\).<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.neuraldesigner.com\/images\/neuron_example.webp\" alt=\"Neuron example \"><\/p>\n<p>In this neuron, we can see the following elements:<\/p>\n<ul>\n<li>The inputs \\( \\mathbf{x}=(x_1,x_2,x_3) \\).<\/li>\n<li>The neuron parameters, namely the bias \\( b=-0.5 \\) and the synaptic weights \\( \\mathbf{w}=(1.0,-0.75,0.25) \\).<\/li>\n<li>The combination function \\( c(\\cdot) \\), which merges the inputs with the bias and the synaptic weights.<\/li>\n<li>The activation function, here the hyperbolic tangent \\( \\tanh(\\cdot) \\), which maps that combination to the output of the neuron.<\/li>\n<li>The output \\( y \\).<\/li>\n<\/ul>\n<\/section>\n<section id=\"NeuronParameters\">\n<h2>2. Neuron parameters<\/h2>\n<p>The neuron parameters consist of a bias and a set of synaptic weights.<\/p>\n<ul>\n<li>The bias \\( b \\) is a real number.<\/li>\n<li>The synaptic weights \\( \\mathbf{w}=(w_1,\\ldots,w_n) \\) are a vector whose size equals the number of inputs. 
<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p>Therefore, the total number of parameters is \\( 1+n \\), where \\( n \\) is the number of inputs to the neuron.<\/p>\n<p>Consider the perceptron of the example above. That neuron model has a bias and three synaptic weights:<\/p>\n<ul>\n<li>The bias is \\( b = -0.5 \\).<\/li>\n<li>The synaptic weight vector is \\( \\mathbf{w}=(1.0,-0.75,0.25) \\).<\/li>\n<\/ul>\n<p>&nbsp;<\/p>\n<p>The number of parameters in this neuron is \\( 1+3=4 \\).<\/p>\n<\/section>\n<section id=\"CombinationFunction\">\n<h2>3. Combination function<\/h2>\n<p>The combination function takes the input vector \\( \\mathbf{x} \\) and produces a combined value, or net input, \\( c \\). The combination is computed as the bias plus a linear combination of the synaptic weights and the inputs,<\/p>\n<p> $$<br>        c = b + \\sum_{i=1}^{n} w_i \\cdot x_i.<br>        $$<\/p>\n<p>Note that the bias increases or reduces the net input to the activation function, depending on whether it is positive or negative.<br>The bias is sometimes represented as a synaptic weight connected to an input fixed to \\( +1 \\).<\/p>\n<p>\n            Consider the neuron of our example.<br>            The combination value of this perceptron for an input vector \\( \\mathbf{x} = (-0.8,0.2,-0.4) \\) is\n        <\/p>\n<p>$$<br>c = -0.5 + 1.0 \\cdot (-0.8)<br>+ (-0.75) \\cdot 0.2 + 0.25 \\cdot (-0.4)<br>= -1.55.<br>$$<\/p>\n<\/section>\n<section id=\"ActivationFunction\">\n<h2>4. Activation function<\/h2>\n<p>The activation function defines the output from the neuron in terms of its combination. In practice, we can consider many useful activation functions. 
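The combination value computed in the section above can be reproduced with a short Python sketch (the function and variable names are my own illustration, not part of any library):

```python
def combination(bias, weights, inputs):
    """Net input of a perceptron: the bias plus the weighted sum of the inputs."""
    return bias + sum(w_i * x_i for w_i, x_i in zip(weights, inputs))

# Parameters and inputs from the worked example.
b = -0.5
w = (1.0, -0.75, 0.25)
x = (-0.8, 0.2, -0.4)

c = combination(b, w, x)
print(round(c, 2))  # -1.55
```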
Four of the most widely used are the following:<\/p>\n<ul>\n<li><a href=\"#HyperbolicTangentActivation\">Hyperbolic tangent activation<\/a>.<\/li>\n<li><a href=\"#RectifiedLinearActivation\">Rectified linear (ReLU) activation<\/a>.<\/li>\n<li><a href=\"#LinearActivation\">Linear activation<\/a>.<\/li>\n<li><a href=\"#LogisticActivation\">Logistic activation<\/a>.<\/li>\n<\/ul>\n<h3 id=\"HyperbolicTangentActivation\">Hyperbolic tangent activation<\/h3>\n<p>The hyperbolic tangent is defined by<\/p>\n<p>$$<br>a = \\tanh(c).<br>$$<\/p>\n<p>This activation function is represented in the next figure.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.neuraldesigner.com\/images\/hyperbolic_tangent.webp\" alt=\"Hyperbolic tangent activation function\"><\/p>\n<p>As we can see, the hyperbolic tangent has a sigmoid shape and varies in the range \\( (-1,1) \\). This activation is a monotonically increasing function that balances linear and non-linear behaviour.<\/p>\n<p>In our example, the combination value is \\( c = -1.55 \\). As the chosen function is the hyperbolic tangent, the activation of this neuron is<\/p>\n<p>$$<br>a = \\tanh(-1.55)<br>= -0.91.<br>$$<\/p>\n<p>The hyperbolic tangent function is widely used in the hidden layers of neural networks for&nbsp;<a href=\"https:\/\/www.neuraldesigner.com\/learning\/tutorials\/neural-networks-applications#Approximation\">approximation<\/a> and <a href=\"https:\/\/www.neuraldesigner.com\/learning\/tutorials\/neural-networks-applications#Classification\">classification<\/a> tasks.<\/p>\n<h3 id=\"RectifiedLinearActivation\">Rectified linear (ReLU) activation<\/h3>\n<p>The rectified linear activation function (also known as ReLU) is another non-linear activation function that has gained popularity in machine learning. The activation is zero for negative combinations. 
The activation is equal to the combination when the combination is zero or positive.<\/p>\n<p>$$a = \\left\\{ \\begin{array}{lll}<br>        0 &amp;\\textrm{if}&amp; \\textrm{$c &lt; 0$} \\\\<br>        c &amp;\\textrm{if}&amp; \\textrm{$c \\geq 0$}<br>        \\end{array} \\right. $$<br>The ReLU function is represented in the next figure.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.neuraldesigner.com\/images\/rectified-linear-activation.webp\" alt=\"rectified linear activation function\"><\/p>\n<p>An advantage of the ReLU function is that, due to its simplicity, it is more computationally efficient than other non-linear activation functions. The ReLU function is widely used in the hidden layers of neural networks for&nbsp;<a href=\"https:\/\/www.neuraldesigner.com\/learning\/tutorials\/neural-networks-applications#Approximation\">approximation<\/a>&nbsp;and <a href=\"https:\/\/www.neuraldesigner.com\/learning\/tutorials\/neural-networks-applications#Classification\">classification<\/a> tasks.<\/p>\n<h3 id=\"LinearActivation\">Linear activation<\/h3>\n<p>For the linear activation function, we have<\/p>\n<p>$$<br>a = c.<br>$$<\/p>\n<p>Thus, the output of a neuron with a linear activation function is equal to its combination. The following figure plots the linear activation function.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.neuraldesigner.com\/images\/linear.webp\" alt=\"Linear activation function\"><\/p>\n<p>The linear activation function is widely used in the output layer of&nbsp;<a href=\"https:\/\/www.neuraldesigner.com\/learning\/tutorials\/neural-networks-applications#Approximation\">approximation<\/a>&nbsp;neural networks.<\/p>\n<h3 id=\"LogisticActivation\">Logistic activation<\/h3>\n<p>Like the hyperbolic tangent, the logistic function has a sigmoid shape. 
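The four activations listed in this section can be written as one-liners in Python (a minimal sketch for illustration; neural network frameworks provide their own implementations):

```python
import math

def hyperbolic_tangent(c):
    # a = tanh(c); output lies in (-1, 1)
    return math.tanh(c)

def relu(c):
    # a = 0 for c < 0, a = c for c >= 0
    return max(0.0, c)

def linear(c):
    # a = c
    return c

def logistic(c):
    # a = 1 / (1 + exp(-c)); output lies in (0, 1)
    return 1.0 / (1.0 + math.exp(-c))

# Activation of the worked example, whose combination value is c = -1.55.
print(round(hyperbolic_tangent(-1.55), 2))  # -0.91
```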
The logistic function is defined by<\/p>\n<p>        $$<br>        a = \\frac{1}{1+\\exp{(-c)}}.<br>        $$<\/p>\n<p>This activation is represented in the next figure.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.neuraldesigner.com\/images\/logistic.webp\" alt=\"Logistic activation function\"><\/p>\n<p>As we can see, the image of the logistic function is \\( (0,1) \\). This property is convenient because it allows us to interpret the outputs as probabilities. Therefore, the logistic function is widely used in the output layer of neural networks for <a href=\"https:\/\/www.neuraldesigner.com\/learning\/tutorials\/neural-networks-applications#Classification\">binary classification<\/a>.<\/p>\n<\/section>\n<section id=\"OutputFunction\">\n<h2>5. Output function<\/h2>\n<p>The output calculation is the most important function in the perceptron. Given the input signals to the neuron, it computes the output signal. The output function is the composition of the combination and the activation functions.<\/p>\n<p>The following figure is an activity diagram of how the information is propagated in the perceptron.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/www.neuraldesigner.com\/images\/propagation.webp\" alt=\"Propagation\"><\/p>\n<p>Therefore, the final expression of the output from a neuron as a function of its input is<\/p>\n<p> $$<br>        y = a(b + \\mathbf{w} \\cdot \\mathbf{x}).<br>        $$<\/p>\n<p>\n            Consider the perceptron of our example.<br>            If we apply an input \\( \\mathbf{x} = (-0.8,0.2,-0.4) \\), the output \\( y \\) is\n        <\/p>\n<p>$$y = \\tanh(-0.5 + 1.0 \\cdot (-0.8) + (-0.75) \\cdot 0.2 + 0.25 \\cdot (-0.4))<br>= \\tanh(-1.55)<br>= -0.91.$$<\/p>\n<p>As we can see, the output function merges the combination and the activation functions.<\/p>\n<\/section>\n<section id=\"Conclusions\">\n<h2>6. 
Conclusions<\/h2>\n<p>A perceptron is a mathematical model of the behaviour of a single neuron in a biological nervous system.<\/p>\n<p>A single neuron can solve some simple tasks, but the power of neural networks comes when many of them are arranged in layers and connected in a network architecture.<\/p>\n<p><img decoding=\"async\" style=\"width: 400px; max-width: 100%;\" src=\"https:\/\/www.neuraldesigner.com\/images\/neural_network.webp\" alt=\"Deep Neural Network\"><\/p>\n<p>Although this post has focused on the perceptron, other neuron models have different characteristics and are used for different purposes.<\/p>\n<p>Some of them are the <a href=\"https:\/\/www.neuraldesigner.com\/learning\/tutorials\/neural-network#ScalingLayer\">scaling neuron<\/a>, the principal components neuron, or the <a href=\"https:\/\/www.neuraldesigner.com\/learning\/tutorials\/neural-network#UnscalingLayer\">unscaling neuron<\/a>.<\/p>\n<p>Some neuron models only make sense when they are contextualized in a layer and cannot be defined individually. 
Some of these are the recurrent, long short-term memory (LSTM) or&nbsp;<a href=\"https:\/\/www.neuraldesigner.com\/learning\/tutorials\/neural-network#ProbabilisticLayer\">probabilistic<\/a> layers.<\/p>\n<\/section>\n<section>\n<h2>Related posts<\/h2>\n<\/section>\t\t\t\t\t\t\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/div>\n\t\t\t\t\t<\/div>\n\t\t<\/section>\n\t\t\t\t<\/div>\n\t\t","protected":false},"author":13,"featured_media":1730,"template":"","categories":[],"tags":[36],"class_list":["post-3411","blog","type-blog","status-publish","has-post-thumbnail","hentry","tag-tutorials"],"acf":[],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v26.4 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Mathematics of the perceptron neuron model<\/title>\n<meta name=\"description\" content=\"Learn the mathematics of the perceptron neuron model (aka dense or fully connected), the most widely used neuron model in machine learning.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Mathematics of the perceptron neuron model\" \/>\n<meta property=\"og:description\" content=\"Learn the mathematics of the perceptron neuron model (aka dense or fully connected), the most widely used neuron model in machine learning.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/\" \/>\n<meta property=\"og:site_name\" content=\"Neural Designer\" \/>\n<meta property=\"article:modified_time\" content=\"2025-11-27T14:03:52+00:00\" \/>\n<meta property=\"og:image\" 
content=\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/perceptron-neuron-blog.webp\" \/>\n\t<meta property=\"og:image:width\" content=\"1200\" \/>\n\t<meta property=\"og:image:height\" content=\"628\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/webp\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:site\" content=\"@NeuralDesigner\" \/>\n<meta name=\"twitter:label1\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data1\" content=\"7 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"WebPage\",\"@id\":\"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/\",\"url\":\"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/\",\"name\":\"Mathematics of the perceptron neuron model\",\"isPartOf\":{\"@id\":\"https:\/\/www.neuraldesigner.com\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/perceptron-neuron-blog.webp\",\"datePublished\":\"2025-07-03T08:59:21+00:00\",\"dateModified\":\"2025-11-27T14:03:52+00:00\",\"description\":\"Learn the mathematics of the perceptron neuron model (aka dense or fully connected), the most widely used neuron model in machine 
learning.\",\"breadcrumb\":{\"@id\":\"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/#primaryimage\",\"url\":\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/perceptron-neuron-blog.webp\",\"contentUrl\":\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/perceptron-neuron-blog.webp\",\"width\":1200,\"height\":628},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/www.neuraldesigner.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Blog\",\"item\":\"https:\/\/www.neuraldesigner.com\/blog\/\"},{\"@type\":\"ListItem\",\"position\":3,\"name\":\"Mathematics of the perceptron neuron model\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/www.neuraldesigner.com\/#website\",\"url\":\"https:\/\/www.neuraldesigner.com\/\",\"name\":\"Neural Designer\",\"description\":\"Explanable AI Platform\",\"publisher\":{\"@id\":\"https:\/\/www.neuraldesigner.com\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/www.neuraldesigner.com\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/www.neuraldesigner.com\/#organization\",\"name\":\"Neural 
Designer\",\"url\":\"https:\/\/www.neuraldesigner.com\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\/\/www.neuraldesigner.com\/#\/schema\/logo\/image\/\",\"url\":\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/05\/logo-neural-1.png\",\"contentUrl\":\"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/05\/logo-neural-1.png\",\"width\":1024,\"height\":223,\"caption\":\"Neural Designer\"},\"image\":{\"@id\":\"https:\/\/www.neuraldesigner.com\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/x.com\/NeuralDesigner\",\"https:\/\/es.linkedin.com\/showcase\/neuraldesigner\/\"]}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Mathematics of the perceptron neuron model","description":"Learn the mathematics of the perceptron neuron model (aka dense or fully connected), the most widely used neuron model in machine learning.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/","og_locale":"en_US","og_type":"article","og_title":"Mathematics of the perceptron neuron model","og_description":"Learn the mathematics of the perceptron neuron model (aka dense or fully connected), the most widely used neuron model in machine learning.","og_url":"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/","og_site_name":"Neural Designer","article_modified_time":"2025-11-27T14:03:52+00:00","og_image":[{"width":1200,"height":628,"url":"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/perceptron-neuron-blog.webp","type":"image\/webp"}],"twitter_card":"summary_large_image","twitter_site":"@NeuralDesigner","twitter_misc":{"Est. 
reading time":"7 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"WebPage","@id":"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/","url":"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/","name":"Mathematics of the perceptron neuron model","isPartOf":{"@id":"https:\/\/www.neuraldesigner.com\/#website"},"primaryImageOfPage":{"@id":"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/#primaryimage"},"image":{"@id":"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/#primaryimage"},"thumbnailUrl":"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/perceptron-neuron-blog.webp","datePublished":"2025-07-03T08:59:21+00:00","dateModified":"2025-11-27T14:03:52+00:00","description":"Learn the mathematics of the perceptron neuron model (aka dense or fully connected), the most widely used neuron model in machine 
learning.","breadcrumb":{"@id":"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/"]}]},{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/#primaryimage","url":"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/perceptron-neuron-blog.webp","contentUrl":"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/06\/perceptron-neuron-blog.webp","width":1200,"height":628},{"@type":"BreadcrumbList","@id":"https:\/\/www.neuraldesigner.com\/blog\/perceptron-the-main-component-of-neural-networks\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/www.neuraldesigner.com\/"},{"@type":"ListItem","position":2,"name":"Blog","item":"https:\/\/www.neuraldesigner.com\/blog\/"},{"@type":"ListItem","position":3,"name":"Mathematics of the perceptron neuron model"}]},{"@type":"WebSite","@id":"https:\/\/www.neuraldesigner.com\/#website","url":"https:\/\/www.neuraldesigner.com\/","name":"Neural Designer","description":"Explanable AI Platform","publisher":{"@id":"https:\/\/www.neuraldesigner.com\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/www.neuraldesigner.com\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},{"@type":"Organization","@id":"https:\/\/www.neuraldesigner.com\/#organization","name":"Neural 
Designer","url":"https:\/\/www.neuraldesigner.com\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/www.neuraldesigner.com\/#\/schema\/logo\/image\/","url":"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/05\/logo-neural-1.png","contentUrl":"https:\/\/www.neuraldesigner.com\/wp-content\/uploads\/2023\/05\/logo-neural-1.png","width":1024,"height":223,"caption":"Neural Designer"},"image":{"@id":"https:\/\/www.neuraldesigner.com\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/x.com\/NeuralDesigner","https:\/\/es.linkedin.com\/showcase\/neuraldesigner\/"]}]}},"_links":{"self":[{"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/blog\/3411","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/blog"}],"about":[{"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/types\/blog"}],"author":[{"embeddable":true,"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/users\/13"}],"version-history":[{"count":1,"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/blog\/3411\/revisions"}],"predecessor-version":[{"id":21401,"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/blog\/3411\/revisions\/21401"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/media\/1730"}],"wp:attachment":[{"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/media?parent=3411"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/categories?post=3411"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.neuraldesigner.com\/api\/wp\/v2\/tags?post=3411"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}