{"id":35,"date":"2022-01-27T19:28:53","date_gmt":"2022-01-27T19:28:53","guid":{"rendered":"https:\/\/blogs.oregonstate.edu\/mutex42\/?p=35"},"modified":"2022-01-27T19:40:51","modified_gmt":"2022-01-27T19:40:51","slug":"learning-about-neural-networks","status":"publish","type":"post","link":"https:\/\/blogs.oregonstate.edu\/mutex42\/2022\/01\/27\/learning-about-neural-networks\/","title":{"rendered":"Learning about Neural Networks"},"content":{"rendered":"\n<p>Neural Networks, and machine learning in general, come with a lot of scary, mathematical-sounding terminology. But after studying them for a few weeks, I\u2019ve found they are actually quite easy to understand once you get past the vocabulary. This week\u2019s topic is Neural Networks, and how science can imitate nature to create some awesome things!<\/p>\n\n\n\n<p>Artificial Neural Networks (referred to here as NNs) are a supervised machine learning model inspired by biological neural networks. They are made up of a collection of nodes called artificial neurons, which loosely model the neurons of a biological brain. Each artificial neuron can receive a signal, process it, and then signal the other neurons connected to it. In a machine learning sense, this \u2018signal\u2019 is a real number, and the output of each neuron is computed by some linear or non-linear function of the sum of its inputs. Each connection is called an edge, and each edge has a weight which is adjusted as the NN model is trained.<\/p>\n\n\n\n<figure class=\"wp-block-image size-full\"><img loading=\"lazy\" decoding=\"async\" width=\"640\" height=\"252\" src=\"https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/5122\/files\/2022\/01\/640px-MultiLayerNeuralNetworkBigger_english.png\" alt=\"\" class=\"wp-image-39\" 
srcset=\"https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/5122\/files\/2022\/01\/640px-MultiLayerNeuralNetworkBigger_english.png 640w, https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/5122\/files\/2022\/01\/640px-MultiLayerNeuralNetworkBigger_english-300x118.png 300w\" sizes=\"auto, (max-width: 640px) 100vw, 640px\" \/><figcaption>Photo by Chrislb from Wikimedia Commons\nhttps:\/\/commons.wikimedia.org\/wiki\/File:MultiLayerNeuralNetwork_english.png<\/figcaption><\/figure>\n\n\n\n<p>NN models are typically architected as a collection of neurons grouped into layers. The layers are organized from left to right, with the left-most layer called the input layer and the right-most layer called the output layer. The input layer represents the input feature set, containing one neuron for each feature. The output layer represents the output generated by the learning model; for a classification model, it typically contains one neuron for each possible classification output, along with an attached probability of that output.<\/p>\n\n\n\n<p>The layers in the middle are known as the \u2018hidden layers\u2019. They comprise most of the neurons in the model and are where the data manipulation that produces the desired output happens. Each neuron in a layer receives an input from every neuron in the previous layer and provides its output to every neuron in the next layer. Each neuron takes the mass of input data, passes it through a mathematical \u2018activation function\u2019, and then passes this output to the next layer.<\/p>\n\n\n\n<p>The activation function helps the NN learn complex patterns in the data; during training, each neuron\u2019s weights are adjusted based on the effective rate of change they have on the output. These functions can be linear or non-linear, depending on whether the NN model is supposed to generate a continuous output (e.g. predicting a stock price) or a discrete one (e.g. deciding whether a picture shows an apple). A commonly used activation function is the Sigmoid Function, which returns an output between 0 and 1.<\/p>\n\n\n\n<p>As the activation function helps consume inputs and the weights are adjusted between layers, you may be wondering how the initial set of weights for each input feature is selected. They can be randomly selected at first, to let the model consume a first series of information. These initial input weights are then adjusted via one of several methods.<\/p>\n\n\n\n<p>If you can represent the problem you are trying to solve via a differentiable objective function, then you can use a process known as \u2018back propagation\u2019, which retrains the neural network model by optimizing the input weights in the way that most reduces error at the output layer. A typical back propagation method might use something like a mean-square error calculation to find how the input weights should change to reduce error.<\/p>\n\n\n\n<p>If your objective function is non-differentiable, then things get a bit more complicated. One of the most well-known solutions to this issue is to use a genetic algorithm to train the NN. A genetic algorithm begins by creating a set of randomly selected input weights (a set, meaning it will train multiple versions of your Neural Network). It then checks the accuracy of each NN, selects the best results, and generates another set of inputs via some mutation \/ splicing of inputs from the best results (essentially creating the \u2018next\u2019 generation of inputs from a set of parent inputs). Genetic algorithms follow an evolutionary model to form an optimized solution, which I think is pretty awesome.<\/p>\n\n\n\n<p>Another method is reinforcement training, a training model which mimics how real-life learning occurs. Essentially, the reinforcement model begins by selecting random sets of inputs until it gets a decent result. It then \u2018learns\u2019 from that input and generates an informed set of inputs based on that result. This cycle continues until it produces an optimized solution. It acts similarly to the genetic algorithm, but is based on reinforcing \u2018correct\u2019 guesses at the optimized solution.<\/p>\n\n\n\n<p>I\u2019ll be implementing a neural network trained via a genetic algorithm over the next few weeks. The goal of this neural net will be to create a way to visualize some of the transformations happening in the hidden layers. I\u2019ll check in with you all shortly with a project report! Thanks for reading.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Neural Networks, and machine learning in general, come with a lot of scary, mathematical-sounding terminology. But after studying them for a few weeks, I\u2019ve found they are actually quite easy to understand once you get past the vocabulary. This week\u2019s topic is Neural Networks, and how science can imitate nature to create some&hellip; <a class=\"more-link\" href=\"https:\/\/blogs.oregonstate.edu\/mutex42\/2022\/01\/27\/learning-about-neural-networks\/\">Continue reading <span class=\"screen-reader-text\">Learning about Neural 
Networks<\/span><\/a><\/p>\n","protected":false},"author":11967,"featured_media":38,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-35","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-uncategorized","entry"],"_links":{"self":[{"href":"https:\/\/blogs.oregonstate.edu\/mutex42\/wp-json\/wp\/v2\/posts\/35","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.oregonstate.edu\/mutex42\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.oregonstate.edu\/mutex42\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/mutex42\/wp-json\/wp\/v2\/users\/11967"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/mutex42\/wp-json\/wp\/v2\/comments?post=35"}],"version-history":[{"count":13,"href":"https:\/\/blogs.oregonstate.edu\/mutex42\/wp-json\/wp\/v2\/posts\/35\/revisions"}],"predecessor-version":[{"id":51,"href":"https:\/\/blogs.oregonstate.edu\/mutex42\/wp-json\/wp\/v2\/posts\/35\/revisions\/51"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/mutex42\/wp-json\/wp\/v2\/media\/38"}],"wp:attachment":[{"href":"https:\/\/blogs.oregonstate.edu\/mutex42\/wp-json\/wp\/v2\/media?parent=35"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/mutex42\/wp-json\/wp\/v2\/categories?post=35"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/mutex42\/wp-json\/wp\/v2\/tags?post=35"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}