{"id":14,"date":"2023-01-25T06:26:57","date_gmt":"2023-01-25T06:26:57","guid":{"rendered":"https:\/\/blogs.oregonstate.edu\/kendevoe\/?p=14"},"modified":"2023-01-25T06:26:57","modified_gmt":"2023-01-25T06:26:57","slug":"now-hiring-translator-options-trader","status":"publish","type":"post","link":"https:\/\/blogs.oregonstate.edu\/kendevoe\/2023\/01\/25\/now-hiring-translator-options-trader\/","title":{"rendered":"Now Hiring &#8211; Translator\/Options Trader"},"content":{"rendered":"\n<p>Language Translation = Stock Price Predictions?<\/p>\n\n\n\n<p>This last week I\u2019ve been researching different ways to generate stock price predictions using machine learning and was a little surprised to discover an overlap with language translation. These seem like very different things.<\/p>\n\n\n\n<p>For language translation you have terms like punctuation, nouns, verbs, subordinating conjunctions (I have no idea what that is either, just whatever you do not to confuse it with conjunctive adverbs). Whereas for stock prices you have well, numbers going up and down with things like trading volume and market open, close etc. Bottom line is these two fields don\u2019t appear to have a lot in common. 
How is it then that they share the same architecture of recurrent neural networks using encoder-decoders?<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/6431\/files\/2023\/01\/DALL\u00b7E-2023-01-25-14.49.59-An-oil-painting-of-a-robot-reading-a-book-while-drawing-stock-price-charts.-1024x1024.png\" alt=\"\" class=\"wp-image-15\" srcset=\"https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/6431\/files\/2023\/01\/DALL\u00b7E-2023-01-25-14.49.59-An-oil-painting-of-a-robot-reading-a-book-while-drawing-stock-price-charts..png 1024w, https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/6431\/files\/2023\/01\/DALL\u00b7E-2023-01-25-14.49.59-An-oil-painting-of-a-robot-reading-a-book-while-drawing-stock-price-charts.-300x300.png 300w, https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/6431\/files\/2023\/01\/DALL\u00b7E-2023-01-25-14.49.59-An-oil-painting-of-a-robot-reading-a-book-while-drawing-stock-price-charts.-150x150.png 150w, https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/6431\/files\/2023\/01\/DALL\u00b7E-2023-01-25-14.49.59-An-oil-painting-of-a-robot-reading-a-book-while-drawing-stock-price-charts.-768x768.png 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><em>DALLE 2 Image: Prompt \u201cAn oil painting of a robot reading a book while drawing stock price charts.\u201d Not exactly what I was going for, and where did the can of beer come from?<\/em><\/p>\n\n\n\n<p>Maybe it would help if we knew what a recurrent neural network with encoders and decoders was. 
I can\u2019t explain it much better than Andrej Karpathy (Stanford PhD, OpenAI founding member, Tesla AI Director, now YouTuber?): <a href=\"http:\/\/karpathy.github.io\/2015\/05\/21\/rnn-effectiveness\/\">The Unreasonable Effectiveness of Recurrent Neural Networks<\/a>, but I\u2019ll give a brief summary here. A recurrent neural network (RNN) is similar to your everyday neural network: a bunch of nodes connected together with some input and output. What makes it a little different is that instead of a fixed stack of layers feeding from the bottom to the top of the network, an RNN has one primary layer that feeds back into itself. This makes it useful for data that comes as a time series: each input word gets its own time step, and so does each Bitcoin price. RNNs are very well suited to ingest or produce time series data. Now for the part I find fascinating: encoders and decoders.<\/p>\n\n\n\n<p>An encoder-decoder architecture is basically two RNNs working together with different jobs. The first one does, you guessed it, the encoding. It takes a series of inputs, for example words or price points, and encodes them down into a vector of numbers. Just as your DNA contains the instructions and information about who you are, this little vector of numbers contains the essence of the phrase \u201cI\u2019m all ears\u201d or Bitcoin\u2019s latest nosedive last Friday. The second RNN, the decoder, then \u201cdecodes\u201d or unpacks this little vector. But here\u2019s the trick: instead of decoding the compressed vector that represents \u201cI\u2019m all ears\u201d back into English, we unpack it into Japanese: \u201c\u305c\u3072\u805e\u304d\u305f\u3044\u3067\u3059\u201d (via Google Translate, which seemed very appropriate for this blog). 
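To make that self-feeding layer concrete, here is a minimal sketch of a single RNN step in NumPy. Everything here is illustrative and untrained: the sizes, the random weights, and names like `hidden_size` are my own, not from any particular library.

```python
import numpy as np

rng = np.random.default_rng(0)
input_size, hidden_size = 3, 4

# One set of weights, reused at every time step -- this is the
# "one primary layer that just feeds back into itself."
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

sequence = rng.normal(size=(5, input_size))  # 5 time steps of input
h = np.zeros(hidden_size)                    # hidden state starts at zero
for x in sequence:
    # The new hidden state depends on the current input AND the previous state.
    h = np.tanh(W_xh @ x + W_hh @ h + b_h)

print(h.shape)  # the final hidden state summarizes the whole sequence
```

The same few lines handle a 5-step or a 500-step sequence, which is exactly why RNNs fit variable-length words and price histories alike.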
As it turns out, something very similar happens with stock prices: the encoder compresses historical price information into a vector and hands it off to the decoder, which then decompresses this nugget of information into future stock prices.<\/p>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/6431\/files\/2023\/01\/DALL\u00b7E-2023-01-25-14.42.49-An-oil-painting-of-a-robot-handing-a-wrapped-present-to-another-robot.-1024x1024.png\" alt=\"\" class=\"wp-image-16\" srcset=\"https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/6431\/files\/2023\/01\/DALL\u00b7E-2023-01-25-14.42.49-An-oil-painting-of-a-robot-handing-a-wrapped-present-to-another-robot..png 1024w, https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/6431\/files\/2023\/01\/DALL\u00b7E-2023-01-25-14.42.49-An-oil-painting-of-a-robot-handing-a-wrapped-present-to-another-robot.-300x300.png 300w, https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/6431\/files\/2023\/01\/DALL\u00b7E-2023-01-25-14.42.49-An-oil-painting-of-a-robot-handing-a-wrapped-present-to-another-robot.-150x150.png 150w, https:\/\/osu-wams-blogs-uploads.s3.amazonaws.com\/blogs.dir\/6431\/files\/2023\/01\/DALL\u00b7E-2023-01-25-14.42.49-An-oil-painting-of-a-robot-handing-a-wrapped-present-to-another-robot.-768x768.png 768w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p><em>DALLE 2 Image: Prompt \u201cAn oil painting of a robot handing a wrapped present to another robot.\u201d That\u2019s better.<\/em><\/p>\n\n\n\n<p>So what do language translation and stock price prediction have in common? For one, they are both sequential and well suited for RNNs: words come one after another, and so do stock prices. 
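That hand-off can be sketched as a toy encoder-decoder in NumPy: the encoder folds a price history into one context vector, and the decoder unrolls that vector into a few future predictions, feeding each prediction back in as the next input. Every name, weight, and number below is an illustrative assumption, not a trained model, so the "predictions" are meaningless; only the dataflow is the point.

```python
import numpy as np

rng = np.random.default_rng(1)
hidden_size = 8

# Untrained random weights -- illustrative only.
enc_Wx = rng.normal(scale=0.1, size=hidden_size)                # price -> hidden
enc_Wh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
dec_Wx = rng.normal(scale=0.1, size=hidden_size)
dec_Wh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
out_W = rng.normal(scale=0.1, size=hidden_size)                 # hidden -> price

def encode(prices):
    """Compress a price history into one context vector (the 'DNA')."""
    h = np.zeros(hidden_size)
    for p in prices:
        h = np.tanh(enc_Wx * p + enc_Wh @ h)
    return h

def decode(context, steps):
    """Unpack the context vector into a sequence of future prices."""
    h, x, preds = context, 0.0, []
    for _ in range(steps):
        h = np.tanh(dec_Wx * x + dec_Wh @ h)
        x = float(out_W @ h)  # each output becomes the next decoder input
        preds.append(x)
    return preds

history = [101.0, 102.5, 101.8, 103.2]
future = decode(encode(history), steps=3)
print(future)  # three (untrained, meaningless) predicted prices
```

Swap the scalar prices for word embeddings and the output layer for a softmax over a vocabulary, and the same skeleton is the translation model.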
And just as an original sentence is compressed and then decompressed into a new language, historical stock prices are compressed and decompressed into future stock prices. Not obvious, I\u2019ll admit, but I found it fascinating that these two tasks share an underlying architecture. Full disclosure: a newer architecture called the Transformer is taking over both of these domains, but the Transformer revolution in AI is a topic for another blog.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Language Translation = Stock Price Predictions? This last week I\u2019ve been researching different ways to generate stock price predictions using machine learning and was a little surprised to discover an overlap with language translation. These seem like very different things. For language translation you have terms like punctuation, nouns, verbs, subordinating conjunctions (I have no [&hellip;]<\/p>\n","protected":false},"author":13113,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1],"tags":[],"class_list":["post-14","post","type-post","status-publish","format-standard","hentry","category-uncategorized"],"_links":{"self":[{"href":"https:\/\/blogs.oregonstate.edu\/kendevoe\/wp-json\/wp\/v2\/posts\/14","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/blogs.oregonstate.edu\/kendevoe\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/blogs.oregonstate.edu\/kendevoe\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/kendevoe\/wp-json\/wp\/v2\/users\/13113"}],"replies":[{"embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/kendevoe\/wp-json\/wp\/v2\/comments?post=14"}],"version-history":[{"count":1,"href":"https:\/\/blogs.oregonstate.edu\/kendevoe\/wp-json\/wp\/v2\/posts\/14\/revisions"}],"predecessor-version":[{"id":17,"href":"https:\/\/blogs.oregonstate.edu\/kendevoe\/wp-json\/wp\/v2\/posts\/14\/revisions\/
17"}],"wp:attachment":[{"href":"https:\/\/blogs.oregonstate.edu\/kendevoe\/wp-json\/wp\/v2\/media?parent=14"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/kendevoe\/wp-json\/wp\/v2\/categories?post=14"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/blogs.oregonstate.edu\/kendevoe\/wp-json\/wp\/v2\/tags?post=14"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}