Now Hiring – Translator/Options Trader

Language Translation = Stock Price Predictions?

This last week I’ve been researching different ways to generate stock price predictions using machine learning and was a little surprised to discover an overlap with language translation. These seem like very different things.

For language translation you have terms like punctuation, nouns, verbs, subordinating conjunctions (I have no idea what those are either; just whatever you do, don't confuse them with conjunctive adverbs). Whereas for stock prices you have, well, numbers going up and down, along with things like trading volume and market open and close. The bottom line is these two fields don't appear to have a lot in common. How is it, then, that they share the same architecture of recurrent neural networks using encoder-decoders?

DALLE 2 Image: Prompt “An oil painting of a robot reading a book while drawing stock price charts.” Not exactly what I was going for, and where did the can of beer come from?

Maybe it would help if we knew what a recurrent neural network with encoders and decoders was. I can't explain it much better than Andrej Karpathy (Stanford professor, OpenAI founding member, Tesla AI director, now YouTuber?) does in The Unreasonable Effectiveness of Recurrent Neural Networks, but I'll give it a shot with a brief summary here. A recurrent neural network (RNN) is similar to your everyday neural network: a bunch of nodes connected together with some input and output. What makes it a little different is that instead of a fixed number of layers feeding from the bottom of the network to the top, an RNN has one primary layer that feeds back into itself. This is useful when your data is a time series: each input word, or each Bitcoin price, becomes its own time step passed through that same layer. RNNs are very well suited to ingesting or producing time series data. Now for the part I find fascinating: encoders and decoders.
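If the "one layer feeding back into itself" idea feels abstract, here is a minimal sketch of a vanilla RNN step in plain NumPy. The sizes and data are made up for illustration; the point is that the same weights are reused at every time step, and the hidden state loops back in as input to the next step.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_size, input_size = 16, 8  # toy sizes, chosen arbitrarily

W_xh = rng.standard_normal((hidden_size, input_size)) * 0.1   # input -> hidden
W_hh = rng.standard_normal((hidden_size, hidden_size)) * 0.1  # hidden -> hidden (the feedback loop)
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One time step: mix the new input with the previous hidden state."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Feed a toy sequence (think embedded words, or normalized prices) one step at a time.
sequence = rng.standard_normal((5, input_size))  # 5 time steps
h = np.zeros(hidden_size)
for x_t in sequence:
    h = rnn_step(x_t, h)  # same weights every step; h carries the "memory"

print(h.shape)  # (16,) -- one vector summarizing everything seen so far
```

That final hidden vector is exactly the kind of compressed summary the encoder hands off in the next section.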

An encoder-decoder architecture is basically two RNNs working together that have different jobs. The first one does, you guessed it, the encoding. It takes a series of inputs, for example words or price points, and encodes them down into a vector of numbers. Just as your DNA contains the instructions and information about who you are, this little vector of numbers contains the essence of the phrase "I'm all ears" or Bitcoin's latest nosedive last Friday. The second RNN, the decoder, then "decodes" or unpacks this little vector. But here comes the trick: instead of decoding the compressed vector that represents "I'm all ears" back into English, we unpack or decompress it into the Japanese "ぜひ聞きたいです" (courtesy of Google Translate, which seemed very appropriate for this blog). And as it turns out, something very similar happens with stock prices: the encoder compresses historical price information into a vector and hands it off to the decoder, which then decompresses that nugget of information into future stock prices.
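To make the hand-off concrete, here is a small PyTorch sketch of that encoder-decoder idea applied to prices. This is my own toy example, not code from the post: the model name, sizes, and the choice of GRU layers are all illustrative assumptions. The encoder squeezes a price history into one hidden vector, and the decoder unrolls that vector into a short forecast.

```python
import torch
import torch.nn as nn

class Seq2SeqForecaster(nn.Module):
    """Toy encoder-decoder: compress past prices, decompress into future prices."""

    def __init__(self, hidden_size=32, horizon=5):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.decoder = nn.GRU(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)  # hidden state -> predicted price

    def forward(self, history):
        # history: (batch, past_steps, 1) of normalized prices
        _, h = self.encoder(history)       # h is the compressed "DNA" vector
        x = history[:, -1:, :]             # start decoding from the last known price
        preds = []
        for _ in range(self.horizon):      # unroll the decoder one step at a time
            out, h = self.decoder(x, h)
            x = self.head(out)             # predicted next price becomes the next input
            preds.append(x)
        return torch.cat(preds, dim=1)     # (batch, horizon, 1)

model = Seq2SeqForecaster()
fake_history = torch.randn(4, 30, 1)       # 4 made-up series, 30 past steps each
forecast = model(fake_history)
print(forecast.shape)                      # torch.Size([4, 5, 1])
```

Swap the price series for a sequence of word embeddings and the decoder's output layer for a vocabulary, and you have essentially the translation version of the same machine.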

DALLE 2 Image: Prompt “An oil painting of a robot handing a wrapped present to another robot.” That’s better.

So what do language translation and stock price prediction have in common? Well, for one, they are both sequential and well suited to RNNs: words come one after another, and so do stock prices. And just as an original sentence is compressed and then decompressed into a new language, historical stock prices are compressed and decompressed into future stock prices. Not obvious, I'll admit, but I found it fascinating that these two tasks share an underlying architecture. Also, full disclosure: a new architecture called the Transformer is taking over both of these domains, but the Transformer revolution in AI is a topic for another blog.

Print Friendly, PDF & Email

Posted

in

by

Tags:

Comments

Leave a Reply

Your email address will not be published. Required fields are marked *