
Deep Learning – Neural Networks, Take 2!

  February 4th, 2022

The goal of our project is to identify the best security to trade using an existing trading engine.  In short, the key deliverable from our project will be code which selects the best of 18 candidate leveraged ETFs to trade over the ensuing month (based primarily on volatility and momentum).  The existing momentum-based trading engine will then initiate trades based on shorter-term variations in momentum indicators.
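
For a concrete sense of what that selection step might look like, here is a minimal sketch in Python using pandas.  The scoring rule (trailing return divided by annualized volatility), the lookback window, and the data layout are all my own assumptions for illustration, not the engine's actual logic.

```python
import numpy as np
import pandas as pd

def rank_candidates(prices: pd.DataFrame, lookback: int = 63) -> pd.Series:
    """Rank candidate ETFs by momentum relative to volatility.

    `prices` is assumed to hold daily closing prices, one column per ticker.
    The score (trailing return / annualized volatility) is a placeholder
    metric, not the project's actual selection rule.
    """
    window = prices.tail(lookback)
    momentum = window.iloc[-1] / window.iloc[0] - 1.0        # trailing return
    volatility = window.pct_change().std() * np.sqrt(252)    # annualized
    return (momentum / volatility).sort_values(ascending=False)

# Hypothetical usage: choose the top-ranked ticker for the coming month.
# prices = pd.read_csv("etf_closes.csv", index_col=0, parse_dates=True)
# best_ticker = rank_candidates(prices).index[0]
```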

To this end, we intend to rely upon machine learning methods, potentially including deep learning.  Deep learning is a technique which has gained enormous momentum in adoption during the past five years.  It is based on neural networks, which are, to an extent, patterned after the way the brain learns.  I find this interesting because I remember neural networks generating excitement during the 1990s, after which they fell out of favor for an extended period of time.  In fact, I once proposed such a solution to a mechanical engineering problem when I was completing my first degree, to which a professor whom I respected a great deal responded, “What you’re describing is a neural network, and they don’t work in practice.”

As such, it was a bit of a revelation to me when I discovered a year or so ago that deep learning is essentially a modern incarnation of methods which were previously dismissed by a very competent community.  I have held an interest in learning about deep learning for quite some time, but this will be the first time that I have had an opportunity to get any hands-on experience.  Now that I am beginning preparation for an implementation, I have been wondering why neural networks fell out of favor and how there has been such a resurgence with deep learning recently.  Following is a quick summary of what I have found.

What is a neural network?

To greatly oversimplify, a neural network is a mesh of nodes interconnected by weights and thresholds.  When the weighted input to a node meets its specified threshold, the node triggers an output to nodes in the subsequent layer.  A neural network is composed of at least three layers: an input layer, an output layer, and one or more intermediate layers which perform the processing.  The name harkens back to the fact that this architecture mimics the processing structure of biological neurons in the brain.  The benefit of a neural network is that it is flexible and can be applied to a wide range of image- and pattern-recognition problems.  With repeated use and exposure to more data, the network can be optimized (i.e. it can “learn”) to perform better.
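
To make the node-weight-threshold picture concrete, here is a toy forward pass in Python with NumPy.  The layer sizes, the ReLU activation, and the random weights are arbitrary choices for illustration; “learning” would consist of adjusting the weights and biases so that the outputs better match the training data.

```python
import numpy as np

def relu(x):
    """Activation: a node 'fires' only when its weighted input clears zero."""
    return np.maximum(0.0, x)

def forward(x, weights, biases):
    """One pass through a tiny fully connected network.

    Each layer multiplies its inputs by a weight matrix, shifts by a bias
    (the 'threshold'), and applies an activation before handing the result
    to the next layer.
    """
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)
    return x

rng = np.random.default_rng(0)

# Three layers: 4 inputs -> 8 hidden nodes -> 2 outputs (sizes are arbitrary).
sizes = [(4, 8), (8, 2)]
weights = [rng.normal(size=s) for s in sizes]
biases = [np.zeros(s[1]) for s in sizes]

print(forward(rng.normal(size=4), weights, biases))
```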

Why did neural networks fall out of favor?

The general concept and approach of neural networks have not changed much since the last century.  This raises the question: why did they fall out of favor then, and why the resurgence now?  The simple answer is that several prerequisites for widespread adoption have only been met within the past 20 years.

One factor is the adequacy of available resources, particularly processing and storage.  Deep learning applications are compute-intensive, and processors only reached the performance and cost point required for widespread application within the past 20 years.  The recent wave of virtualization (including cloud computing) has further accelerated this trend.

A second factor is the extent of data available.  Widespread adoption of the internet and the World Wide Web has driven creation and ingestion of a volume of data which was virtually unimaginable in the 1990s.  Moreover, through its intended use, much of this data is prequalified in such a way that it can be readily used for training and testing models.  It has also made possible the adoption of various applications which can benefit from machine learning, thus creating both the opportunity and the demand for neural networks.

What triggered the recent explosion in deep learning interest?

The key development most responsible for the surge in deep learning’s popularity was the success of AlphaGo.  Go is a game in which two players alternate turns placing colored stones on a gridded game board with the goal of surrounding their opponent’s stones or areas on the board.  Although conceptually straightforward, the game has a vast number of potential permutations and requires layers of strategy which far exceed those of chess.  Although computers had long since become competitive with the best human chess players, most experts did not believe that a computer would ever achieve a sufficient level of intuition to defeat a top human Go player.

AlphaGo achieved this feat using a deep learning approach, decisively defeating the reigning European champion in 2015.  The accomplishment was topped in 2017, when an updated version, AlphaGo Zero, defeated the original 100 games to none.  This was particularly noteworthy because AlphaGo Zero learned entirely through self-play, without training on records of human games.  Collectively these achievements highlighted the potential of a previously discarded technique, triggering a virtual gold rush of adoption and implementation.

It is interesting to note that although the values on which the system’s decisions are based are recorded and available to researchers, not all of them are fully interpretable by humans.  This is also true of other problems (such as protein folding) to which deep learning has been successfully applied.  Thus the implication would seem to be that this technique has enabled computers to tease out certain actionable patterns which are currently beyond our ability to comprehend.
