
Crazy Ivan!

  January 28th, 2022

(For anyone who may be wondering, a “Crazy Ivan” is a sharp maneuver for adjusting the direction of a ship or submarine.  It became part of the broader pop culture after a reference in the movie The Hunt for Red October.)

This week we learned that we would need to make a bit of a course correction in our project plan based on input from our sponsor (hence the title of this post).  We had met several times to discuss the project, and we were aware that our work would have to interface with existing code, which we received last weekend.  However, when we reviewed that code, we realized that it functioned a bit differently than we had anticipated, which helped to surface some larger gaps in expectations.  As such, we held a quick meeting with our sponsor on Wednesday to confirm our assessment and discuss the path forward.

What is Changing?

Our initial understanding of the project was that we would seek to algorithmically identify a small group of securities suitable for trading based on a momentum strategy.  To this end, we were planning to develop functions that would measure volatility and momentum over the preceding time period, which would in turn be used to predict the relative strength of each asset during the upcoming period.  From this data, we understood that existing code would construct a portfolio and initiate Buy/Sell orders, implementing the selection as a portfolio to be bought and held for a period of months.  As noted above, though, the existing code functions a bit differently than we had anticipated, which surfaced some other gaps.
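
As a rough illustration of the kind of indicator functions we had in mind (the window lengths and formulas below are placeholders we chose for this sketch, not final design decisions), in Python:

```python
import numpy as np
import pandas as pd

def momentum(prices: pd.Series, lookback: int = 126) -> float:
    """Total return over the lookback window (one common definition)."""
    window = prices.iloc[-lookback:]
    return float(window.iloc[-1] / window.iloc[0] - 1.0)

def volatility(prices: pd.Series, lookback: int = 126) -> float:
    """Annualized standard deviation of daily log returns."""
    returns = np.log(prices.iloc[-lookback:]).diff().dropna()
    return float(returns.std() * np.sqrt(252))
```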

First, the intent of our project code is to select a single volatile security with increasing momentum from a specified universe of candidates.  The function of the existing code is to buy and sell this specified security repeatedly in response to shorter-term swings in momentum at a daily level of resolution.  Thus, our code might run once monthly to update the selection of the preferred asset for trading, but once the asset has been selected, the existing code may buy and sell shares multiple times in response to momentum fluctuations before the selection runs again (as opposed to our initial understanding that the portfolio would be bought and held until the next selection cycle).
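
Building on the indicator sketch above, the monthly selection step might reduce to something like the following (the equal weighting of the two scores here is purely a placeholder for the real selection logic):

```python
def select_security(universe: dict) -> str:
    """Pick the single preferred ticker from a universe of candidates.

    `universe` maps ticker -> daily price series.  Combining the two
    scores by simple addition is illustrative only.
    """
    scores = {
        ticker: momentum(prices) + volatility(prices)
        for ticker, prices in universe.items()
    }
    return max(scores, key=scores.get)
```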

Second, our code will also need to incorporate its own hypothetical performance from previous periods, under different asset selections, as an input to the selection logic for the upcoming period.  This input is in addition to the volatility and momentum indicators we were already planning to use, and it complicates the problem statement because we will have to run back tests for every asset in the defined universe across every time period we intend to consider as inputs for our selection algorithm.
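
In other words, the selection logic now needs a second kind of input: how the strategy itself would have done, hypothetically, for each candidate in each prior period.  A sketch of what that lookup might look like (the table layout and the numbers are dummy values we made up for illustration):

```python
# Illustrative only: dummy figures keyed by (ticker, period).
backtest_returns = {
    ("AAA", "2021-11"): 0.031,   # placeholder values, not real results
    ("AAA", "2021-12"): -0.012,
    ("BBB", "2021-11"): 0.008,
}

def hypothetical_performance(ticker: str, period: str) -> float:
    """Look up the strategy's simulated return for this asset/period."""
    return backtest_returns.get((ticker, period), 0.0)
```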

How Will This Affect the Project?

Below is the structure we had planned for our solution.

Figure 1 – Planned project structure

To elaborate on the Security Scoring Module from the diagram above, the following is a high-level overview of the logical process we had envisioned:

Figure 2 – Planned implementation of Security Scoring Module

However, based on a review of existing code and the meeting with our sponsor on Wednesday, we will need to make two adjustments to how the application will work.

First, instead of initiating one-time buy/sell signals to update the portfolio each time we choose a new security for trading, the Trade Signal Generator from Figure 1 above will actively generate multiple transactions for a given security selection (before the next update to the selection).  The disruption to our plans here is fairly minimal, since this behavior is already implemented in the existing code, which sits downstream from our area of focus.
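
To make the division of labor concrete, here is a toy stand-in for that downstream behavior (the 5-day swing window and threshold are our assumptions; the real Trade Signal Generator belongs to the existing code):

```python
import pandas as pd

def trade_month(prices: pd.Series, threshold: float = 0.0) -> list:
    """Emit repeated buy/sell signals within one selection cycle as
    short-term momentum flips sign (toy illustration only)."""
    signals = []
    holding = False
    swing = prices.pct_change(5)  # 5-day momentum swing (illustrative)
    for day, m in swing.dropna().items():
        if not holding and m > threshold:
            signals.append((day, "BUY"))
            holding = True
        elif holding and m < -threshold:
            signals.append((day, "SELL"))
            holding = False
    return signals
```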

The second adjustment will be a bit more disruptive.  Aside from the user-defined configuration, the logic depicted in Figure 2 could have been implemented solely on the basis of historical data from a third-party source.  However, based on the call with our sponsor on Wednesday, we now know that we will need to incorporate inputs derived from the hypothetical past performance of the algorithm itself under various asset selections.

Why Does This Complicate Things?

Hypothetical performance of an algorithm can be measured by “back testing,” which refers to the process of using historical data as inputs to simulate how the algorithm would have behaved if it had been running in the past.  This approach has limitations (such as susceptibility to overfitting and past data’s limited ability to predict the future), but it is often the best indication of likely performance one can get without paper trading or actual trading in a live environment.
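
In its simplest form, a back test is just a replay loop over historical data.  A toy sketch (real back-testing engines handle fills, fees, and slippage, which this ignores):

```python
import pandas as pd

def backtest(prices: pd.Series, signal) -> float:
    """Replay history day by day, holding the asset whenever `signal`
    (which sees only past data) says to.  Returns total simulated return."""
    equity = 1.0
    returns = prices.pct_change().dropna()
    for day, r in returns.items():
        history = prices.loc[:day].iloc[:-1]  # data up to the previous close
        if len(history) > 1 and signal(history):
            equity *= 1.0 + r
    return equity - 1.0

# Example: hold whenever the latest price is above its running mean.
# backtest(prices, lambda h: h.iloc[-1] > h.mean())
```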

We always intended to back test the algorithms we were considering for this project using QuantConnect Lean to get an indication of their possible performance.  QuantConnect is designed to run back tests efficiently and provide relatively detailed data about how an algorithm would have performed under the specified conditions.  However, we did not anticipate incorporating back tests of multiple hypothetical cases as inputs to our selection logic.  Doing so adds a layer of complexity because we are not aware of any way to efficiently trigger a suite of back tests and collect the results of each from our code.
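
For context, a single back test is straightforward to express in Lean; the hard part is programmatically launching many of them and harvesting the results.  A minimal example of the kind of algorithm one back tests on QuantConnect (the ticker, dates, and indicator settings are illustrative, not our actual strategy):

```python
from AlgorithmImports import *

class MomentumBacktest(QCAlgorithm):
    def Initialize(self):
        self.SetStartDate(2020, 1, 1)
        self.SetEndDate(2021, 12, 31)
        self.SetCash(100000)
        self.symbol = self.AddEquity("SPY", Resolution.Daily).Symbol
        # Six-month momentum-percent indicator (illustrative period).
        self.mom = self.MOMP(self.symbol, 126, Resolution.Daily)

    def OnData(self, data):
        if not self.mom.IsReady:
            return
        # Hold the asset while momentum is positive; otherwise exit.
        if self.mom.Current.Value > 0:
            self.SetHoldings(self.symbol, 1.0)
        else:
            self.Liquidate(self.symbol)
```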

What is the Solution?

Given enough time, we could write our own code to simulate the back-testing functionality in QuantConnect.  However, since the primary focus of this project is not to reinvent existing software, we agreed with our sponsor that a more valuable use of our time would be to collect the results from a defined suite of back tests manually, save them in a table, and then import that data into our algorithm for reference as needed.
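
The import side of that plan could be quite simple.  A sketch, assuming the manually collected results live in a CSV with ticker, period, and return columns (the file name and column names are our placeholders):

```python
import pandas as pd

def load_backtest_table(path: str = "backtest_results.csv") -> dict:
    """Load manually collected back-test results into a lookup keyed
    by (ticker, period).  Assumes columns: ticker, period, return."""
    table = pd.read_csv(path)
    return {
        (row["ticker"], row["period"]): row["return"]
        for _, row in table.iterrows()
    }
```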

What Did We Learn?

Frankly, this was mostly a success story.  Although our team did not correctly understand all of the project requirements initially, we realized there could be some confusion, requested the existing code for review as a means of identifying any gaps, and proactively discovered those gaps early in the project.  Our first lesson learned is to keep modeling this behavior: stay attentive to possible expectation gaps and drive to clarity as quickly as possible.

One root cause of the miscommunication was imprecise and inconsistent use of specific terms from the finance industry.  It merits noting for future reference that this can be a common source of miscommunication, especially when a group is relatively new and stakeholders have not had an opportunity to align thoroughly on their use of jargon.

