Oregon State University|blogs.oregonstate.edu

Google Video and Perspective on Development Team Dynamics

  January 21st, 2022

Our Week 2 module included a Google video entitled “The Myth of the Genius Programmer,” which really resonated with me, and I found myself still thinking back to it this week.  The key underlying tenet of the discussion was that epic achievements in software development tend to result from the collaboration of high-functioning teams rather than from the genius of a single individual working in isolation.  The conclusion from this revelation is that the ability to improve a team’s performance is actually a more important skill for success than extraordinary individual talent.

By extension, the speakers also discussed that failure and learning are natural parts of the development process, and therefore developers should embrace and plan for both.  Personally, I could not agree more, but I would argue that persuading leadership to provide an environment in which such behavior can flourish is often the bigger challenge.  This is especially true in the context of a large organization, where perverse incentives may push managers to act in a manner inconsistent with the goal of providing a healthy environment for development.

Failed tests, mistakes and questions which reveal a lack of knowledge are all important parts of the development process, and although the concerns which suppress such behavior may represent impediments to progress, that does not mean those concerns are unfounded.  Unfortunately, in their zeal to drive progress, raise issues or even just appear relevant, managers often take actions which perpetuate a team’s discomfort with working in a humble and transparent manner.

The Great Experiment

For the past several years I have worked for one of the large tech companies known for stressing results.  After repeatedly encountering teams which were reserved and risk-averse, I decided to focus on giving my team the kind of environment in which I would want to work and would feel free to do my best work (although this required some steps which were not consistent with my own leadership’s initial preferences).

I didn’t employ any magic formula; mostly it came down to being trustworthy and transparent.  The following are a few principles which I consider key.

Use commitments fairly

Even skilled, knowledgeable professionals may have difficulty projecting how long an unfamiliar task will take to complete.  This is particularly true in software development, where most tasks are somewhat unique and problems can be difficult to anticipate.  As a result, well-intentioned developers regularly discover that estimates which they shared in good faith are actually unachievable.

In contrast, program/project managers tend to favor certitude, which creates tension when planning is necessarily based on inexact inputs.  Unfortunately, there is a commonly repeated pattern in which program/project managers insist that team members provide schedule estimates, challenge any estimates they believe to be insufficiently aggressive, and then cite any miss as a failure indicative of poor performance.  Honest and transparent communication within a team is important, but it can only be achieved if commitments are treated fairly (for example, by not over-representing the strength of a commitment when relaying it to stakeholders).

Be transparent

In all aspects, it is important for a team leader to exhibit the behavior that he or she wants the team to adopt.  This is particularly true with regard to transparency.  On any team, I always make a point to air problems, issues and mistakes as openly as I possibly can.  I have heard a few opinions that this “lowers the bar” by making stakeholders comfortable with a lower level of achievement, but in my experience that is only the case if you have hired the wrong people.  Most people want to succeed together, but the reality is that things do not always go smoothly, and it is important to set the tone for a team to share such things openly without fear of judgment.

Ask dumb questions

Much of what I just said about transparency applies to this point also.  Most people don’t enjoy the appearance of ignorance, yet when working on something new, it is completely normal not to have a high level of knowledge about some of the subject matter.  As such, I always make a point to ask some fairly elementary questions to which I think I already know the answers.  These are mostly tailored to ensure that everyone on the team hears the answers and has an aligned understanding of the problem at hand.  These types of questions also set a tone which makes it more comfortable for other stakeholders to raise questions they might otherwise find embarrassing, and with fair regularity I receive an unexpected answer which refines or corrects my understanding of the topic.

Be positive and focus on solutions

Most people want to work in a positive, collaborative environment to find solutions to problems.  However, organizational pressures can tempt otherwise well-intentioned people to become harsh, critical or quick to assign blame, particularly when they themselves are uncomfortable or self-conscious about some issue.  One of the best ways to avoid this mentality is to always focus the conversation on solutions.  Discussion of mistakes or failures is only productive to the extent that it provides some learning which helps us understand the problem to be solved.  When people are confident that they will not be attacked for missteps, they surface problems faster so that the entire team can work on them, and they waste less time trying to defend themselves.

Recognize achievement

It seems simple, but many managers fail to recognize the achievements of the team and its members.  This recognition is important, particularly because it demonstrates that the success of stakeholders matters.  Where possible, I communicate failures as team failures, without calling out specific individuals to an outside audience.

Results of the Great Experiment

Overall, results exceeded my expectations.  Initially, team members seemed surprised or even suspicious when I introduced a way of working together which was different from what they had seen previously.  However, after a few weeks, virtually everyone embraced the approach enthusiastically.  Meetings took on a positive and supportive tone in which people stopped blaming other stakeholders for issues and focused on solutions.  Everyone on the team felt a strong sense of support from each other, with the result that missteps and problems (even those which might be embarrassing in a more typical environment) came to the forefront almost eagerly.  People were no longer afraid to take a calculated risk of failure, and when a test didn’t work out, the entire team would have a good-natured laugh together and then immediately fly into finding a solution.

Although it is difficult to provide a specific metric, I did note that initiatives were completed faster and with fewer misses once the team had embraced the open style of collaboration.  In one case, the team delivered a key strategic win which leadership had assessed as having less than a two percent probability of success at the outset.  More importantly to me, people were free to deliver their best work with joy, and ours became a team that people tended to gravitate toward.

However, if I am totally honest about the results, I also have to note that our leadership decided that although the team was functioning at a very high level, we could get more out of them by taking a tougher stance to hold people accountable.  In the short term this produced more aggressive schedule plans but no improvement in actual delivery.  In the longer term, it resulted in a loss of trust and a return to a mode of work resembling what it had been before I took over the team.  Which brings me back to my original point… it is great to instill an appreciation for transparency and openness in developers, but this will only succeed if we also provide an environment in which people are allowed to work in such a manner and to flourish by doing so.



Algorithmic Trading Strategies with Machine Learning

  January 15th, 2022

What is Algorithmic Trading?

For millennia, people have invested capital in productive endeavors in the hope of earning a return on their investment.  Over time, standards of financial accounting emerged, which provided insight into the operation and performance of such enterprises.  Eventually markets were established in which investors could buy and sell financial assets based upon their assessments of fundamental value.  These innovations have yielded a mechanism by which entrepreneurs seeking capital may coordinate with willing investors to allocate resources in a very efficient way.

Even a few decades ago these markets were driven by mostly manual analysis of financial statements and other inputs which might reveal the fundamental value of the underlying asset.  However, the abundance of market data which is now available in real time lends itself to a plethora of automated trading strategies.  Recent assessments indicate that more than 60 percent of the total trading volume on US public markets is now driven by algorithmic trading.

The most famous form of algorithmic trading is probably high frequency trading (HFT), in which market participants seek a slight edge on competitors (often measured in milliseconds) by exploiting small, transient inefficiencies in the market.  In order to beat competitors to these opportunities, the major players implement fairly complex rules which are often executed on platforms located as close to exchanges as possible in the interest of speed.  This has sparked a FinTech arms race of sorts as market participants compete to secure the best locations, fastest equipment and most efficient algorithms.  However, like any arms race, this obsession with the fastest execution carries a certain level of risk.

Perhaps the best example of this is the Flash Crash of 2010.  At approximately 2:32pm EDT on May 6, a large trader initiated the sale of a single type of asset valued in excess of $4 billion.  This transaction was large enough to look like a trend to various algorithms, which started trading in a dynamically unstable manner.  During the ensuing 36 minutes, the market logged its second largest intraday swing in history, even though there was no rational motivation for such behavior.  In the end, more than a trillion dollars of market value was temporarily wiped out, and both the public and regulators became aware of the potential risks of algorithmic trading.

Figure 1 – Dow Jones tracker from May 6, 2010

Momentum Trading

Our project will not enter the fray of HFT, but rather will seek to capitalize on the more stable strategy of momentum investing.  In short, momentum trading recognizes that stocks which have been outperforming the market during the past several months tend to continue doing so over an ensuing period of similar duration.  Similar logic applies to stocks which have been underperforming the market.  In truth, the correlation only prevails a little more than 50 percent of the time, but the distribution of outcomes tends to be skewed towards larger winners than losers, which often leads to momentum strategies outperforming the market overall.
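The momentum signal itself is simple to compute.  As a rough sketch (the function name and the synthetic price series are my own illustration, not part of any existing module), trailing momentum can be measured as the total return over a lookback window:

```python
import numpy as np

def momentum(prices, lookback):
    """Trailing momentum: total return over the last `lookback` observations."""
    if len(prices) <= lookback:
        raise ValueError("not enough price history")
    return prices[-1] / prices[-1 - lookback] - 1.0

# Synthetic daily closes, purely for illustration
closes = np.array([100.0, 101.0, 103.0, 102.0, 105.0, 108.0])
print(momentum(closes, 5))  # 108/100 - 1, i.e. an 8% trailing return
```

A positive value flags an asset that has been outperforming over the window; a negative value flags an underperformer.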

Several studies have shown that momentum investing is one of the few strategies which regularly delivers better returns than the market as a whole, although there are multiple opinions and no consensus about why the viability of this method persists. According to the efficient market theory, once a market bias is understood, it should be exploited and thus disappear, but this has not been the case with momentum-based trading. Several theories about the continuing success of momentum methods seem to revolve around persistent behavioral biases of market participants.

Universe Selection

Trying to apply an optimization strategy to the market as a whole can be daunting at best and untenable at worst due to the sheer volume of potential investments available.  As such, it is typical to apply some coarse screening criteria to identify a manageable quantity of qualified investments which merit further analysis.  This group of pre-qualified assets is considered the investable universe, and deeper analysis will be applied to these assets in order to generate a list of trades.

Our project will focus predominantly on universe selection and will integrate with existing modules which generate buy and sell signals based on a broader automated strategy.  We will mostly seek to choose risky (i.e., volatile) assets which have demonstrated favorable momentum over the preceding period.  We will also seek to determine an optimal time window to evaluate, but our initial estimate (based largely on our sponsor’s experience) is that the optimal window will be approximately 3 to 6 months.
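To make the screening step concrete, here is a minimal sketch of how such a screen might work (the function, tickers and toy data are illustrative assumptions, not our sponsor’s actual code): rank assets by trailing momentum, but keep only those whose recent volatility is at or above the median for the group, since we want risky movers.

```python
import numpy as np

def select_universe(price_history, lookback=126, top_n=3):
    """Pick the top_n tickers by trailing momentum, restricted to assets
    whose daily-return volatility is at or above the group median.

    price_history: dict of ticker -> 1-D array of daily closes.
    lookback of ~126 trading days approximates a 6-month window.
    """
    stats = {}
    for ticker, closes in price_history.items():
        window = closes[-(lookback + 1):]
        rets = np.diff(window) / window[:-1]            # daily simple returns
        stats[ticker] = (window[-1] / window[0] - 1.0,  # trailing momentum
                         rets.std())                    # realized volatility

    median_vol = np.median([vol for _, vol in stats.values()])
    risky = {t: mom for t, (mom, vol) in stats.items() if vol >= median_vol}
    return sorted(risky, key=risky.get, reverse=True)[:top_n]

# Toy three-asset example: "A" trends up, "B" is flat, "C" trends down
history = {
    "A": 100 * np.cumprod(1 + np.tile([0.03, -0.01], 100)),
    "B": 100 * np.cumprod(1 + np.full(200, 0.001)),
    "C": 100 * np.cumprod(1 + np.tile([-0.03, 0.01], 100)),
}
print(select_universe(history, top_n=2))  # ['A', 'C']: the volatile movers, ranked by momentum
```

The flat, low-volatility asset is screened out even though its momentum is positive; among the remaining volatile assets, the strongest trailing performer ranks first.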

Asset Selection Criteria

For the first version, we will choose from roughly ten frequently traded ETFs by assessing growth and volatility over the preceding period.  Using this model, we will develop a module and integrate it with existing software which generates actual buy and sell signals for assets from the selected universe.  We will then seek more complex and nuanced logic which will provide better performance.

We will assess our code’s performance based on its ability to choose risky assets with strong returns.  For the initial version, with approximately ten assets to choose from, we will check performance over a subsequent period for each of the ten available assets and then see how the preferred asset (based on the criteria from our algorithm) performed in comparison to the others.  This will give only a coarse, directional indication, but we will look to tailor a more sophisticated testing regime before implementing more complex (and hopefully more effective) strategies.
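One minimal way to express that check in code (the helper and the forward-return figures below are hypothetical, not real backtest results): rank the realized forward returns of all candidates and see where our pick landed among them.

```python
import numpy as np

def rank_of_pick(forward_returns, pick):
    """Rank of our chosen asset among all candidates over the evaluation
    period.  Rank 1 means it delivered the highest forward return.

    forward_returns: dict of ticker -> realized return over the test window.
    """
    ordered = sorted(forward_returns, key=forward_returns.get, reverse=True)
    return ordered.index(pick) + 1

# Hypothetical forward returns for a ten-asset universe
fwd = {f"ETF{i}": r for i, r in enumerate(
    [0.04, -0.02, 0.07, 0.01, 0.03, -0.05, 0.02, 0.06, 0.00, 0.05])}
print(rank_of_pick(fwd, "ETF2"))  # ETF2 returned 7%, the best of the ten, so rank 1
```

Tracking this rank across many evaluation windows would give exactly the coarse, directional signal described above: a selection algorithm that works should land its picks near rank 1 more often than chance would predict.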



carlton.background

  January 3rd, 2022

I was born and raised in Tennessee.  Don’t bother asking which city, because there really wasn’t one, but I still consider Tennessee to be my main home even though I have been away for 15 years.

When I was 14, I took a drafting class because my high school required me to take a vocational course.  To my amazement, I loved it!  It is fair to say that my first efforts met with mixed results, but I had a really awesome teacher who bore with me and encouraged me to stick with it.  As a result, I wound up taking drafting every year in high school and eventually won three state titles in individual competitions.

My enthusiasm for drafting led me to study mechanical engineering, which I also thoroughly enjoyed.  I had every intention of working as a mechanical engineer, but while I was in school a new contraption called the “World Wide Web” made its debut.  (Spoiler alert… it turned out to be a hit!)  During the dotcom boom there was such a shortage of IT talent that companies were hiring anyone with a cross-trainable skill set, so straight out of college I went to work on the road crew for a company that was building out its own national network.

I thought this would be a short-term adventure (much like traveling abroad for a summer or running away to join the circus), after which I would come back to reality and find gainful employment as an engineer.  However, after 4 promotions in 18 months, gainful employment had found me.  My responsibilities were ridiculously above my abilities at that time, but with a lot of effort I was able to grow into the role reasonably well and got to experience the full ride of the dotcom boom firsthand.

Unfortunately, I got to experience the dotcom bust as well, but the good news is that the bankruptcy court hired me to decommission and recover the entire network that we had just built.  Although less satisfying, this actually proved to be considerably more lucrative than building it had been.  (I also wound up inadvertently setting a personal record by visiting 41 US states in 40 days while we were consolidating assets).

After that, I continued working on one short-term project after another for a few years.  In 2006 I was working on a project in California, when I got a call asking if I could consult on a project in Ukraine.  I agreed, but wound up staying more than 5 years.  During that time, I helped prep a telecom company for sale, after which the new owners asked me to stay on to help launch and run the company.  It was also during my tenure in Ukraine that I began studying Finance seriously.  Collaborating with the C-suite officers on two equity events persuaded me that I needed to learn more in this domain, so I completed the Chartered Financial Analyst (CFA) program, which is generally more common for investment bankers than engineers.

In 2012 I returned to the US (with my wife, whom I had met in Ukraine) and went to work for Amazon, but we still try to go back to Ukraine each year and just bought a dacha (i.e., “country house”) outside of Kyiv.  With Amazon, I spent 5 years setting up fulfillment centers in the US, Canada and Australia, which had very little relevance to my prior experience.  In that role I mostly coordinated construction, conveyors, racking, robots and so forth.  This has been followed by 4 years setting up data centers for AWS, although the hyperscale data centers that support cloud computing have limited similarity to the ones I worked in straight out of college.

When I look back on my career path to date, I am reminded of a well-intentioned advisor who asked me shortly before graduation what I wanted to be doing in 10 years.  I informed him that if I was doing anything that I could possibly anticipate at the time, I would be sorely disappointed.  I am happy to report that I did not let myself down, since I knew nothing about wireless telecom, Ukraine or finance at the time.  As I move towards software projects, I believe I am likely to repeat this feat.  Advanced skills in computing can be a passport to virtually any domain imaginable.  Energy projects, economics, robotics… once again I have no idea what projects I may be working on in 10 years.

But I’m eager to find out.