
Project Blog – 6

When selecting a project for my course, I decided on “Cloud-Based Algorithmic Trading Strategies for Individual Investors” primarily because I wanted to learn more about investing. My primary motivation wasn’t to pursue a career in finance but rather to enhance my personal financial knowledge. Given the various projects available, this one stood out as the most difficult, making it a potentially strong addition to my resume. I believed that working on a large-scale project in a structured environment would provide valuable experience and align well with my long-term goals of financial literacy and technical growth.

Unfortunately, this project has not met my expectations. I initially envisioned it as a significant undertaking, one that I could treat almost like a full-time job. However, the reality has been quite different: the project turned out to be relatively small in scope, lacking the depth and complexity I had hoped for. This has been a real disappointment, as I had anticipated a far more demanding experience.

Upon reflection, I regret not choosing to work independently on a project of my own design. Doing so would have allowed me to dedicate my time to topics that align more closely with my true interests, particularly physics and mathematics, rather than finance and investing. While I recognize the value of algorithmic trading knowledge, my passion lies in areas that explore fundamental mathematical principles and scientific concepts. Had I followed my instincts and pursued a project in one of these fields, I believe I would have found the experience far more fulfilling.

This experience has reinforced an important lesson: while building a resume matters, true satisfaction comes from working on projects that genuinely interest you. In the future, I will prioritize projects that align with my passions rather than selecting them solely based on external factors like career applicability.


Project Blog – 5

In working on the cloud-based algorithmic trading application, my focus has been on the data acquisition and data formatting services. Ultimately, this has little to do with investing or finance and much more to do with handling large amounts of data. I have integrated four technologies into my services: two for acquiring the data and two for transforming it.

Technologies

In gathering stock market data I have used two data sources, Yahoo Finance and Alpaca. Of the two, Alpaca has the more reliable recent data, making it my go-to service. Both are easy to use and to integrate into any system that needs market data; the real difficulty lies in combining them. In certain situations Alpaca is less reliable, so I have to merge the data from both sources to produce a more dependable output. Yahoo Finance has been my least favorite technology so far. If I could change one thing about it, it would be the structure of the returned data: it arrives as an untyped blob that I have to restructure myself.
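As a rough illustration of that merging step, here is a minimal pandas sketch that prefers Alpaca's closes and falls back to Yahoo Finance where Alpaca has gaps. It assumes the Alpaca bars have already been pulled into a date-indexed DataFrame with a `close` column, and uses the `yfinance` package for the Yahoo side; the names and details are illustrative, not the project's actual code.

```python
import pandas as pd
import yfinance as yf

def merged_closes(symbol: str, start: str, end: str,
                  alpaca_bars: pd.DataFrame) -> pd.Series:
    """Prefer Alpaca closes; fill any missing dates from Yahoo Finance."""
    # yfinance's Ticker.history returns a date-indexed OHLCV DataFrame
    yahoo = yf.Ticker(symbol).history(start=start, end=end)
    yahoo_close = yahoo["Close"].rename("close")
    # drop the timezone and time-of-day so both indexes line up on plain dates
    yahoo_close.index = yahoo_close.index.tz_localize(None).normalize()

    # assumed: alpaca_bars is already indexed by naive dates
    alpaca_close = alpaca_bars["close"]

    # combine_first keeps Alpaca's value wherever one exists and takes
    # Yahoo Finance's value only for the remaining dates
    return alpaca_close.combine_first(yahoo_close)
```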

In looking into different data sources, I have also tried the Polygon API, Interactive Brokers' API, and EODHD's API. All of these are easy to implement as well; the difficulties mainly come up in the design process. These sources do not offer free data collection at scale: most of them start charging after fewer than 100 requests in a month. In designing my program I considered building a database to reuse collected data, but that would have been much more costly for the overall project.

Moving past data collection technologies, I have had to cleanse, format, and interact with the data using libraries built for the job: pandas and NumPy. NumPy has been my favorite by far, as I am a math-oriented person and enjoy performing mathematical operations; I hope to use it in future projects involving more rigorous calculations. Pandas was easy to use as well and made formatting straightforward: I needed fewer than ten lines of code to shape my data any way I wanted.
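As a small taste of the kind of work these two libraries handled, here is a hypothetical example (made-up prices, not project data) that applies a NumPy ufunc to a pandas Series to turn raw closes into log returns and an annualized volatility figure:

```python
import numpy as np
import pandas as pd

# Hypothetical daily closing prices for one symbol
prices = pd.Series([101.2, 102.8, 101.9, 103.5, 104.1],
                   index=pd.date_range("2024-01-02", periods=5, freq="B"),
                   name="close")

# np.log applied directly to a pandas Series keeps the date index
log_returns = np.log(prices / prices.shift(1)).dropna()

# scale daily volatility to a yearly figure (~252 trading days)
annualized_vol = log_returns.std() * np.sqrt(252)

print(log_returns.round(4))
print(f"annualized volatility: {annualized_vol:.2%}")
```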

Overall, I researched numerous libraries and technologies for this project. I am happy with my choices and would use the same libraries again, as they are easy to work with and efficient.


Project Blog – 4

In exploring “Clean Code” practices, I’ve discovered that these principles are not a one-time lesson but a continuous journey. Clean code concepts must be revisited regularly by all programmers to maintain and improve the quality of the software they develop. Neglecting these practices can lead to technical debt and reduce productivity over time.

Long Method Smell

One of the critical concepts from Martin Fowler’s Refactoring: Improving the Design of Existing Code is the idea of identifying and addressing the “long method smell.” This refers to methods or functions that grow too lengthy, becoming hard to follow. Long methods often lead to poor readability and increased complexity, making them a prime candidate for refactoring.

I’ve noticed that this can happen to me when I’m developing complex functions that perform a specific task but involve multiple mini-processes or intermediate steps. In these cases, the method might attempt to do everything at once, resulting in a dense block of logic that can be challenging to understand and maintain.

The solution to this is modularization—breaking down the large method into smaller, more focused micro-functions, each responsible for a single task. These smaller functions can then work together to accomplish the larger goal of the original method. This approach not only makes the code more readable and maintainable but also allows for better reuse of individual components, easier debugging, and simpler testing.
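To make that concrete, here is a small, hypothetical Python sketch (not from the project's codebase) of a "long method" decomposed into single-purpose helpers that the top-level function simply composes:

```python
# Before, all of this lived in one dense function; after, each step is a
# named, independently testable piece.

def clean_prices(raw: list) -> list:
    """Drop missing entries so later math never sees None."""
    return [p for p in raw if p is not None]

def daily_changes(prices: list) -> list:
    """Differences between consecutive closing prices."""
    return [b - a for a, b in zip(prices, prices[1:])]

def summarize(changes: list) -> str:
    """One-line report of the average move."""
    avg = sum(changes) / len(changes) if changes else 0.0
    return f"average daily change: {avg:.2f}"

def report_on(raw: list) -> str:
    # The original dense block now reads as a pipeline of named steps,
    # each of which can be reused and debugged on its own.
    return summarize(daily_changes(clean_prices(raw)))

print(report_on([100.0, None, 101.5, 103.0]))  # average daily change: 1.50
```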


Project Blog – 3

As I continue my capstone project, Cloud-Based Algorithmic Trading Strategies for Individual Investors, I’ve made significant progress in areas like data acquisition, data transformation, and team collaboration. Each milestone has taught me valuable lessons about software development, investing, and the importance of effective teamwork.

Data Acquisition

One of the key components of our platform is Data Acquisition, and it has been both challenging and rewarding to integrate multiple sources of stock market data. I decided to use Alpaca's Market Data API for its seamless access to both real-time and historical stock data, and Yahoo Finance as a secondary source to fill in any data gaps and provide additional flexibility for historical analysis. Both of these options are free; in the future, I hope to move to more efficient and reliable options.

In developing the live market data collection, I found that Alpaca's API offers both REST and WebSocket methods for gathering market data. I decided to go with the REST option, as it better fits our algorithm's overall strategy and does not carry the 30-symbol limit the WebSocket method has.
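A minimal sketch of that polling approach, assuming the alpaca-py SDK (class and method names as in its documentation; the keys and interval are placeholders):

```python
import time

from alpaca.data.historical import StockHistoricalDataClient
from alpaca.data.requests import StockLatestBarRequest

client = StockHistoricalDataClient("API_KEY", "SECRET_KEY")
symbols = ["AAPL", "MSFT", "GOOG"]  # no 30-symbol ceiling, unlike the free WebSocket feed

while True:
    bars = client.get_stock_latest_bar(
        StockLatestBarRequest(symbol_or_symbols=symbols)
    )  # returns a dict mapping each symbol to its latest bar
    for symbol, bar in bars.items():
        print(symbol, bar.close, bar.timestamp)
    time.sleep(60)  # day trading doesn't need tick-level frequency
```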

For gathering historical market data, used primarily in backtesting and for displaying stock data in the frontend, I used Alpaca and Yahoo Finance. Each has its strengths and weaknesses. I had to design my services to stay under each API's call limit while still supporting as many stock symbols as needed, and I had to integrate data from both technologies even though they arrive formatted differently. The batching sketch below shows one way to handle the rate limits.
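Here is one simple way to respect a per-minute quota while handling an arbitrary symbol list: process the symbols in fixed-size batches and wait out the window between batches. The 200-calls-per-minute figure is illustrative (the real quota depends on the API plan), and `fetch_one` stands in for whatever single-symbol call the service exposes.

```python
import time
from itertools import islice

CALLS_PER_MINUTE = 200  # illustrative quota; check the provider's actual limit

def batched(symbols, size):
    """Yield successive lists of at most `size` symbols."""
    it = iter(symbols)
    while chunk := list(islice(it, size)):
        yield chunk

def fetch_all(symbols, fetch_one):
    """Apply a single-symbol API call to every symbol without busting the quota."""
    results = {}
    chunks = list(batched(symbols, CALLS_PER_MINUTE))
    for i, chunk in enumerate(chunks):
        for sym in chunk:
            results[sym] = fetch_one(sym)
        if i < len(chunks) - 1:
            time.sleep(60)  # wait out the rate window before the next batch
    return results
```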

The process of working with multiple APIs has taught me the importance of balancing data accuracy, speed, and cost. Each API has its own limitations and quirks, and managing these effectively has been essential to maintain data reliability while staying within API rate limits.

Data Transformation

Once the data is acquired, it flows into the Data Transformation module, where it is processed, cleaned, and prepared for analysis. Here, Python’s pandas library has been my go-to tool for handling large datasets and transforming raw market data into actionable insights.

I’ve gained a deeper understanding of how to:

  • Clean and normalize data to ensure consistent formats across different APIs (see the sketch after this list).
  • Handle missing or incomplete data, an inevitable challenge when dealing with financial markets.
  • Optimize data pipelines to reduce processing time and ensure real-time data is ready for the Trade Signal Generation module.
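For the first two of those steps, a minimal pandas sketch (hypothetical column names and values, not our production code) might look like this:

```python
import pandas as pd

# Hypothetical raw frames: each source names and formats its columns differently
alpaca = pd.DataFrame({"t": ["2024-01-02", "2024-01-03"], "c": [185.6, 184.2]})
yahoo = pd.DataFrame({"Date": ["2024-01-03", "2024-01-04"], "Close": [184.3, 181.9]})

def normalize(df: pd.DataFrame, date_col: str, close_col: str) -> pd.DataFrame:
    """Map a source-specific frame onto one shared schema."""
    out = df.rename(columns={date_col: "date", close_col: "close"})
    out["date"] = pd.to_datetime(out["date"])
    return out.set_index("date")

# Alpaca rows are listed first, so the dedup below prefers them on overlap
combined = pd.concat([normalize(alpaca, "t", "c"), normalize(yahoo, "Date", "Close")])
combined = combined[~combined.index.duplicated(keep="first")].sort_index()

# Forward-fill patches any missing closes left after the merge (a no-op here)
combined["close"] = combined["close"].ffill()
print(combined)
```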

This experience has significantly improved my ability to work with large datasets in Python and has stressed the importance of clean, well-structured data in algorithmic trading.

Team Collaboration

One of the most challenging aspects of working on this project has been team collaboration. I feel this is my team's biggest struggle at the moment, and something I hope we continue to improve as we progress through the project.

Looking Ahead

As we move forward, I’m excited to see how our platform evolves. The next steps will involve refining our algorithms, enhancing the user interface, and preparing for live testing. I’m particularly looking forward to integrating our Data Transformation module with the Trade Signal Generation system, which will mark a significant milestone in the project.


Project Blog – 2

As I progress through my capstone project, "Cloud-Based Algorithmic Trading Strategies for Individual Investors," I have been learning more about both investing strategies and software development. This project aims to bring algorithmic trading to individual investors through a modular cloud-based platform.

Investing Concepts

Algorithmic trading has provided an introduction to investing and trading strategies. Our platform aims to give individual investors tools usually reserved for professionals. For example, understanding concepts like "drawdown" has been crucial. A drawdown is a portfolio's peak-to-trough decline during a specific period and helps measure the risk associated with a particular strategy. The goal of minimizing drawdown has guided a lot of our design, especially in areas like trade signal generation and backtesting.
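As a worked example, the maximum drawdown of an equity curve can be computed in a few lines of pandas (the values here are made up):

```python
import pandas as pd

# Hypothetical portfolio values over time
equity = pd.Series([100.0, 108.0, 103.0, 97.0, 104.0, 112.0])

running_peak = equity.cummax()                     # highest value seen so far
drawdown = (equity - running_peak) / running_peak  # fraction below that peak
max_drawdown = drawdown.min()

print(drawdown.round(3).tolist())
print(f"max drawdown: {max_drawdown:.1%}")         # the 108 -> 97 decline, about -10.2%
```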

In addition to learning about risk, I’ve gained insight into strategies like the barbell method, which divides investments between low-risk and high-risk assets. The barbell strategy influences our platform’s functionality, as it helps investors balance their portfolios. Learning these concepts will help in the future as I progress in developing my own investing strategies.

Building an Algorithmic Trader

Developing the algorithmic trader required integrating various financial APIs, each serving specific data requirements. There are numerous options at a wide range of prices, with some going for thousands of dollars per month. Our design for this project utilizes free options that are reliably accurate for most of the required metrics.

We also had to consider how frequently data would be collected, and whether to use a WebSocket API or a simple REST API, each of which has its advantages. For high-frequency trading, a WebSocket API is required; our platform, however, is focused on day trading with less frequent data requests. We also utilize several APIs to ensure our data is accurate and reliable for its intended use.

This modular design simplifies data management and improves performance, as each component can operate independently. For example, during the backtesting phase, we can simulate trades on historical data to analyze algorithm performance without impacting real-time trading. This setup also enables the optimization of trading strategies based on past data, helping us refine algorithms before deployment. Each API has its limitations and constraints, so managing these efficiently has been crucial for ensuring data accuracy and staying within API rate limits.
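As a toy illustration of that backtesting idea, here is a tiny moving-average simulation over made-up closes. Nothing here is our actual strategy; the project's real signal logic lives in the Trade Signal Generation module.

```python
import pandas as pd

# Hypothetical historical closes
closes = pd.Series([100, 101, 99, 102, 105, 104, 107, 110, 108, 111], dtype=float)

fast = closes.rolling(3).mean()
slow = closes.rolling(5).mean()
# hold (position = 1) when the fast average is above the slow one;
# shift(1) so we act on yesterday's signal and avoid lookahead bias
position = (fast > slow).astype(int).shift(1).fillna(0)

daily_returns = closes.pct_change().fillna(0)
strategy_returns = position * daily_returns
equity = (1 + strategy_returns).cumprod()

print(f"buy-and-hold: {closes.iloc[-1] / closes.iloc[0] - 1:.1%}")
print(f"strategy:     {equity.iloc[-1] - 1:.1%}")
```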

Designing System Architecture

The project has also allowed me to develop my skills in system architecture and requirements gathering. From the beginning, we divided the platform into six core modules: Data Acquisition, Data Transformation, Trade Signal Generation, Order Execution, Post Processing & Reporting, and User Interface & Interaction. Each module is responsible for specific aspects of the trading workflow, from pulling in real-time market data to executing orders and generating performance reports for users.
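One way to picture those boundaries is as a package layout (hypothetical, not our actual repository structure):

```
trading_platform/
    data_acquisition/         # live + historical market data (Alpaca, Yahoo Finance)
    data_transformation/      # cleaning, normalization, feature preparation
    trade_signal_generation/  # strategy logic producing buy/sell signals
    order_execution/          # submitting and tracking orders
    post_processing/          # performance reports and risk metrics
    user_interface/           # user interface & interaction
```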

Creating a design document for the project was a significant step. It serves as a roadmap for development, detailing each module’s purpose, the technologies used, and the data flow between components. This documentation is essential not only for our current team, but also for any future developers working on this project. I’ve learned how valuable it is to clearly define requirements upfront, as this has allowed us to anticipate challenges and design a system that is flexible and scalable.

Leading the Team

Working on this project has given me the opportunity to take on a leadership role, and I've found that delegating tasks effectively is just as important as contributing directly to development. Some of my team members have yet to contribute to the project, while others require guidance to complete their work. As the member responsible for data acquisition and transformation, I've coordinated with others working on areas like trade signal generation, backtesting, and user interface design.

My experience leading this project has improved my communication skills and my ability to make decisions that benefit the project as a whole. I’ve managed to strike a balance between guiding the team and handling core development work myself. This experience has been invaluable, especially for a project of this complexity, where a clear direction and efficient collaboration are crucial for success.

Conclusion

Working on “Cloud-Based Algorithmic Trading Strategies for Individual Investors” has been a transformative experience. I’ve gained hands-on experience with investment strategies, learned the intricacies of designing an algorithmic trading platform, and developed leadership skills that I’ll carry forward into future projects. I am excited to continue working on this project and developing my software development skills further.


Project Blog – 1

This blog will showcase my journey throughout my senior project at OSU. I hope to develop a project that displays all the newfound knowledge I have acquired at OSU.

About Me

I live in Oregon but visit Las Vegas often, since I grew up there. Currently I am a tutor, but I hope to change that soon with an internship, which is why I believe this project is extremely important to me.

Initially, I started college with absolutely no idea what degree I wished to complete. Over time, as I progressed through college, I found myself enjoying computer science and physics courses the most, and I decided to complete my computer science major first. Later on, I hope to earn a bachelor's or master's in physics, as I would enjoy working on quantum computing development.

However, my goals and plans are not set in stone and are constantly changing, so I hope this project and my future experiences will help me decide which career path I wish to follow.

Projects

It was difficult to narrow down the projects I am most interested in, but the ones listed below stood out to me.

Improving Healthcare Cost Transparency Using AI: This project utilizes a technology I wish to learn about, AI. Additionally, it focuses on helping everyone by educating them on health-related information.

Engineering Simulation: I enjoy physics, graphics, data analytics, cloud development, and web development, which are all a part of this project.

Creating Portland’s Open Data Digital Commons: I wish to improve my skills working with ‘big’ data. I believe this project closely aligns with many software engineering internship and job descriptions, so it would be beneficial for my resume.