
Wrapping Up Winter

As I approach the end of my capstone's development process for the Winter term, I've found that I've grown quite a bit in the last three months. There are many obvious areas for individual growth when working on a software project: you can become a real code wizard, learn quite a bit about architecture, or discover the secret to untangling any data model, no matter how convoluted.

In my case, I learned a great deal about operating with a team. There is a certain magic in many individual contributors coming together to accomplish a task larger than any one of them could do on their own in 10 weeks. In our case, the greatest hurdle came when each of us needed to dive into a huge, unknown enterprise codebase.

We quickly realized that the crew would need to split into frontend and backend teams. It was awesome to share a GitHub repository and a Discord channel and to watch the activity of both teams as we clipped along towards our respective goals. On the backend, we played to one another's strengths: I developed an API endpoint, Matt created an Angular service, and Miguel began work on an automated data pipeline.

Work on the Django API and Angular service went quickly and was finished in roughly two or three sprints. The greatest hurdle unexpectedly turned out to be simply downloading an Excel spreadsheet from Dropbox! Luckily, Miguel eventually uncovered a Python library for browser automation that saved the day, and I was able to implement that first crucial step in the process.
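For curious readers, here's a rough sketch of what that download step might look like. The post doesn't name the library, so I'm assuming Selenium here, and the share link and download folder are placeholders rather than the real ones.

```python
# A minimal sketch of automating the spreadsheet download with Selenium.
# The library, share link, and folder are assumptions, not the team's actual setup.
import time
from pathlib import Path

from selenium import webdriver

DOWNLOAD_DIR = Path("data").resolve()  # hypothetical local target folder
DOWNLOAD_DIR.mkdir(parents=True, exist_ok=True)
SHARE_URL = "https://www.dropbox.com/s/EXAMPLE/chemicals.xlsx?dl=1"  # placeholder link

options = webdriver.ChromeOptions()
options.add_experimental_option("prefs", {
    "download.default_directory": str(DOWNLOAD_DIR),  # save files here
    "download.prompt_for_download": False,            # skip the "Save as" dialog
})

driver = webdriver.Chrome(options=options)
try:
    # With ?dl=1, Dropbox serves the file directly instead of its preview page.
    driver.get(SHARE_URL)
    time.sleep(10)  # crude wait for the download to finish
finally:
    driver.quit()
```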

Next term, things are looking very promising for our project! We’ll be happily polishing, documenting, and testing everything we built during the last couple of months.

– Robin


Wrangling Django

The Backend

Wrestling with the source code of AgBiz Logic has been a phenomenal experience in discovering the structure of a big, foreign codebase. The first few spelunking expeditions into the project were akin to wandering around the New York subway system with a blindfold on. I attempted to trace information as it wound its way through the Django backend, across multiple modules, models, serializers, views, and templates.

Slowly, and after quite a few meetings with our project leader, a method to the madness began to take shape. Django lets developers implement quite a robust version of the classic Model-View-Controller (MVC) design pattern. The framework puts its own spin on MVC, though, which took me some time to untangle. Rather than a view being responsible for displaying content, Django uses Templates to serve up the information to be displayed. The ViewSet instead acts as a kind of controller, housing the logic for interacting with the application's database. This relationship is why Django's design paradigm is often labeled Model-View-Template (MVT).
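To make that split concrete, here's a toy example using Django REST Framework (which is where ViewSets come from). All of the model and field names are invented for illustration; they aren't the real AgBiz Logic schema.

```python
# A toy illustration of the Model-View-Template split described above.
# Every name here is hypothetical, not the real AgBiz Logic code.
from django.db import models
from rest_framework import serializers, viewsets


class Chemical(models.Model):
    """Model: defines the data and maps to a database table."""
    name = models.CharField(max_length=200)
    eiq = models.FloatField()


class ChemicalSerializer(serializers.ModelSerializer):
    """Turns model instances into JSON for the AngularJS frontend."""
    class Meta:
        model = Chemical
        fields = ["id", "name", "eiq"]


class ChemicalViewSet(viewsets.ModelViewSet):
    """ViewSet: the 'controller' layer, housing the database logic.

    Display is handled elsewhere -- by Django templates, or in our case
    by the AngularJS frontend consuming this API.
    """
    queryset = Chemical.objects.all()
    serializer_class = ChemicalSerializer
```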

Understanding this structure, and how the project’s AngularJS frontend can make API calls to Django, was key to my recent development progress. Currently, I’m working on a ViewSet function which queries the AgBiz database for agricultural chemicals used in a given farm plan. While it sounds simple enough, the next great hurdle will be to come to grips with the database schema. Then, working with the Models in the project, I will be able to query my way to the desired information.
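For a sense of what that ViewSet function might look like, here's a rough sketch with an invented schema; the real AgBiz models, relations, and field names will certainly differ.

```python
# A rough sketch of the kind of ViewSet function described above.
# The schema here is invented; it is not the actual AgBiz Logic data model.
from django.db import models
from rest_framework import viewsets
from rest_framework.decorators import action
from rest_framework.response import Response


class FarmPlan(models.Model):
    name = models.CharField(max_length=200)


class PlanChemical(models.Model):
    """Hypothetical link between a farm plan and a chemical application."""
    farm_plan = models.ForeignKey(
        FarmPlan, on_delete=models.CASCADE, related_name="applications"
    )
    chemical_name = models.CharField(max_length=200)


class FarmPlanViewSet(viewsets.ViewSet):
    @action(detail=True, methods=["get"])
    def chemicals(self, request, pk=None):
        """Return the chemicals used in farm plan `pk`."""
        rows = PlanChemical.objects.filter(farm_plan__id=pk)
        return Response([{"name": row.chemical_name} for row in rows])
```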

The chance to dive headfirst into a fully functional, industry software project has been wonderful. Though completely overwhelming at times, the experience will help to further hone my burgeoning development skills.


On Clean Code

Writing clean code is important for so many reasons. Developers should adhere to language style guidelines and a "less is more" attitude wherever possible. Clean code improves readability, maintainability, and extensibility.

Below, the reader will find two helpful articles for writing awesome code.

  1. PEP-8: A Style Guide for Python Code

PEP-8 is the official style guide for writing Python code. The standard is maintained by the Python community and updated over time as new conventions emerge. The section "A Foolish Consistency is the Hobgoblin of Little Minds" is especially important for those of us who reside on the more compulsive side of life. These brief guidelines encourage developers to avoid constraining themselves to rigid style rules that don't fit the code's local context.

Importantly, the authors of PEP-8 note: “do not break backwards compatibility just to comply with this PEP”. When I reopen an old file, the first thing I do is start refactoring. I’ll spend an exorbitant amount of time updating all the code in the file to my latest style craze simply because I’ve decided that indents of 4 spaces are better than 2.

For the new year, this is a habit that I’m committed to avoiding. I’ll do my best to follow the style hierarchy laid out in PEP-8.

From most to least important:
1. Consistency within a function
2. Consistency within a module
3. Consistency within a project
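As a contrived illustration of that hierarchy, imagine adding a function to a legacy module that was written with 2-space indents. Consistency within the module wins over my personal preference for 4 spaces; the module and functions below are made up for the example.

```python
# A tiny illustration of "consistency within a module": this imaginary legacy
# module uses 2-space indentation, so a new function added to it should match
# that style rather than triggering a file-wide reindent to 4 spaces.

def legacy_total(prices):
  """Existing function: 2-space indents, left untouched."""
  total = 0
  for price in prices:
    total += price
  return total


def legacy_average(prices):
  """New addition: matches the module's existing 2-space style."""
  if not prices:
    return 0.0
  return legacy_total(prices) / len(prices)
```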

  2. Code Smell by Martin Fowler

This short article opens with a poignant statement from Kent Beck,

“A code smell is a surface indication that usually corresponds to a deeper problem in the system.” 

Fowler uses terribly long functions as an example of a code smell. In conjunction with the quote above, a ridiculously long function is probably a sign that developers aren't keeping their code purpose-built and on task.

It’s easy to get carried away with a function. Our minds can hold a very organic, non-regimented idea of what a program should be able to accomplish. Computers, on the other hand, can be extremely granular! Almost every computational process you can think of benefits from catering to this modular nature of the machine. Rather than letting a function grow into a sprawling mess, spreading like a slime mould, developers should strive for precision when crafting their functions.
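Here's a contrived before-and-after of that idea. The "report" is invented, but it shows a tangled function being split into small, purpose-built pieces.

```python
# Before: one function that parses, sums, and formats all at once.
def build_report(orders):
    total = 0.0
    lines = []
    for order in orders:
        price = order["price"] * order["quantity"]
        total += price
        lines.append(f"{order['name']}: ${price:.2f}")
    lines.append(f"TOTAL: ${total:.2f}")
    return "\n".join(lines)


# After: each function does one job and is easy to test on its own.
def order_total(order):
    return order["price"] * order["quantity"]


def format_line(order):
    return f"{order['name']}: ${order_total(order):.2f}"


def build_report_clean(orders):
    lines = [format_line(o) for o in orders]
    lines.append(f"TOTAL: ${sum(order_total(o) for o in orders):.2f}")
    return "\n".join(lines)
```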


Coastal Code

Some nice swash at Cobblestone Beach, Newport OR.
Photo on film by Kelsea De Filippis
https://www.kelseadphotography.com

The beach is a fascinating place for me. In Oregon, the coastline is composed of long, wide stretches of fine sand bookended by misty, basaltic headlands topped with windswept trees. It’s a wild place where tall cliffs are battered each winter by waves standing taller than 20 feet. In the summer, surfers and beachgoers alike flock to those same headlands for shelter from the constant, driving northerly wind.

When I first started surfing, I was around 12 years old. Dragging my board into that feral ocean felt like stepping into a mosh pit. On the “inside”, the place where broken waves roll towards the beach, objects like small surfers are tossed about with abandon. Strong rip currents tug and push and coerce you in directions that you’re not interested in going.

All in all, a pretty intense spot without an easily discernible pattern. However, as you gain experience in that arena, you begin to notice things. Swell typically moves from deep water to shallow, where eventually friction on the bottom of the wave slows it enough for the wave to “trip” over itself, tumbling in a powerful, foamy white bore towards the beach. That’s generally the first wave a surfer ever catches when they’re learning to stand on a surfboard.

Basic nearshore zones. Article written by Job Dronkers.
https://www.coastalwiki.org/wiki/Nearshore_sandbars

When the turbulent foam ball runs all the way from the ocean up onto the beach and then retreats once more, it’s called swash. This action is a vital mechanism which helps to shape the coast itself. As waves move up and down the beach over and over, sediment is shoved about, destroying and forming new sandbars, eroding cliff sides, or rolling rocks around.

In my case, the swash zone provided an unexpected opportunity to marry the beach and software. Last March I reached out to a coastal engineer in OSU’s College of Earth, Ocean, and Atmospheric Sciences (CEOAS). At the time, it felt unlikely that someone studying the nearshore environment would have any need for a computer science student like me. It turns out they did have a project for me to work on.

A number of years back, a group of researchers developed a program called MatPIV using the MATLAB programming language. The program implements a technique called Particle Image Velocimetry (PIV). The researcher at CEOAS uses this script to analyze footage of swash as it races up the beach and back down again. From the resulting data, they can determine the speed of the wave.

Example results from JuliaPIV (left) and MatPIV (right).

MatPIV is a wonderful piece of software for this particular application, but it’s terribly slow. To process a pair of images measuring 2048 x 3072 pixels, MatPIV runs for 14 to 16 seconds. Fifteen minutes of footage at 40 frames per second yields 36,000 frames, and at roughly 15 seconds per consecutive frame pair, processing them all with MatPIV works out to about 540,000 seconds, a little over 6 days!

As any aspiring developer knows, computational cost is an important part of building robust software. To address this issue, I was assigned the task of porting the program from MATLAB into a different, niftier scientific language called Julia.

Right off the bat, things got tricky. I had never touched either language, so I needed to spend some time familiarizing myself with the ins and outs of each in turn. As I picked apart the syntax, I began to rewrite the 1000+ line MatPIV into what I call (just in my head) JuliaPIV. Sometimes I got lucky: many segments of code had a very similar implementation in Julia. Other times, I had to find new ways of doing the same thing as MatPIV. Some particularly tricky sections come to mind: interpolating data, running Fast Fourier Transforms (FFTs), finding multiple maximum values in a matrix, and determining the median of a set of complex numbers.
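To give a flavor of the core operation hiding behind all of this, here's a stripped-down sketch of the classic PIV step, written in Python/NumPy for readability rather than pulled from MatPIV or JuliaPIV: cross-correlate two interrogation windows with the FFT, then read the displacement of the particle pattern off the correlation peak.

```python
# A stripped-down illustration (NumPy, not the actual MatPIV/JuliaPIV code) of the
# core PIV step: FFT-based cross-correlation of two interrogation windows, then
# locating the peak to estimate how far the pattern moved between frames.
import numpy as np


def window_displacement(win_a, win_b):
    """Estimate the (dy, dx) shift of the pattern in win_b relative to win_a."""
    # Remove each window's mean so the correlation responds to texture, not brightness.
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()

    # Circular cross-correlation via the FFT (fine for small shifts).
    corr = np.real(np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)))
    corr = np.fft.fftshift(corr)

    # The correlation peak, measured from the window center, gives the displacement.
    peak_y, peak_x = np.unravel_index(np.argmax(corr), corr.shape)
    center_y, center_x = corr.shape[0] // 2, corr.shape[1] // 2
    return peak_y - center_y, peak_x - center_x


# Quick sanity check with a synthetic "particle" image shifted by a known amount.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, -2), axis=(0, 1))
print(window_displacement(frame_a, frame_b))  # expect (3, -2)
```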

As I gained knowledge and understanding of Julia and the PIV algorithm, I began to find ways to optimize the original code and cut out less than useful pieces. Although I was often stuck, the pleasure of making breakthroughs, learning, and working on a piece of software meant to be used in my favorite biome of all time made it all quite rewarding.

In the end, I’m pleased to say that JuliaPIV typically benchmarks at 5 to 7 seconds per frame pair, more than twice as fast as the original program.

For next steps, we hope to bring the code over to the GPU side of things. CUDA has introduced a real learning curve, but after my success with the CPU version of JuliaPIV, I feel certain that I can find my way to a solution.

Keep an eye out for my next blog post,

Robin

Robin and Cheryl in their favorite environment on the planet.


The Great Capstone Unveiling

Source: Alabama Department of Agriculture and Industry

A few weeks have passed since my last blog post. In that time, quite a bit has happened on my capstone journey! Most importantly, I had the good fortune of landing a project centered on two things that I hold near and dear to my heart: the environment and food.

For the next 8-9 months, I’ll be working with a local company called AgBiz Logic, which was founded at my university. The company has built, and continues to build, an enterprise-level suite of decision-making tools for agriculturalists. The tools range in function from financial budgeting to environmental accounting. While both tasks can generate headaches in a flash, the latter is infamously gnarly and increasingly important.

As discussed in this New York Times article, true cost accounting is certainly on the radar for well-studied economists. However, while most farmers care deeply about the health of their crops and land, it’s probably a stretch to assume that all farmers have the time and willingness to dive deep into the hidden costs of their industry.

This is exactly the kind of thing that software can help with. Using AgBiz Logic’s platform, farm workers can specify their plan and any alternatives they have access to. Those plans can then be compared across many metrics, such as predicted profit or carbon production. These tools allow farmers to fine-tune both their economic and environmental impacts, making it easier to strike a balance between the two.

At the moment, there is no way for AgBiz Logic to account for the environmental impacts of the chemicals their clients are using. That’s where my team and I come in!

We’ve been tasked with the design and development of a section of the AgBiz Environment module. At a high level, we plan to incorporate a metric developed at Cornell University called the Environmental Impact Quotient (EIQ). This will give users the ability to compare the impact of individual chemicals and of entire agricultural plans in aggregate. The metric takes into account the negative effects of a given chemical on the environment. You might be wondering how scientists can possibly come up with a single number to evaluate that. Well, the formula in all its glory is shown below.

“…where DT = acute dermal toxicity in mammals, C = chronic toxicity in mammals, SY = systemicity, F = fish toxicity, L = leaching potential, R = surface loss potential, D = bird toxicity, S = soil residue half-life, Z = bee toxicity, B = beneficial arthropod toxicity, and P = plant surface residue half-life.”  Source: The EIQ User Guide

Hopefully that cleared things up nicely for you.

If it didn’t, just know that the EIQ is a well-studied metric for evaluating the expected damage a specific chemical can do to the following three parties:

  • You when you eat the food the chemical was used on.
  • Animal bystanders who live on or near the crop and soil.
  • The folks who used the chemical in the first place.
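To make the plan-level comparison a little more concrete, here's a minimal sketch in Python. The chemicals, EIQ values, and rates are all invented, and the aggregation uses the common "Field Use EIQ" convention (EIQ × fraction of active ingredient × application rate) as I understand it; the real AgBiz implementation may well differ.

```python
# A minimal sketch of comparing whole plans by aggregate impact.
# All values below are placeholders, not real pesticide data.
EIQ_TABLE = {"chemical_a": 27.3, "chemical_b": 45.0}  # hypothetical per-chemical EIQs


def field_use_eiq(chemical, active_fraction, rate_per_acre):
    """Field Use EIQ for one application of one chemical."""
    return EIQ_TABLE[chemical] * active_fraction * rate_per_acre


def plan_impact(applications):
    """Sum the Field Use EIQ over every application in a farm plan."""
    return sum(field_use_eiq(c, ai, rate) for c, ai, rate in applications)


# Two hypothetical plans: (chemical, fraction of active ingredient, lbs per acre).
plan_one = [("chemical_a", 0.5, 2.0), ("chemical_b", 0.25, 1.0)]
plan_two = [("chemical_a", 0.5, 1.0)]

print(plan_impact(plan_one), plan_impact(plan_two))  # lower is better
```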

This is a very exciting opportunity for me. From the beginning of my computer science path, I’ve yearned to find ways of bending the power of computing to the benefit of the environment. For a long time, that felt like one of those goals that you come back to once you have some skills or once society changes tack. However, I’ve found out that if you keep your eye on the ball, you often wind up catching it. So here I am, getting to work on a problem that I’m passionate about by using skills that I’m passionate about. Who wouldn’t love that?

Till next time,

Robin

Photo of Robin by Kelsea DeFillipis
https://www.kelseadphotography.com


Introductions

Robin excited at a butterfly exhibit in Costa Rica

Hello World! My name is Robin. If you’re reading this, you’ve found your way to my blog series all about the final stretch of my Computer Science journey at Oregon State University.  Over the next 9 months, I plan to fill this site with insights on my struggles and triumphs as I wrangle the last year of my undergraduate degree. I invite you all to join me on what I truly hope will be an exciting academic finale.

Every learning experience is best served by starting with the foundation of the subject. In this case, I suppose that’s me, so I’d better give you a little background on just who I am.

View of the Yaquina Head Lighthouse

I live in beautiful Newport on the astounding Oregon Coast. This region often transcends gorgeous and moves on to ridiculously magical. Given the good fortune of having grown up in this town, you’d think I’d be used to it by now. Instead, my appreciation of the nature surrounding my home has only deepened. I spend a happy amount of time standing around and gawking at nature.

Even though I’m chronically busy with school, work, or being slack jawed at a good view, I find plenty of time to engage in my favorite personal pursuits.

Robin in the good spot in Mexico

Surfing and traveling are easily my favorite hobbies on the planet. It’s a fantastic coincidence that the two go together like peanut butter and granola. I’ve had the chance to go on a handful of surf trips down south, and each time leaves me with a sense of lasting gratitude for the opportunity.

I’m very passionate about the ocean. However, since I’ll probably never make it as a pro surfer, I have to find other ways to express my interest in the sea professionally. For the last 6 months, I’ve been lucky to work with Greg Wilson and his colleagues in the OSU Coastal Imaging Lab. There, I get to spend time honing my programming skills on an unexpected challenge: Particle Image Velocimetry (PIV).

An example result from JuliaPIV

There will be more to come on this subject in later posts. For now, I’ll just define PIV as a computational process in which the individual velocities of particles suspended in a fluid are tracked and measured. The resulting information can be combined with LiDAR data to programmatically understand the changes in velocity and spatial positioning of an ocean wave in the coastal environment! Greg hopes to use this technique to study wave run-up on beaches, perhaps discovering insights into coastal erosion.

Hopefully this post gave you a brief idea of who I am, and the things that inspire me day-to-day. I’m looking forward to writing more posts as the year goes on.

I’d love to hear from any of my readers, so leave a comment below if you’d like to hear more about a particular topic I’ve covered. Otherwise, catch you all next time!

— Robin