We’re now approaching the end of the quarter. Projects are in full swing and the personal calendar is filling with holiday activities. It’s certainly an exciting time in every academic year. This also marks the end of my time at OSU. Looking back, it has gone by so quickly, and the amount of information I’ve gotten to cover in the last two years is staggering.

The OSU CS program has been highly effective at the goals it sets out to achieve, and I value the opportunity I’ve had to participate in it. This whole journey started with a friend off-handedly mentioning that he was in the program and suggesting that I check it out if I was interested. It is crazy to think that that casual suggestion and a couple of Python tutorials sent me down the path of another degree and a drastic career change. This led to an internship with a large tech company and a subsequent offer to return full time in March of next year.

I’m really looking forward to what is to come. With winter approaching, I’ve got travel plans for the holidays and a completely packed ski season in the US, Canada, France, and Austria, leading all the way into the beginning of my new career as a software engineer. All in all, things are well and I can’t wait to see what else the future will bring.

A Fall Seattle Day

It’s a rainy Thursday morning in Seattle. All last night there was a constant muffled rhythm of rain on the roof of the house. Perfect weather for a hot cup of coffee, turning the furnace up a couple of degrees, and working on some homework before heading in to work this afternoon.

A cheap economic barometer:
Peet’s coffee is usually my go-to for moderately priced, large-batch, buy-it-anywhere coffee. The reliable workhorse. But as of this week, the price has doubled in no more than two years. It used to be that you could buy a twelve-ounce bag of Major Dickason’s for $5.99, occasionally $6.99. First the standard bag size was cut back to 10.5 ounces, and over the last year and a half or so, the price has slowly risen while sales on this particular brand have become less frequent. But this week, I was truly shocked to see the same bag of coffee selling for $11.99 at the grocery store down the street where I’ve done most of my shopping for years. I settled for Seattle’s Best Coffee, which, it is not.

The Coming Week:
This is going to be a busy week for the project. We are at the point of working through the image collection problem. My tasks involve working on the image stitching and layout creation methods. It’ll be a fun challenge working with some tools that I’ve never used before to assemble the photos that we’re taking. To do this I’ll be using OpenCV’s image stitching to merge overlapping photos. This is straightforward enough for our uses but does come with some constraints, such as being sensitive to order and orientation. However, it gets challenging when considering that there may be more than one distinct group of photos taken in a program run. We will need to stitch together each group and then arrange the groups as they are truly laid out, with gaps filled with empty space in the final image. It looks like there are some tools which will help to create the image, but to derive the layout we’ll also need to use the coordinates recorded with the images to assign a reference point to each stitched group and specify the layout. These are just some generalized initial thoughts.
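As a rough sketch of the layout step – assuming each photo group has already been stitched into a single image (OpenCV’s cv2.Stitcher_create is the tool I’m planning to try) and that we have a recorded top-left pixel offset for each group (hypothetical names below) – placing the groups on a shared canvas with the gaps left empty might look like:

```python
import numpy as np

def compose_layout(groups, offsets, fill=0):
    """Place each stitched group onto a blank canvas at its recorded
    (x, y) pixel offset, leaving unoccupied space empty."""
    # The canvas must be large enough to hold every group at its offset.
    width = max(x + g.shape[1] for g, (x, y) in zip(groups, offsets))
    height = max(y + g.shape[0] for g, (x, y) in zip(groups, offsets))
    canvas = np.full((height, width, 3), fill, dtype=np.uint8)
    for g, (x, y) in zip(groups, offsets):
        canvas[y:y + g.shape[0], x:x + g.shape[1]] = g
    return canvas
```

This treats the offsets as already converted to pixels from the recorded coordinates; how that conversion works is one of the open questions for the week.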

Motion and Iteration

It seems like every week’s tasks have (gratefully!) included learning and applying something new in the eGreenhouse project. In the last few days this has been setting up the movement commands for the CNC controller, which moves the camera and sensor module in the x and y directions along the tracks. The structure of this project and the hardware being used have been set up so that we can send movement commands in a pre-defined format. The CNC drivers have already been set up and subscribe to the GCodeFeed topic. The message format for this topic is a single string element. A controller node which accepts the strings and sends them through the specified serial port also exists. An ESP32 running Grbl is connected via USB to the port that we want to send instructions to and expects to receive newline-separated G-code strings.

There are many G-codes available for various processes, including movement direction, path geometry, tooling control, and the list goes on and on. For this project we are mainly interested in linear motion to make the end effector with the camera and sensors travel along a list of destinations to collect data. The two G-codes for linear movement are G00 and G01. G00 is a rapid movement which just tells the controller to move to the destination as fast as possible without performing any work. G01 tells the controller to move to the destination at a specified feed rate and allows for concurrent operations such as milling or extrusion in the case of a CNC mill or 3D printer. For now I’ve been primarily concerned with sending properly formatted codes to a mock node. Since I’m working with a mock, the code at this point is arbitrary and I’ve been using G00, but ultimately this code could change.
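The formatting itself is simple enough to sketch. This is a hypothetical helper, not our actual node code, and the exact word order and precision the real controller wants may need adjusting:

```python
def format_gcode(x, y, code="G00", feed=None):
    """Format a linear-move G-code string for the GCodeFeed topic.
    G00 is a rapid move; G01 additionally takes an F (feed rate) word."""
    cmd = f"{code} X{x:.2f} Y{y:.2f}"
    if code == "G01" and feed is not None:
        cmd += f" F{feed:.0f}"
    return cmd + "\n"  # Grbl expects newline-separated commands
```

So a rapid move to (21, 14) becomes the string "G00 X21.00 Y14.00" followed by a newline, and swapping in G01 with a feed rate is a one-argument change when the time comes.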

The point of today’s post is to talk about the requirements that we have been given for sending instructions to the ESP32 and a couple of iterations I’ve gone through to get to the point I’m at right now. Our goal is for the user to be able to select a series of locations in the interface. The interface should convert those into absolute x-y coordinates measured in millimeters and store them in the database. The ROS program can then read the set of locations (referred to as waypoints), tell the controller to move to each waypoint, and perform the associated action at each. Although we could send the waypoint coordinates to the controller with no intermediate locations, with the end effector arriving at the specified destinations (assuming no errors), we have been tasked with splitting up the distance to provide some more flexibility, such as inserting additional commands into the path or interrupting the movement operation.

The first step that I took was to make the end effector mock “move.” Although it is just a mock, it stores a location and has a means of moving about virtual space. First, to make sure that the mock would substitute for the ESP32, I made it subscribe to the same topic that the ESP32 will use, parse the G-code, and use its controller function to move to the specified location. The location is a simple x-y coordinate, and the mock increments or decrements it until the current location matches the destination. While the mock is “moving” it also publishes its location to the topic that the actual ESP32 will report the real location to. Success! In iteration 1, the communication channels are functioning, the mock is working as needed, and the MotionNode is telling the ProgramNode when it reaches the destination.

Next was to create some intermediate G-codes. In this step I wanted to split the path to the destination into a series of steps, each 5 linear units in length. When sending codes to the actual ESP32, units will be in millimeters, but since this is just a mock, they are arbitrary. In this iteration the virtual end effector moves in the x direction first, then in the y direction. If we wanted to move from (0, 0) to (21, 14), the MotionNode would publish the destinations (5, 0), (10, 0), (15, 0), (20, 0), (21, 0), (21, 5), (21, 10), (21, 14) as formatted G-codes (G00 Xxx Yyy Zzz) to the GCodeFeed. Once again, success! The ESP mock can move in virtual space from its location to the destination, passing through each location published to the GCodeFeed.
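The axis-aligned stepping above can be sketched as follows (a hypothetical helper, not the actual MotionNode code):

```python
def axis_aligned_steps(start, dest, step=5):
    """Break a move into x-axis steps, then y-axis steps, each at most
    `step` units long, ending exactly at the destination."""
    x, y = start
    dx, dy = dest
    points = []
    while x != dx:  # walk the x axis first
        x = min(x + step, dx) if dx > x else max(x - step, dx)
        points.append((x, y))
    while y != dy:  # then walk the y axis
        y = min(y + step, dy) if dy > y else max(y - step, dy)
        points.append((x, y))
    return points

# axis_aligned_steps((0, 0), (21, 14)) gives exactly the list above:
# [(5, 0), (10, 0), (15, 0), (20, 0), (21, 0), (21, 5), (21, 10), (21, 14)]
```

Each point would then be formatted into a G00 string and published to the GCodeFeed.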

Although now we’re “moving” along a segmented path, this still seems somewhat inefficient. Why should we waste the time of traveling to the user’s selected destination in an L-shaped path? Next is to move 5 units at a time directly toward the destination. Or, in other words, time to throw in a little bit of geometry. In this iteration we are going to calculate each intermediate x-y coordinate as a 5 unit increment along the direct path from the current location to the destination. To do this, we’ll just use right triangle properties to calculate the angle between the x-axis and the destination as Θ = tan⁻¹(Δy/Δx) and the length of the direct path (the hypotenuse) as √(Δx² + Δy²). The x component of each step will be the length of the segment (5 units) times cos(Θ). Similarly, the y component will be the length of the segment times sin(Θ). We’ll then take the floor of the length of the direct path divided by the segment length. Each intermediate coordinate is the starting location plus i times the x and y components, for each integer i from 1 to that floor. We’ll then publish each of these destinations as G-code formatted strings to the GCodeFeed to be read by the ESP mock.
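A sketch of that calculation (a hypothetical helper; I’m using math.atan2 rather than a bare tan⁻¹ of the ratio so that all four quadrants are handled):

```python
import math

def direct_steps(start, dest, seg=5.0):
    """Intermediate waypoints every `seg` units along the straight line
    from start to dest, followed by the destination itself."""
    dx = dest[0] - start[0]
    dy = dest[1] - start[1]
    length = math.hypot(dx, dy)   # sqrt(dx**2 + dy**2), the hypotenuse
    theta = math.atan2(dy, dx)    # angle from the x-axis
    n = math.floor(length / seg)  # whole segments that fit on the path
    points = [(start[0] + i * seg * math.cos(theta),
               start[1] + i * seg * math.sin(theta))
              for i in range(1, n + 1)]
    # Finish exactly at the destination unless a segment landed on it.
    if not points or points[-1] != (dest[0], dest[1]):
        points.append((dest[0], dest[1]))
    return points
```

For the (0, 0) to (21, 14) example, the direct path is about 25.2 units long, so this yields five evenly spaced intermediate points plus the destination, each step exactly 5 units from the last.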

So, the question now is… does it work? All I can say right now is we’ll see. The concept should be sound, and as long as I haven’t mixed up any calculations when writing the code, it should work, or at least work within the tolerances of the system. More to follow next week!

Multithreading and ROS

This week I’ve been digging into Robot Operating System (ROS) and how our project will make use of functional components of ROS. The eGreenhouse that we are working on will use ROS to coordinate the end effector movement and data collection, following instructions received from the user interface. The portion that I’ve been working on sends a series of instructions to the Grbl CNC controller. A second node monitors the position of the end effector and triggers data collection actions. Once the destination is reached, an image or a set of environmental sensor readings is collected and the next destination is sent to the CNC controller. This process is relatively straightforward – go to a location, collect some data, go to the next point, and repeat. But it gets interesting in the context of the whole machine and the mechanisms that ROS provides to make it work.

ROS nodes are processes which run independently of one another and communicate through topics and services. As a quick ten-cent description: topics are unidirectional communication channels which nodes may either publish or subscribe to in a one-to-one, one-to-many, or many-to-many relationship. A topic is effectively a queue of messages sent from the publishers to the subscribers, and each node which subscribes to a given topic receives all messages published to that topic. Services are a bidirectional communication channel where individual requests and responses are sent between nodes in a one-to-one relationship. Topics and services provide a convenient means for nodes to execute their respective functions or run a microcontroller asynchronously of one another while still passing data among them when events of interest occur. It has been particularly fun to learn about the parallelism that is created through this organization. When a node receives a request through a service, it spins off a new thread to handle the request through the specified handler function, and the sending node blocks until the response is received. Each topic subscriber runs its own thread: messages received on a topic are processed in the order they arrive, but if a node subscribes to multiple topics, the subscriptions operate concurrently.

As I’ve been working on this project it has been thought-provoking to design the interactions for each node around the implications of the built-in thread structure. I’ve also incorporated some additional multithreading to handle executing each set of waypoints. This week I’ve been working on the node which receives requests from the front end, the PathPlanner node. The requests contain a unique id of the program to run. Our database contains a table of waypoints, each assigned to a given program id. Each waypoint is associated with a location in the study plot and an instruction to collect either an image or a set of sensor readings. When the node is started, it spins off a new thread which watches for program ids in a queue. As program ids are removed from the queue, the watching thread of the PathPlanner node queries the database for the set of waypoints for that program and sends them to the CNC controller, one at a time, so that data can be collected. Meanwhile, the main thread watches the service for additional requests. This way, if the user wants to execute another program before the current one is complete, the id can be added to the queue and executed when the time comes. To make this work, this node manages two concurrently running threads, processes incoming service requests, and publishes instructions to an outbound topic.
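A minimal sketch of that two-thread structure, with a dict standing in for the waypoint table and a list standing in for the GCodeFeed publisher (all names here are hypothetical, not the actual node code):

```python
import queue
import threading

# Hypothetical waypoint table keyed by program id.
WAYPOINTS = {7: [(5, 0), (10, 0), (10, 5)]}

def run_path_planner(program_ids, published, done):
    """Worker loop: pull program ids off the queue, look up the
    program's waypoints, and 'publish' one G-code per waypoint."""
    while True:
        program_id = program_ids.get()
        if program_id is None:  # sentinel: shut the worker down
            done.set()
            return
        for x, y in WAYPOINTS.get(program_id, []):
            published.append(f"G00 X{x} Y{y}")

program_ids = queue.Queue()
published = []
done = threading.Event()
worker = threading.Thread(target=run_path_planner,
                          args=(program_ids, published, done), daemon=True)
worker.start()
# The main thread stays free to accept new requests; queuing another
# program id here would simply run after the current one finishes.
program_ids.put(7)
program_ids.put(None)
done.wait(timeout=5)
```

In the real node, the queue is fed by the service handler thread and the worker waits for the end effector to reach each waypoint before sending the next, but the queue-plus-worker shape is the same.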

A City on the Water

This week I had initially planned to write about some ROS concepts, or maybe express some interest in the new computers Apple announced this week. But today I had to make an early morning trip from Seattle to Whidbey Island on one of the ferries, which I hadn’t taken in quite a few years, and thought it might be nice to reflect on.

Seattle is located between Lake Washington and Puget Sound, which together make a number of natural obstacles that the city has been built around. Puget Sound runs from about 25 miles south of Seattle to about 40 miles north, where it meets the Salish Sea and the Strait of Georgia around the San Juan Islands. The ferry system has become its own sort of tourist attraction. People visiting from out of town often make crossings for the novelty and, similarly, locals often make weekend trips to the peninsulas and islands via ferry.

But the ferries aren’t just a recreational curiosity. Thanks to development throughout the region on the west side of the sound and on the islands, Seattle has relied on the ferries to keep the region connected. A significant number of people in the area rely on the system as part of their daily commute, and contractors frequently find themselves able to provide services throughout the Puget Sound region on a scale that wouldn’t be possible without the ferries. When I was a consultant, I worked on ongoing projects in Port Orchard, Bremerton, and Whidbey Island which were all easily serviceable thanks to the ferries. West across the sound from Seattle is Bainbridge Island, from which many people commute downtown for work; similarly, people commute from Whidbey Island to Mukilteo, often to work north of Seattle at Boeing or its various contractors.

The ferries are an integral part of life around the Sound but, along with many other industries, have been hit with labor shortages. In recent weeks the various crossings have all had to cut service by 30 to 50% each day. Fortunately for my trip today this was only a minor inconvenience, since I was traveling counter to most morning commuters. We all hope that service will return to full capacity before long, but until then, with a little extra planning, crossing the Sound is still one of my favorite ways to catch the sunrise in town.

A New Paradigm

This morning I’m on the run, so I’ve started my day with a hot cup of drip coffee from one of my old favorite coffee shops. Here in Seattle, you’re pretty much always within shouting distance of a coffee shop, and with so many options, deciding where to buy coffee can border on overwhelming. I’ve been a fan of Seven Coffee Roasters for years, since I was living in the area. Now they’re a little farther out of the way for me, but when I’m nearby it’s worth a stop. Today I’ve got a cup of their Honolulu Hawaiian blend. It’s bright, floral, and most important for today – a quick stop along my morning errands.

Over the past week I’ve been getting started with Robot Operating System (ROS) concepts. In general, a ROS program consists of the core package, which tracks nodes, topics, and services and manages the parameter server. The individual executable components of code in a ROS system are known as nodes, and each runs its respective set of methods to operate a component of the system. ROS nodes can communicate with each other through either topics or services. Services are similar to the request/response concept familiar from HTTP, allowing individual nodes to communicate with one another.

Topics make use of the publisher/subscriber model. Rather than the one-to-one request/response model, pub/sub is a many-to-many communication bus: nodes publish data to topics, and all nodes subscribed to a given topic receive that data. Topics are defined by the data type they transfer, and publishers must adhere to the data type for the given topic, which may be primitive types or C-style structs. Through this model, a program can run on a central controlling computer while each node independently operates its respective service, whether that is an internal operation or running an external microcontroller. This allows additional nodes which make use of common data to subscribe to the same topic and receive the same events. By using this paradigm, a system can scale up the number of nodes or repurpose data in an asynchronously running program. I’ve previously been aware of this communication paradigm, but never worked with it before. I’m looking forward to continuing to learn about it given its suitability for robotics and IoT devices.
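ROS itself handles delivery between processes, but the shape of the pub/sub model can be illustrated with a toy in-memory bus. To be clear, this is just the pattern, not the rospy API:

```python
from collections import defaultdict

class Bus:
    """A toy many-to-many topic bus illustrating the pub/sub model."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Every subscriber on the topic receives every message.
        for callback in self.subscribers[topic]:
            callback(message)

bus = Bus()
log_a, log_b = [], []
bus.subscribe("GCodeFeed", log_a.append)  # two independent subscribers
bus.subscribe("GCodeFeed", log_b.append)
bus.publish("GCodeFeed", "G00 X5 Y0")     # both receive the message
```

The publisher never knows who is listening, which is exactly what makes it easy to bolt a new node onto an existing data stream.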

A Familiar Motivation

Ahh, the comforting rumble of the Moka pot starts my morning with a batch of Peet’s house blend. In case you’re unfamiliar with this classic stovetop “espresso” maker, here is an interesting video from Wikipedia showing how it works. Essentially, there is a lower water chamber, a filter funnel, and an upper coffee chamber. As the water is heated, the generated pressure displaces the water and forces it up through the filter funnel. The pressure-brewed coffee is then dispensed into the upper chamber. This brewing method creates coffee which is somewhere between a French press and espresso in body and strength. I use quotes when describing it as stovetop espresso since it is a similar brew, but purists will quickly point out that it uses lower pressure than true espresso for extraction. I’m drawn to this simple device over a true espresso machine not only due to the price and space an espresso machine requires, but also the nuance in using it. The Moka pot has a reputation for producing inconsistent brews which can turn out bitter and over-extracted if proper care isn’t taken. As for me, I enjoy the morning thermodynamics experiment as part of my coffee routine.

This week I thought I’d talk about some of the motivation behind my interest in the robotic greenhouse project. The robotic greenhouse shares several attributes with a project that I’ve thought about on numerous occasions but haven’t been able to set aside enough time to work on. The idea is simple: collect environmental data in an orchidarium and display it through a web interface. A seemingly canonical Arduino project – collect data using inexpensive sensors supplied in almost every starter kit, then manipulate and display that data.

This project could be completed in any number of ways, ranging from simple data collection to creating a fully automated environment with a web interface. My initial ideas, from before starting CS 467, considered the issues of scalability and reliability in a small, home-based system. My use case consists of using wi-fi capable Arduinos to collect temperature, humidity, and light data in an orchidarium. This data would then be sent to a Node.js server, where it is added to a database. This server would also host the front end, which, at a minimum, would show current and historical environmental data in a static web page. The concept is that each Arduino would function as a data module, collecting data from any set of sensors. Multiple Arduinos could then transmit data to the server. This would allow scaling to additional sensors, automating watering and lighting systems, or collecting data from multiple locations within a home, since a user may have several such controlled environments and want to monitor environmental data from each.

To improve the reliability of this system, the sensor modules would need to be able to handle server downtime. The system that I’ve been conceptualizing is based in a single home. As such, outages are most likely to be power outages affecting the server as well as the sensor modules. Should a situation arise where the server goes down while the sensor modules are still functioning, they should be capable of logging data locally until the connection to the server is restored. To solve this problem, each time a data point is collected, a request would be sent to the server in an attempt to transfer the newly collected data. If the server is unavailable, the collected data would be stored in the Arduino’s memory. At each time interval, the Arduino continues to attempt a request to the server. As long as the request fails, the collected data is stored locally. When the connection succeeds, any data stored locally is sent along with the data point collected in the current time interval. Once the stored data is successfully sent to the server, it can be deleted from local memory on the Arduino.
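The store-and-forward logic could be sketched like this. I’m using Python purely for illustration – the real module would be Arduino code – and `send` here is a hypothetical callable that returns True when the server confirms receipt:

```python
class SensorModule:
    """Sketch of store-and-forward: buffer readings locally while the
    server is unreachable, flush the whole backlog once it responds."""
    def __init__(self, send):
        self.send = send
        self.backlog = []

    def report(self, reading):
        self.backlog.append(reading)        # newest reading joins the backlog
        if self.send(list(self.backlog)):   # one attempt per interval
            self.backlog.clear()            # delete only after confirmed delivery

# Demo: the server is down for the first two readings, then comes back.
delivered = []
server_up = False

def send(batch):
    if server_up:
        delivered.extend(batch)
        return True
    return False

module = SensorModule(send)
module.report(21.5)
module.report(21.7)   # both readings are buffered locally
server_up = True
module.report(21.9)   # backlog flushed along with the new reading
```

Clearing the backlog only after a successful send is the key detail: a failed transfer never loses data, at the cost of re-sending the batch.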

Although this system that I’ve been considering is quite simple, I am excited to contribute to a similar project. Naturally, more will follow in the coming weeks.

Casting Off

Hello, I’m Eric Sanchez, and welcome to my blog; I’m glad you’ve found your way here. I’ll be writing weekly updates on my capstone project, past and future career experiences, and how I am staying caffeinated in the process. This week I’m happy to share a little bit about how I’ve arrived where I am now.

Where is that, you may ask? I’m currently located in Seattle, WA, the city I’ve called home since 2006. I originally moved here to attend the University of Washington, where I studied geology. In 2012 I started working in environmental consulting. As a consultant, I traveled throughout Western Washington and worked on petroleum remediation projects along the way. However, I also realized that this was not an industry where I wanted to spend my entire career.

A New Direction

A little over two years ago, a friend mentioned that he was studying computer science in the Oregon State University Online Post-Baccalaureate program and casually suggested that I look into it. At that point I was over a decade removed from the last time I had thought about computer programming. The curriculum and job prospects piqued my interest, but it would also be a leap to make such a drastic career change. I decided to test the waters by first taking a series of Python tutorials, then registering for CS 161 to see if I had enough interest and aptitude for the unfamiliar subject. After finding that I enjoyed learning the material, I decided to continue with the program, initially taking one class per quarter, then two.

In January of 2020, I informed my previous employer that I was working on a new degree with the intent to change careers, and cut my work hours back to part time. When the reality of COVID-19 came to fruition in March 2020, I was able to continue working for a few months, but was ultimately laid off in August of 2020. Since then I have focused full time on school and landed an internship with a large tech company during the summer of 2021. At this point, I have two classes remaining before graduation, and next March I start work as a software development engineer on the team that I interned with last summer.

With all that said, this couldn’t qualify as a Code and Coffee blog post if I didn’t share what I’m currently enjoying. I tend to gravitate towards dark roast coffees in the middle of the quality spectrum. I most frequently brew coffee in my Moka pot, which has been my go-to for nearly 10 years now. This week I’m finishing a bag of Peet’s Major Dickason’s Blend prepared in the trusty Moka pot with a splash of half and half. This is one of my favorites since the flavors stand up well to the Moka pot. Most importantly, it is satisfying when paired with working at the computer.