Integration is relationship management

Handshake
Photo by Charles Deluvio on Unsplash

This week, my project team made significant headway towards integrating our disparate apps and services into the main website. And for these interfaces, where separate pieces meet, we had to suss out a lot of complex logic and nuances. For my part, my two main goals were to provide a smooth user experience and to prevent errors that “broke” the site.

One of the trickier aspects of figuring out the user experience for these integrations was defining the features that we wanted to implement. While we had the initial user flows and mock-ups from the project’s outset, the devil is in the details, and we had to figure out how much we wanted to do vs. how little we could get away with (for a minimum viable product).

There is, apparently, a lot of gray in between the minimum we can do and all the fancy bells and whistles we each could imagine for any given user flow. For example, for buying a plant, we could support a “running cart” with the possibility of choosing any combination of plants from the marketplace, regardless of who the seller is, and regardless of the plant handling: whether the plant is for shipping only, pickup, or both. À la eBay. Or, we could change the structure of the website so that we would be able to filter plants by seller and support a running cart that only allows plants from one seller at a time, like DoorDash (this option was unanimously voted down). Or, we could just have a quick checkout for each plant, but allow the customer to buy up to the maximum quantity of the plant being offered.

These were the types of discussions that came up this week as we examined the pros and cons of each approach, as well as how we each envisioned a specific flow looking. Ultimately, the guiding principle that we applied was to scope according to resource availability: the time we had left to work on these features (a “sprint”), the estimation of how much time it would take (after working a few weeks with Django, we are getting a little better at estimating how long it takes to implement certain functionalities), and the level of effort it would take to implement a feature a certain way (would it be worth it, or would it take a lot of effort that yielded little gain?).

The spoken refrains for this past week were “if there were more time”, “in the real world”, and “for this project”. I think these are all important aspects to think about: the available resources, the intended usage, and the context of the finished product. These are some of the most rewarding conversations I’ve had working on a technical group project. I appreciate my group members’ thoughtfulness, their strategic mindset, their willingness to help each other, and their bias towards action rather than prolonged rumination.

Integrations, especially, hide a lot of time sinks around every corner, and the time we need to devote to testing, code review, and debugging grows with each new pull request. Recognizing this, as a team, we are doing well with paring down our list of features, adding those we can’t do to our nice-to-haves/stretch goals/backlog, and scoping just enough to get us to the next state of “completeness”.

We also make it a point to not skimp on the UX. Just because a feature is not as “rich” as it could be, without the bells and whistles, doesn’t mean that the user experience needs to suffer. We strive to tackle any “clunkiness” (this word came up a lot this week) so that the UX is smooth.

Whether it is the user flows for these feature integrations, or the conversations behind the design decisions, I think that integrating standalone services is an exercise in relationship management. It is rewarding and challenging to consider what each team member wants and how much we can each give towards the end goals. Integration is also about the relationship the end user will have with the user interface: how they will navigate through the site and what they might expect, whether we have met those expectations, or if we can help guide those expectations through design.

My final thought might be a stretch for some, but I’ve nevertheless waxed philosophical on integrations: Once we are not alone, there is a relationship; and with that relationship, there is dialogue as well as give and take. It’s been really eye-opening how this can be true on so many levels, whether it is for integrating software, or for working together in a group project.

Merge Conflicts Are My New Mondays

Mondays – Who likes them?
Photo by Annie Spratt on Unsplash

We’ve come to the midway point in our capstone class and, while it feels like we’ve accomplished so much, there is still so much to get done! And the newest challenge, as we start integrating small bits of code with each other, is the dreaded merge conflict.

I’ve heard tell of this beast or pest (take your pick) but, because most of my work so far has been as a team of one, I had never come face-to-face with it. Until now.

The beauty of our buy, sell, trade website is the buying and selling and trading. And for these flows to work, entities and different services need to integrate with each other. Resources are shared, and there are multiple touch points in the UI for many entities. There are clicks and loads and writes and (sometimes) errors. A merge conflict appears when two branches change the same lines of the same file, and the version control system cannot decide which version to keep. The first time I ran into a merge conflict, it was novel. The second time, and every time thereafter, it was undesirable.

So I talked to some folks, at home, at work, about my newbie merge conflict experiences. Mostly, I questioned them about how teams working on more complex projects deal with merge conflicts. The answers all aligned with a common theme: unavoidable, but do your best to avoid them. I… was a bit confused.

So when I asked for elaboration, these seasoned devs helped me re-orient my thinking around best practices that lessen the chance of merge conflicts:

  • Pull down changes often. The general recommendation is at least once a day. This is new for me, as I’m not used to thinking about the work others are doing. Out of sight, out of mind, is the opposite of what I should be thinking. After fetching, merge or rebase to bring those changes into your branch (each approach has its trade-offs).
  • Commit often, push up changes often, and make pull requests as often as possible. Incremental changes seem to be key. And this makes a lot of sense, because the quicker small changes are added in, the less likely a merge conflict becomes. Of course, the previous rule still applies: the incremental changes are also being pulled down often.
  • Review and approve PRs often. This step is important to allow the previous bullet to succeed. We facilitate each other’s merges to keep things moving. This really is the opposite of out of sight, out of mind. Instead, this methodology keeps everything in sight and in mind.
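
These habits are easier to internalize after watching a conflict happen in a sandbox. Below is a throwaway sketch (the branch and file names are made up) that uses Python’s subprocess module to manufacture a conflict in a temporary repo and resolve it:

```python
import os
import subprocess
import tempfile

def run(*args, check=True):
    """Run a command and capture its output."""
    return subprocess.run(args, check=check, capture_output=True, text=True)

# Set up a throwaway repo with one committed file.
os.chdir(tempfile.mkdtemp())
run("git", "init", "-q", ".")
run("git", "config", "user.email", "demo@example.com")
run("git", "config", "user.name", "Demo")
base = run("git", "symbolic-ref", "--short", "HEAD").stdout.strip()

def write_and_commit(text, message):
    with open("notes.txt", "w") as f:
        f.write(text + "\n")
    run("git", "add", "notes.txt")
    run("git", "commit", "-qm", message)

write_and_commit("original line", "initial commit")

# Two branches edit the same line...
run("git", "checkout", "-qb", "feature")
write_and_commit("feature edit", "edit on feature")
run("git", "checkout", "-q", base)
write_and_commit("base edit", "edit on " + base)

# ...so the merge conflicts (nonzero exit status).
merge = run("git", "merge", "feature", check=False)
assert merge.returncode != 0

# Resolve by hand: pick the final contents, stage, and commit the merge.
write_and_commit("base edit\nfeature edit", "merge feature (conflict resolved)")
```

Pulling often doesn’t eliminate this moment; it just makes sure the divergence is one small edit instead of a week of work.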

I really hope these tips will help lessen the number of merge conflicts during the latter half of this course, as we really get into the work of integrating disparate pieces. Having just started with integrations, it seems like the logic is much more complicated, and it is challenging to get the flows working as they should. It’s my goal to minimize the chance for merge conflicts so that I can devote as much of my focus as possible to these complicated flows!

Writing for the Reader: an API Gone Sideways

Where to look?
Photo by Julia Joppien on Unsplash

This week, my project team and I are working hard on getting the first few pieces of our project up and functional. Even though we are still a few weeks away from a working Minimum Viable Product, we are making good progress, chugging along with our pull requests and merges. It’s a really great feeling, seeing things come to life.

One of the most satisfying aspects of working together is that our code reviews are going pretty smoothly. My teammates’ code is very easy to read through and digest. We are sticking to our style guidelines and, by doing so, even though a teammate’s code can be quite different in approach from mine, it is easily consumable and sensible. This, in turn, makes for efficient and productive reviews.

In contrast, I’m facing a different sort of code quality scenario at work, while trying to read through some very open-ended API documentation. Unlike the efficient and productive teamwork that I am experiencing with my school project, at my current work project, I am needlessly spending time trying to make sense of what the writers of the documentation are trying to say and how to set my expectations with these vague specifications.

In all fairness, though I am not at all an expert on API best practices, I can see that the team behind this documentation tried to follow good API writing standards, such as using self-describing names in their JSON schemas, having their data model mirror the business or domain logic, and not mixing data types for any given request or response property.

I can see that it all started out with sound design. But, somewhere along the way, things started getting hairy. As a consumer of this API I’ve identified some key (anti-)patterns that make it difficult for me to use the product:

  • Optional properties or, as I call them, GHOST fields: There are a LOT of optional properties in the response JSON that may or may not exist in any HTTP response. The documentation does not specify when I am to expect a certain property, so I have no context to help me anticipate the receipt or omission of some response JSON properties. I am told that these properties were intentionally kept optional to make it more flexible for the app developers to address different, or changing, scenarios. The goal of the optional fields is also to make the codebase extensible. Well, there is a fine line between flexibility and vagueness that, once crossed, leads to confused developer end users who can’t depend on the API. My recommendation: for people who use the API, even with the added bulk of more information sent across the wire, having required properties with null objects or empty strings as values is much preferable to vanishing or sometimes-present properties.
  • Repeated data in the deep, dark nested parts of the JSON: Some of the same properties and values that are available in the outer, more accessible layers of the JSON sometimes reappear 10 levels deep, somewhere. I can only guess that this repeated data is useful in its immediate context, 10 levels deep, so it was copied there. But this really smells off to me. It makes me think that the JSON schema is incorrect. My recommendation: If the two separate areas in the schema share a common set of properties, then maybe the JSON isn’t capturing that relationship correctly.
  • Non-unique property names, that are eventually changed: So this one is related to the previous bullet. There are repeated property names in different scopes because there was initially a “need” for that setup. But now the API devs have decided that they want to make the names unique because consumers of the API are “grepping” the wrong property. So they send out a notification alerting people of the name changes. The folks using the API in production are hopping mad (or, at least, put out). My recommendation: while having unique names seems like the right answer, the real problem is the one mentioned in the previous bullet: the relationship is not correctly constructed. Maybe there is a better way to structure the schema so that there are no repeating properties. Maybe the solution lies in representing only the differences between those two similar sections of the JSON (literally, the diffs).
  • The documentation doesn’t note expected values when it should: For many properties, there is a finite, known list of values that can be expected. This API documentation is missing many of these expected values. If I see a new value being returned, I immediately question it: what does it mean? is it really new? is this value returned in error? My recommendation: be explicit in the documentation about what values people can expect for a given property. Being explicit always trumps inference, at least in technical documentation.
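
To illustrate the first bullet: with a required-but-nullable contract, a consumer can treat a missing key as a bug instead of guessing. A minimal sketch (the payload shapes and field names here are hypothetical, not from the actual API):

```python
# Two hypothetical response shapes for the same resource.
ghost_response = {"id": 1, "name": "Monstera"}                     # "seller" vanished
explicit_response = {"id": 1, "name": "Monstera", "seller": None}  # required, nullable

def get_seller(payload):
    """With a required-but-nullable contract, absence is a malformed response."""
    if "seller" not in payload:
        raise KeyError("malformed response: 'seller' is required")
    return payload["seller"]  # may legitimately be None
```

With ghost fields, every consumer instead writes `payload.get("seller")` and can never tell “this listing has no seller” apart from “the server omitted the field this time”.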

So, in writing these thoughts out, I have to curb my censure and say that I believe that this particular team behind the API is really trying their best. And, like I said, there is evidence of solid methodologies that speak of good beginnings. But, as it often goes with scaling out a product, with complexity comes the opportunity for bad design choices to seep in. The motivations were good: flexibility, extensibility, accounting for detailed real-life data models… but the danger lies in losing too much structure, leading to end-user confusion and discontent.

My final thought is self-reflective: whether at work or with my current school project, it is easy to write code that I, as the author, can understand, but it is much more difficult to keep the reader of my code in mind (or the consumer of my endpoint, as the case may be). Putting myself in the recipient’s shoes makes it easier to see how my design decisions or coding styles might affect them, a practice that I should do more often.

Finding Small Joys In My Work

Erigeron – Fleabane growing along my driveway

Sometimes, little things make all the difference. And, in the busy-ness of this past week, the small joys kept me trudging along.

Looking back, I can see why, at one point during the weekend, I fell really behind on all my tasks. Work at my job was piling up (scope creep!), as well as schoolwork and family matters. All normal circumstances, really. But then, in the midst of all these tasks, I made the bad decision to focus on a task that, in the big picture, contributes little to my overall progress. It was a decision made under duress and one where I didn’t take that very important moment to stop and think, and prioritize accordingly.

The rest of the weekend and beginning of the week were spent hustling my way through my work, while trying to maintain a moderately-high level of quality. I made good headway, but felt pretty burnt out through it all.

Thankfully, there were moments of little joys that, spread out through the haze of constant work, kept me from completely face-planting into my soup during dinner. These moments energized me and kept me going, like little endorphin kicks. And I think these same types of experiences will help me out in the future:

  • The Little Win: These are moments that were little achievements or milestones for me, such as finishing a homework assignment and having the automatic grader pass all my submissions. At work, one example is when I present what I’ve been working on to stakeholders and receive positive responses. At home, little wins are when my kids and I can have some time every day to connect, snuggle, and (some days) eat a home-cooked meal.
  • The I’m-part-of-the-club feeling: As a computer science student working on different projects across different areas, I very often have a bad case of imposter syndrome. This is especially true when I haven’t done the same run-of-the-mill tasks that more experienced software engineers regularly do, or am unfamiliar with a tool, or don’t know about a design pattern… When I finally do a programmer-y-type thing for the first time, it makes me feel like I’m part of the (software developer?) club. This week, this little joy came when I finally made my first pull request as a part of a dev team. It felt great!
  • Connecting with the why of my work: So often, I get lost in the actual tasks themselves that I forget why I was drawn to a project in the first place. Why did I want to do this particular senior capstone project? I love houseplants! This past week, in the midst of some backend work, I started thinking about creating mock user accounts with plants for sale. I thought it’d be nice to use pictures of all the indoor and outdoor houseplants that I could find, either in my home or in my neighborhood. While I only have a few pictures so far, just taking the time to appreciate different plants, and pause and have fun taking pictures, reconnected me back to the why of this project instead of the what. My neighbors have some pretty nice outdoor succulents and cacti planted. I’m going to take some surreptitious pictures! Here’re some of my houseplants:
Here’s my ZZ plant. Apparently, it’s super poisonous.
Here’s my Aloe Vera. It recently got re-potted into nice new digs! In the background is a Venus Fly Trap.
I don’t know what kind of plant this one is. It’s new from Trader Joe’s. I think I’ll be getting some more from TJ’s!

While I am lucky enough to be able to choose projects that are interesting to me, it’s so easy to get lost in the weeds of the work. Taking time to appreciate all the small wins, new experiences, and small aspects of the work that I enjoy, makes the work worthwhile and meaningful!

Combating Context Switching Drain with Repetitive Tasks + Focus Time

Hand holding a miniature old-fashioned alarm clock
Photo by Towfiqu barbhuiya on Unsplash

This week we start development! As the first step, my teammates and I are setting up our dev environments. Then, we commence digging into our tasks for the week. With the project plan as our guide, our team is focusing on the following tasks: one teammate is setting up the Git repo with instructions on how to set up the local Django instances (so that we are all on the same page, which is so very important), then starting work on writing some model classes and routes; another teammate is working on front-end tasks, namely building the Home page and the About page with some Bootstrap and CSS customizations; and I will be focusing on getting data to seed the database, as well as working on some model classes and routes.

The slight difference that sets this project apart from other CRUD websites that I’ve built, or helped build, in the past is that we could not find a good API to use for the website’s data (our website is a Buy, Sell, Trade Hub for house plants, so we were looking specifically for a house plant information API). And so, going the scrappy route, I am scraping the information from a couple of websites that we had identified as good house plant references. First, I am compiling various house plant information into a spreadsheet, which will then be reviewed, verified (should there be any questionable information), and supplemented with any missing data via more research. Then, from this house plant mini-compendium, I will create a short description for each plant as well as care instructions, citing sources as appropriate. The plant data will be popped into a data-seed CSV file. I’m not writing any scripts to scrape the websites, just plain old copy and paste. It all feels so very early 2000s, when I was working as a temp here and there while in college (the first time around).
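
The seed file itself is just rows of plant data, so loading it is simple. Here is a minimal sketch using Python’s csv module (the column names and sample rows are hypothetical; the real spreadsheet may differ):

```python
import csv
import io

# Hypothetical seed data; the real CSV is compiled by hand from plant references.
SEED_CSV = """name,description,care_instructions
ZZ Plant,Hardy low-light plant,Water sparingly; tolerates neglect
Aloe Vera,Succulent with soothing gel,Bright light; let soil dry between waterings
"""

def load_seed_rows(csv_text):
    """Parse the seed CSV into a list of dicts, one per plant."""
    return list(csv.DictReader(io.StringIO(csv_text)))

rows = load_seed_rows(SEED_CSV)
```

From there, each dict could feed a Django model constructor or a fixtures file, though we haven’t settled on the exact loading mechanism yet.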

Having done these manual data-grabs a few times in the past, I figured that, with some experience in marketing and content editing, this task is old hat. Grabbing some information, consuming it, and distilling it. It shouldn’t take more than a couple hours…

And here I sit, only halfway through my scraping, yet already past the two-hour mark and quickly (or is it slowly?) approaching three. I can see why this is a task that is so often automated. And even though this repetitive, almost mindless, task can be thought of as boring, I find myself oddly lulled into a relaxing mind space, almost like when I’m in the “zone”, as I sometimes feel while playing a musical instrument, or when I’m at the height of productivity at work.

Perhaps I feel this sense of being in the zone all the more acutely because most of my days are spent context switching from one task to another. When I am context switching, as expected, my productivity goes down. Down, down, down, because I am only ever able to make shallow inroads of progress before dropping the current task and shifting my attention to another, most likely unrelated, task. When I revisit a task that has been interrupted, it takes me a while to get back to the point at which I had left off. “Now, where was I?” is apt.

Funnily enough, at least to me, a Computer Science major, the phrase “context switching” has its roots in the CPU context switch, which is when the CPU switches from one process to another. And since context switching is computationally expensive, there is a concept of the cost associated with the operation, known as the context switch cost. This cost also exists when people switch tasks! And it can lead not only to decreased productivity but also, I find, to an increasingly fragmented mind and loss of focus. I’d say that on days where I have to context switch many times, or successively in a short time frame, I find my mental state to be frazzled, grumpy, and disoriented.

Unfortunately, I’ve, of late, been jumping from task to task at work as well as at school and at home with the kids (lots going on, suddenly). And, sometimes, with the Yahtzee-scramble of all three life domains superimposed on one another, I need a time-out moment for myself.

This data scraping might be one cure to all this context switching madness.

I think that the web scraping task not only lets me be in a setting of controlled productivity, but it also lets me focus on a simple task for a long-ish duration of time. I’ve been in this place before, this “focus time”, where I don’t allow for disturbances and make significant headway into my work. This combination of simple, repetitive task + focus time has manifested in my life as: any housework that involves scrubbing, most sorts of crafts, and other seemingly trivial tasks. Most importantly, these activities help to “reset” my mental state.

Yesterday, on my company’s Slack, I noticed that a coworker whom I sometimes message (or “slack”, used as a verb) had some little sleepy Z’s in her status circle. Since I am not that familiar with using Slack, I didn’t know what those Z’s meant, until I discovered that they indicate the user has turned on “Do not disturb” mode and that her notifications will be paused. While most other folks I interact with on Slack do not use this feature, I thought it was very impressive that my colleague set aside a real chunk of her schedule to give her complete attention to a task. It was also impressive that she chose to share that status with other folks, letting them know she won’t look at their slacks until she is ready to.

These little Z’s reminded me of a practice at my previous job, where it was company policy that each team block some focus time on their shared schedule so that they can, individually, or as a group, have periods of increased productivity (with the hope of better quality and quantity of work products). It was a very effective strategy and one that I had forgotten about, in this current uber-connected work culture.

I feel that there might be a good recipe here to combat the mental drain caused by context switching: combine the calming effect of working on repetitive tasks and the importance of using focus time to deliver quality work. Perhaps scheduling short focus times throughout the day can help increase my productivity. Maybe block off an hour or so every day on my calendar and use the “Do Not Disturb” mode on Slack? Similarly, if part of that focus time was spent on repetitive tasks (say creating tests, or documentation), I might be able to reset my mental state, or at least decrease any agitation. It’s worth trying out!

For the rest of the week, my goal is to apply both techniques, repetitive tasks and scheduled focus time, together when possible, to see if I can get more done in the same amount of time. (In any case, I still have to get the data scraped, so I guess I will have ample time to test this method!)

Django’s ORM: A Shiny, New-to-me Tech That I Want To Try

so shiny and attractive (Photo by Joshua Sortino)

This week’s task is to work on a project plan document, which is a very thorough breakdown of the project from all angles (architecture, tools, UI/UX, timeline, task assignments). There’s a lot to consider when putting these commitments down on paper, and one of the major considerations for our website is the tech stack that we will be using. Python is a common-ground language across the team, so we decided to build the site using Django, a popular Python web framework that offers a customizable full-stack solution. Django is new to all of us, and it will be a nice change of pace to learn something together, compared to the many times I’ve learned alone. Of the many appealing aspects of Django, one of the features that made it a top web framework contender is its default object-relational mapping layer, or ORM for short.

My newbie understanding of an ORM is that it is a programming technique for converting data between two representations: data stored in a database’s tables and data represented as objects in the application code. Because the ORM is a layer of abstraction on top of the database calls, the application developer can interact with the database in the application’s programming language (i.e., Python). If I want to change the name of a person, there is no need to write SQL and pass it through to the database. I can just do that in the application code. For example, if I have a person, p:

p = Person(first_name='Jane', last_name='Smith')
p.save()

then I can change the last name and save the change like so:

p.last_name = 'Steele'
p.save()

“Neat!” “And confusing.” “Is there even a database?” were some of the reactions during our team discussion. I take the fact that the last question even surfaced in our conversation as an indication of how much an ORM can really abstract away the concept of a database. And yes, there is most definitely a database. It is just that we can (hopefully) do less in the database and do more in the application code.
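
To demystify what save() might be doing behind the scenes, here is a toy sketch (emphatically NOT Django’s actual implementation) of how an attribute change could translate into a parameterized SQL statement:

```python
# A toy illustration (not Django's real code) of an object change becoming SQL.
class Person:
    table = "person"

    def __init__(self, pk, first_name, last_name):
        self.pk = pk
        self.first_name = first_name
        self.last_name = last_name

    def save_sql(self):
        """Return the parameterized UPDATE an ORM might emit on save()."""
        sql = f"UPDATE {self.table} SET first_name = %s, last_name = %s WHERE id = %s"
        return sql, (self.first_name, self.last_name, self.pk)

p = Person(pk=1, first_name="Jane", last_name="Smith")
p.last_name = "Steele"
sql, params = p.save_sql()
```

In real Django, you can peek at the SQL a queryset will run via str(queryset.query), which seems like a handy sanity check when you’re wary about what’s happening “back there” in the DB.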

But, is this really a good thing? Good meaning seamless? And bug-less?

Being a cautious person with a questioning mind, I’m both excited to use Python to interact with the database and wary about whether everything will be “all right back there”, in the DB. How can we be sure that the objects are truly converted properly to rows in the database? Will the related tables update automatically if a change were made to an object? How can we check that all the constraints we want are in place? How does the ORM account for any subtle differences between the object-oriented programming paradigm and the relational model (assuming that a relational database is being used)? These are the initial questions on my mind.

Another concern is with moving what seems to be logic that should live in the database into the application code. How will that affect performance (initially and when we have ten million users after growing 6x)? Or is that the least of my concerns? I have a lot of concerns.

Still, the prospect of writing zero (or at least less) SQL and writing Python instead is so novel, and so attractive, that I feel it needs exploring. I want to see what the hiccups are, if any, once we get rolling, and how much of a time saver (or maybe a time sink) using Django’s ORM is. I want to see how closely the tables that Django creates match the ERD that will be submitted in the project plan. I want to try it out and see it in action. I have a lot of wants.

So, with many concerns and many wants, I am excited to start building out our site with Django’s ORM, the current shiny new tech on my radar!

The Napkin Spec: A Big Idea in a Bite-Sized Communication

Often, at the beginning of a new project, there are big decisions to be made in a short amount of time. Navigating through these time-strapped, decision-making processes can be stressful and, I have found, it can be difficult to stay focused under pressure. In this post, I will share how focusing on creating a “napkin spec” helped me stay on track as my team and I decided on our Online Capstone project idea.

To give some context, this week, at the start of the Online Capstone class, our team had only a couple days to choose the project idea(s) that we would work on for the entire term. With an attitude of “No, stay focused this time!”, I decided to adopt a mental model to help me stay focused and productive while we discussed our options. My personal solution was to frame my choices around creating a deliverable for our project idea, namely, a “napkin spec”.

I recently saw the term napkin spec while I was browsing through the interwebs. And while I don’t recall where I saw the term, I do remember the immediate appeal of an early-project specifications document… written on a napkin! The napkin spec is a blend of a product requirements document and a napkin pitch. There is just enough room to fit information about the core features of the product. There is also just enough room at the napkin’s edge (that ridged, bumpy part, if we’re talking cocktail napkins) for the product pitch. There is no room for hesitations, and what-ifs, and any other sort of padding that increases the scope of the project beyond its nascent stage. The specific format I was going for was a concise bullet-point list of features for our project’s product.

The napkin spec was the perfect litmus test for all my choices: a great way to narrow down all the different big ideas into a short list of ones that fit my and my teammates’ interests and constraints. If one of my suggestions didn’t bring us closer to creating a napkin spec, then I threw that idea out. This worked well and eased the mental load of sifting through the numerous projects we were all interested in, down to just a couple that fit the bill for all of us.

The beauty of the napkin spec is that once we narrowed down the list, and decided that a new project proposal was our top choice, we were able to create the napkin spec to send over to our professor for review and approval. So, as an aside, we did add a couple stretch-goal features and included a motivations section, but I think it was OK for this context.

By focusing on the deliverable, specifically a pared-down, concise summary of a big idea, my team and I were able to go from a nebulous, multi-directional discussion to communicating in a focused and compact format. Win!

I look forward to using the napkin spec strategy for new projects in the future!