
Denouement

  November 19th, 2021

Well, that was anti-climactic. The project is by no means done, but what I thought would be the hardest part and had budgeted a whole week for turned out to take half a day for the actual logic and another to write effective tests. The store comparison function is running smoothly and came out to an easy-to-follow 167 lines of code; tests…

[Image: Larry David / Curb Your Enthusiasm meme]

The upside is that the fussy bits for the comparison tests are being done as an update to the core NPM package, which will let me sweep through all the other services and refactor to make sure everything is actually using the same standards. As it sits prior to that refactor, each service has its own individually hard-coded mock responses for the other services, and typos or API drift might be hiding in them.
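To give a flavor of what that consolidation could look like – and this is a hedged sketch, with the fixture names and data shapes made up rather than pulled from the real CSSA-Core – the idea is that the core package exports one set of canned responses and every service’s tests require the same copies:

// Hypothetical fixtures file living inside the core package, e.g. cssa-core/mocks.js
const storeMock = {
    id: 'store-123',
    name: 'Example Grocer',
    items: [{ itemId: 'item-1', price: 2.99 }]
};

const userMock = {
    id: 'user-456',
    name: 'Test Shopper',
    reputation: 10
};

module.exports = { storeMock, userMock };

// A service test then pulls the shared fixture instead of hard-coding its own copy:
// const { storeMock } = require('cssa-core');
// nock(storeServiceUrl).get('/stores/store-123').reply(200, storeMock);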

[Image: camouflaged insects hidden among leaves, via the Daily Mail]
There are at least two bugs in this picture

While I’ve been wrapping up the individual services, the database and its service have become available for integration. Remember those sweeping changes to standardize my tests? That’s also the time to refactor out all the dummy logic and actually use the real database. I’m sure that will be a smooth process with no room for error or wasted time on stupid little mistakes.

[Image: “The Science of Sarcasm? Yeah, Right” headline from Smithsonian Magazine]

Even though the project isn’t quite done, this is the final blog post. As such, now is my chance to look back and provide a retrospective. The biggest takeaway: I’m glad from a learning perspective that I chose to do a microservice architecture, but boy howdy did I pay for it every step of the way. Just the redundant busywork of setting up each service and making sure its configuration was correct added up; eventually I learned the signs of each possible mistake, but only after puzzling over them for hours.

[Image: “Gentlemen, it has been my pleasure” Titanic violinist meme]

Then there are the issues with GitHub. ‘Helpful’ articles laying out exactly how to configure a project to use private repositories did nothing, even when followed to the letter. Even with a public repository, I constantly needed to replace auto-generated package information so that the package wouldn’t be accessed as if it were private – which, again, refused to work in any situation. The upside to small, fractionated code is that I used only a percentage of my free system time; my front-end teammate used all of theirs up with a couple PR stages left to go.

The interwoven nature of microservices made testing a much larger portion of the work than the code logic itself; every test case needed one or more mocked responses standing in for what would have come from a call to another service. Fortunately this was an effective arm-twist to keep an up-to-date API document to refer back to when writing both routes and the mocks for tests. Unfortunately, as small changes piled up while needs were realized, the need for the sweep through everything I mentioned earlier became clear.
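For anyone who hasn’t lived it, the mock-everything pattern looks roughly like this – a sketch with made-up URLs and payloads rather than our actual routes, using Nock to intercept the outgoing call so the service under test never reaches its neighbor:

const nock = require('nock');
const axios = require('axios');

describe('Review service in isolation', function() {
    it('gets a canned Store response instead of making a real network call', async function() {
        // Intercept what would otherwise be a call to the Store service.
        nock('http://localhost:3002')
            .get('/stores/store-123')
            .reply(200, { id: 'store-123', name: 'Example Grocer' });

        // Anything in the test that hits that URL now receives the mock.
        const res = await axios.get('http://localhost:3002/stores/store-123');
        if (res.data.name !== 'Example Grocer') {
            throw new Error('mock was not used');
        }
    });
});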

Had I done a monolith, only the database would have needed to be mocked, as all logic would have been internal to the program. I never would have needed to manage external dependencies, since a single repository would have consumed any code created. Only the routes needed by the Gateway and Database would have existed, saving the cross-referencing between disconnected pieces of code.

That said…

[Image: “No Ragrets” tattoo meme]


It’s All Downhill From Here

  November 5th, 2021

I actually mean that in a good way. Getting to this point has been about assembling a foundation of solid code to save the hassle of fixing literally a dozen places any time I need to change a single line of code. Now I’m finally there.

Early on I had made a custom NPM module and called it ‘CSSA-URL-Completer’. No points for guessing what it does. Now it’s grown and changed into ‘CSSA-Core’. Pretty obviously, it has the core code I expect to reuse time and again. URL-Completer got knocked over into being an exported class within core, and anything that ends up being in more than one repository eventually ends up here instead. I still haven’t gotten a Personal Access Token to work correctly for letting GitHub Actions access it for automated testing (or even tried with Heroku), but time’s arrow marches ever onward.

[Image: “You Are Here” map marker]

What feels like twenty major refactors later, the Gateway and User services are something resembling complete and stable. With those set, I was able to move on and slam out routes and logic for two more services: Store and Review. That really only leaves the Shopping List and Item services among the trivial ones that just need to be fleshed out. The rest each have a wrinkle of their own:

  • Reputation will need to be a bit cute with its logic as it will be going two-way with User
  • Price will need to account for sales
  • Tag is just going to be a cluster of filtering and spaghetti logic
  • Live Feed will require learning about streams and will probably involve buffers as well
  • Shopping Comparison is the big kahuna
  • Database is Keenon’s problem

So, yeah. We have two weeks left to finish the code and I have a small fraction of it done, but I feel that I’m sitting fairly pretty. There are two more trivial services, three that need to be thought through but don’t need any extra technologies, one that needs research but is relatively simple, and then the comparison. I can probably (hopefully) knock out everything but the comparison in a week and spend my remaining time banging my head against that probably-NP problem.



Woah Oh!

  October 29th, 2021

We’re halfway there.

[Image: “oh no” webcomic panel]

In my infinite wisdom I figured we had more than enough time even with slippage factored in, and that we’d be looking at stretch goals in a couple of weeks. This, as you may imagine, is not the case. Environment setup took much more time than it should have, and a host of other ways of losing hours have piled up. As it sits, I’ll be working through the weekend to get to where I really need to be and to do the documentation for the midpoint check-in.

This last week has had a lot of time-consuming work that isn’t very interesting to blog about. The biggest accomplishment was building a proper API document for all of the microservice routes. I had a CliffsNotes version from our planning sessions, but taking Cloud Development in parallel finally showed me what a real one looks like and why it should be that way. Lots of time sunk and some good discussion with teammates to hammer out details, but not really any lessons to share beyond an example of the format.

On the code front, I’ve been wallowing through the mire of integration testing. My first pass had the dummy logic for returning hardcoded responses matching the expected format directly in the Gateway service. Obviously that won’t do long term, so it was time to rip that logic out into the other services and get the redirect party started. Mocha/Chai unit testing went well. Too well. Despite having already struggled through silent false positives last week, a new crop showed up with the redirect routes. As it turns out, I had been missing the ‘return’ in front of the Promise-based tests I was writing. With that smoothed over, I am currently left with a heap of issues requiring more refactoring to get back on track.
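For posterity, the trap looks something like this – a minimal sketch with a placeholder promise-returning call standing in for the actual redirect tests:

const chai = require('chai');
chai.should();

// Stand-in for whatever promise-returning request the real tests make.
function somePromiseCall() {
    return Promise.resolve({ status: 200 });
}

describe('promise-based tests', function() {
    // BAD: Mocha treats this as a synchronous test and marks it passed before
    // the promise settles, so a failed assertion inside .then() never fails the test.
    it('silently passes no matter what', function() {
        somePromiseCall().then((res) => {
            res.status.should.equal(200);
        });
    });

    // GOOD: returning the promise makes Mocha wait for it, so a rejection or
    // failed assertion actually fails the test.
    it('fails properly when something is wrong', function() {
        return somePromiseCall().then((res) => {
            res.status.should.equal(200);
        });
    });
});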

[Image: another “oh no” webcomic panel]

So yeah, I get to spend the weekend beating my head against basic integration testing and documenting what little I have so far.



Regret Regret Regret

  October 22nd, 2021
[Image: the Prophet of Regret from Halo]

Regret is a strong word. I have regrets. Monoliths have their advantages, and I am rapidly and repeatedly finding them as I implement a microservice ecosystem for the first time. This week I set out to define the gateway API and dummy routes in the user service to experiment with multi-layer API calls. Then I sat down and realized that in order to talk to each other, the services each needed to know where the others were. I could hardcode URLs and rely on everything working perfectly and never changing; you already know why that’s a bad idea. I could have the gateway service pull double duty and have its address hardcoded into all the other services; this would disobey a core principle of microservice philosophy and not actually save any trouble versus doing it properly.

[Image: Winnie the Pooh “thinking” meme]

So I now have a new coordinator service whose address will be provided via config files to all the other microservices, and which will be queried to retrieve the URLs of the services of interest. I also got to learn how to create and use a custom Node module in order to keep to DRY principles; if every microservice is going to need to talk to the coordinator, then there needs to be a standardized block of code that I don’t have to worry about fixing everywhere any time it needs a fix.

Starting with the module, it turns out that the basic documentation is really all that was needed for the module-specific pieces, although what it doesn’t tell you is that publishing a private module requires a premium subscription. The module code was also pretty simple: private variables to store the needed URLs, setters for the coordinator URL and the other service URLs, and a two-layer getter for a service URL – returning it directly if it is already stored, and querying the coordinator if it isn’t. Testing is where the real fun was. I finally got to knock out some Nock, and I had to pull up a new tutorial on top of what I’ve linked in the past. Setting up the dummy routes wasn’t actually so bad, but interfacing them with Chai and a request module proved to be a hassle. Request itself has been deprecated, so I needed to sift through a decade of now out-of-date information; the built-in HTTPS module just did not give me any compatible output, so I settled on Axios, a module I’ve used in the past. Even then, it took a couple hours of trial and error with the Chai documentation to get the async tests to validate correctly without timing out.
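For reference, the module described above boils down to something like the sketch below. The names, route, and response shape are my reconstruction rather than the published code, but the moving parts – private URL storage, setters, and the two-layer getter that falls back to asking the coordinator via Axios – are the ones just mentioned:

const axios = require('axios');

// Private state, hidden in the module scope.
let coordinatorUrl = null;
const serviceUrls = {};

function setCoordinatorUrl(url) {
    coordinatorUrl = url;
}

function setServiceUrl(name, url) {
    serviceUrls[name] = url;
}

// Two-layer getter: hand back the cached URL if we already know it,
// otherwise ask the coordinator and cache whatever it returns.
async function getServiceUrl(name) {
    if (serviceUrls[name]) {
        return serviceUrls[name];
    }
    const res = await axios.get(coordinatorUrl + '/services/' + name);
    serviceUrls[name] = res.data[name];
    return serviceUrls[name];
}

module.exports = { setCoordinatorUrl, setServiceUrl, getServiceUrl };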

Then there was the matter of the coordinator service itself. Again, easy to code but a pain to figure out how to test correctly. There are just the getter and setter routes and their supporting model functions, but I still don’t have a handle on Chai. Several timeouts, errors, and false test passes later, I got to here:

describe('/POST service', function() {
    it('should return a 201 status with self address attached', function(done) {
        chai.request(app)
            .post('/services/' + serviceName)
            .end((err, res) => {
                // Registration should succeed and echo back the caller's address.
                res.should.have.status(201);
                let address = res.body[serviceName];
                // Strip the port first; 'ip:port' does not register as an IP in Chai.
                address.substr(0, address.indexOf(":")).should.be.an.ip;
                done();
            });
    });
});

The major lessons learned to make it work:

  • You can’t use an arrow function (() => {}) with Chai and keep things stable due to binding.
  • Without a done() an async function will time out
  • An address with a port number does not register as an IP address in Chai
  • Anything can be solved with judicious use of string manipulation
[Image: “If it looks stupid but works, it ain’t stupid”]

Once there was a module and a coordinator service to test with, it was time to make an experimental service that uses the module and talks to the coordinator. Again with the testing woes. I ended up refactoring the module to export a class rather than a collection of functions, which also solved the issue of assumed globality – each router file can safely have its own copy of the URL completer without much overhead. Once that was done, I was able to pretty much copy the (also refactored) tests from the module over into my experimental service. As of now I haven’t actually done an integration test for the services, so it might all need to be fixed again once I try running them together.
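The class version is essentially the earlier sketch wrapped in a constructor so each router file can build its own instance – again my guess at the shape rather than the real module, with the config property name being purely hypothetical:

const axios = require('axios');

class UrlCompleter {
    constructor(coordinatorUrl) {
        this.coordinatorUrl = coordinatorUrl;
        this.serviceUrls = {};
    }

    async getServiceUrl(name) {
        if (!this.serviceUrls[name]) {
            const res = await axios.get(this.coordinatorUrl + '/services/' + name);
            this.serviceUrls[name] = res.data[name];
        }
        return this.serviceUrls[name];
    }
}

module.exports = UrlCompleter;

// In a router file:
// const UrlCompleter = require('cssa-url-completer');
// const urls = new UrlCompleter(config.coordinatorUrl);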

Just when I thought I was done with my re-figured week, I got to learn a whole new thing: working with SSH keys in GitHub Actions. I’ll let you know how that all works out next week, but so far it’s been more ‘trying things and having them not work’.

[Image: headdesk]


Again and Again With the Again

  October 15th, 2021

We came into this week with our plan of action; my part was to set up the ten non-database microservices, complete with automated testing, permissions for the other team members, and branch protection to force testing before risking breaking changes. My stretch goal was to get some user API routes mocked up, but research and the rote repetition of getting the repos set up took up the week.

I already knew that I had ten microservices to set up, so my first step was to find as many ways to streamline the process as possible. Ideally I could set up a single repo and then copy it over and over, just needing to change the name before doing anything repo-specific. Unfortunately it turned out that while there is a lovely pathway for copying the code as I had hoped, branch protections and user access had to be set up manually for each individual repo, and those were going to be the major time sink anyway.

For anyone following along at home, the quick trick to clone a repo repeatedly is to check the ‘template repository’ box in the main settings screen, and then a big friendly green button will appear on the repo home page so that you may copy it to your heart’s content.

There’s a checkbox under the repository name box when you first open settings.
This will be in the upper right where ‘code’ usually is.

Since I sort of started at the end there, I’ll continue working back to my start. Adding my teammates to each repo was easy but time consuming, as there is no way I could find to automate adding them. Theoretically we could have made an organization to own all the project repositories, but we would then need to pay for an account, since we are using pro features through our student accounts. I also had to check off my desired branch protection rules for each and every repository, which was the largest and most boring time sink in this whole thing. I also needed to make sure to put the rules in place AFTER I was done fixing lines from the template to the actual repo name, so that I didn’t have to annoy my teammates with approvals for all the minor changes.

The ‘Add Rule’ button will be in line with the ‘Branch protection rules’ header regardless of if you already have any rules in place or not.
You can get very fancy with branch name patterns, but for my case targeting only ‘main’ is enough.

Since I only had the barest skeleton of code, I only needed to fix the README and a few lines in the Node package files to point at the specific repository instead of the template repo. Monotonous, but not as bad as needing to fix a bad assumption I made when copying all the repos: I elected to copy both the main and dev branches I had set up, assuming settings and history would transfer. No. I had to manually set main as the upstream for each and every dev branch and force a push to overwrite the non-history with the new configuration. Again, not bad except for having to do it ten times, especially after messing up the order of operations and needing to wait for a teammate to approve a fix.

Monotony aside, the larger and more satisfying part of this whole experience was researching and actually configuring the template project. I started with the gateway, as it’s the only outside-facing part of the microservice ecosystem, and if nothing else exists it can still feature dummy API calls to pretend there’s more of a backend behind it. Not that I have actually put in any of those dummy routes yet. One of the tutorials I followed included an excellent placeholder route that let me run a generic test to ensure my testing suite was actually working: an about page pulling the repo name and version from the package JSON. From there, I was able to set up Mocha and Chai to compare the data in the about page to the package JSON and prove both that the data was being passed successfully and that the test was testing things. I also have Nock installed and ready to intercept calls to other APIs, but that’s a problem for future Robert for now.
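The placeholder route and its test boil down to something like this – a sketch of the idea rather than the tutorial’s exact code, with the Express setup and file layout here being stand-ins:

// app.js – the placeholder about route.
const express = require('express');
const pkg = require('./package.json');

const app = express();

app.get('/about', (req, res) => {
    // Echo the repo name and version straight out of package.json.
    res.json({ name: pkg.name, version: pkg.version });
});

module.exports = app;

And the matching Chai test, which proves both that the route works and that the test suite is actually testing something:

const chai = require('chai');
const chaiHttp = require('chai-http');
const app = require('../app');
const pkg = require('../package.json');

chai.use(chaiHttp);
chai.should();

describe('GET /about', function() {
    it('matches the package name and version', function(done) {
        chai.request(app)
            .get('/about')
            .end((err, res) => {
                res.should.have.status(200);
                res.body.name.should.equal(pkg.name);
                res.body.version.should.equal(pkg.version);
                done();
            });
    });
});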

I learned a lot this week and had a large share of frustrations, but overall I’m happy with what I got done and am looking forward to actually getting some routes going next week.



And So It Begins

  October 8th, 2021

Group made, project chosen, team standards made; time to get going. For those just joining us, we are making an app that is supposed to be ‘the Waze for shopping’, in which users will generate price data for groceries and then use that data to find the best deals for their shopping lists. As with all things, it is much easier to summarize than to actually figure out the details. The work broke up into frontend wireframes, backend microservice ecosystem, and database structure to create the diagrams and design documentation; I took the backend. Behold:

Worry not, I can explain. Directional arrows show what will be calling what. The titled boxes are all microservices. At the top is the frontend app, which is only aware of the gateway service. The gateway service provides an API which grants access to, and obfuscates the details of, all of the other microservices. The database microservice provides an API so that no other microservice needs to know anything about how data is stored, just that its data of interest can be accessed via the API. That whole mess between them will take a bit more.

Let me explain... No, there is too much. Let me sum up.
  • User Management Service
    • Provides CRUD for user entities on the database
    • Login authentication
    • Updates include lists and reputation
    • Stretch goal is to make a full user profile
  • Item Management Service
    • Provides CRUD for item entities on the database
    • Stretch goal is to be able to provide substitution suggestions
  • Price Management Service
    • Provides CRUD for price entities on the database
    • Stretch goal is to ask users to verify prices still stand, part of reputation upgrade
  • Tag Management Service
    • Provides CRUD for tag entities on the database
    • Stretch goal is to have tags apply to stores as well as to items
    • Stretch goal is to be able to vote on tag applicability
  • Store Management Service
    • Provides CRUD for store entities on the database
    • Also used by the shopping comparison to get store inventory
  • Review Management Service
    • Provides CRUD for review entities on the database
    • Stretch goal is to add voting on review helpfulness
  • Reputation Management Service
    • Provides tools to alter user reputation
    • Extremely barebones for MVP, only increments reputation for activity
    • Stretch goals make it more of an ‘event’ system, allowing penalties and revocation for bad actors
  • Shopping List Management Service
    • Provides CRUD for shopping list entities on the database
    • Stretch goal is to allow a shopping list to be shared between users
    • Stretch goal is to be able to save a shopping list as recurring
  • Shopping Comparison Service
    • The real meat of the application, this one only has a single API call exposed which causes a whole heap of trouble
    • Takes in a shopping list and limits on what stores to search
    • Accesses all stores which are within limits, checking for items on shopping list
    • Returns array of stores with all items that are on the list, sorted by number of item matches and then by savings
  • Live Feed Service
    • Checks database for updates to all the tables to apply to a subscribable stream

So, there’s a pile of services which each provide CRUD for a single table in the database plus a couple of extra functions related to their relationships, the live feed service that provides a stream based on updates to all the tables, and the pièce de résistance – transforming all that delicious data into recommendations of where to shop to save money. The MVP is actually pretty simple other than the comparison, so I’m hoping we can tear through at least the low-hanging-fruit stretch goals. We’ll see how well time bloat is avoided.
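To make the comparison concrete, the ranking step could be as blunt as the sketch below. The data shapes are entirely made up, and the real service will have to go through the Store service for inventory rather than having prices handed to it, but it shows the ‘most matches first, then cheapest’ ordering:

// shoppingList: [{ id, name }], stores: [{ id, name, prices: { itemId: price } }]
function rankStores(shoppingList, stores) {
    return stores
        .map((store) => {
            // Which list items does this store actually carry?
            const matches = shoppingList.filter((item) => store.prices[item.id] !== undefined);
            const total = matches.reduce((sum, item) => sum + store.prices[item.id], 0);
            return { store: store.name, matchCount: matches.length, total };
        })
        // Most matched items first; break ties with the cheaper basket.
        .sort((a, b) => b.matchCount - a.matchCount || a.total - b.total);
}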



Introduction and Salutations

  September 29th, 2021

I’ve been playing with computers for as long as I can remember, starting with a cast-off Macintosh in my dad’s office. Way too much of that time was spent on videogames, which led to me looking into mods and making my own when I was a teen. That led to my first pass at computer science; unfortunately I was a classic ‘gifted’ case and fell pretty hard on my face when I reached college-level classes. I switched over to food science and got my act together to actually graduate.

I ended up landing in brewing, which turns out to be much like the videogame industry: long hours, hard work, and pay below scale for the skill set. I stuck with it because I did enjoy it and was in a pretty special position. I was the only distiller for a mid-size brewery that had plans to expand the distillery into a full department, placing me in management once that happened. Once my wife and I started planning for children, I gave up on that promotion ever happening and got started on this program.

I’m not entirely sure what I want to do beyond ‘work’ once I graduate. I’ve specialized in app development, with Mobile, Networks, and Cloud as my electives, since I have some personal project ideas and they are conducive to building a strong public-facing portfolio that I hope will help in my job hunt. However, I have a soft spot for VR, and my ‘white whale’ that is far beyond anything I think I can ever manage is something akin to a virtual historical village smashed together with Wikipedia: being able to examine details in a scene and pull up deeper information, or roll time forward and back to see changes through history.

My team chose the Crowd-Sourced Shopping Project for practicality and interest; it’s a good idea for an app we think is worthwhile. The runners-up were Farm Match and Regeneration Central, both passed over for the same reason: while they are very good ideas I would love to see exist, the real-world proposer opened up more logistical complexity than we wanted to manage for a class project. I want to help technology help people, and apps that connect businesses with employees and support resources are right up that alley.

Outside of school, I am currently hacking together a home automation system. I’m stubbornly refusing to use any of the main brand products, not out of privacy concerns but because of the required connectivity. I prefer to have local control with remote backup, as opposed to how Alexa and Google Assistant work, with remote servers controlling the core logic. They aren’t likely to go anywhere and connectivity issues aren’t common, but it irks me. I’m also working on a portfolio website to link from my resume, building the same design in multiple architectures.

I’m looking forward to taking this course with y’all; I plan to read some other blogs, and hopefully someone wanders in here to follow along with me.
