The project is now well underway! Yay! Travis has been working hard on the backend: the image recognition and the REST API. Meanwhile, I have put my nose to the grindstone on the frontend and UI. We’ve been staying in touch by meeting on Zoom twice a week, which has really helped keep us on the same page. This Monday, we met to discuss Travis’s REST endpoints and how I could use them to hook the frontend up to the backend. He gave me a little demo in Postman, which helped a lot.
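To give a flavor of what hooking up to the endpoints looks like (the `/classify` path and the request shape here are my own placeholders, not Travis's actual API), the frontend basically just builds a POST request and fires it off with fetch:

```typescript
// Hypothetical sketch of building a request to the backend.
// The endpoint path ("/classify") and payload shape are placeholders,
// not the real API.
interface ClassifyRequest {
  imageBase64: string; // the photo, base64-encoded
}

interface BuiltRequest {
  url: string;
  init: { method: string; headers: Record<string, string>; body: string };
}

function buildClassifyRequest(baseUrl: string, body: ClassifyRequest): BuiltRequest {
  return {
    url: `${baseUrl}/classify`,
    init: {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    },
  };
}

// Usage: const req = buildClassifyRequest(API_BASE, { imageBase64 });
//        fetch(req.url, req.init).then(res => res.json());
```

Keeping the request-building separate from the actual fetch call made it easy to poke at in isolation while comparing against the Postman demo.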
I think the UI is really coming together. I started with a very simple, barebones Home screen and Results screen, and I have been styling them here and there at every chance I get. When the Home and Results screens started out, they were just a few lines of text. Now, here are the work-in-progress screens:
The colors and placement of things are likely to change, but a lot of the styling/model legwork is there. Getting the image into a circular frame was not as easy as it might seem, but luckily it is a pretty common task.
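For anyone curious, the usual trick (assuming a web frontend; the sizes below are made up for illustration) is to clip a square container with `border-radius: 50%` and let the image cover it:

```typescript
// Hypothetical sketch of the circular-frame trick for a web UI.
// A square container clipped with border-radius: 50% becomes a circle;
// the image inside covers the container so the photo isn't stretched.
function circularFrameStyles(sizePx: number): {
  container: Record<string, string>;
  image: Record<string, string>;
} {
  return {
    container: {
      width: `${sizePx}px`,
      height: `${sizePx}px`, // equal width and height, so the clip is a circle
      borderRadius: "50%",   // rounds the square into a circle
      overflow: "hidden",    // hides the image corners outside the circle
    },
    image: {
      width: "100%",
      height: "100%",
      objectFit: "cover",    // fills the frame without distorting the photo
    },
  };
}
```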
What has really helped me during this development phase is having so much of it planned out before diving into the code. The project management software we’re using is Jira, which helps organize our entire backlog of tasks into sprints. It has helped me stay on task and know what I’m supposed to be doing. I tend to tunnel-focus on one task, but using the project management software really breaks me out of that. It is a constant reminder that I STILL NEED TO RESEARCH NFC.
As far as code management goes, I have been committing changes regularly on the days I work on the app. At the end of this sprint (this weekend), I will merge my changes into the main branch and have Travis do a code review to get his thoughts on its readability and efficiency.
Fingers crossed, but the future looks bright for this project. The only thing that might be a significant roadblock is if implementing NFC comes with a steep learning curve. My limited research suggests it will not, but you never know what might come up with new things.
Travis has done a splendid job on his side of things, and the backend is really smart. He is using an AWS ML model to classify the images sent to it, which will be far more robust than anything we could build from scratch. With the time this saves, we will be able to improve the app further.
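On the frontend side, all I really have to do with the model's answer is pick the most confident label (the label/confidence shape below is my guess at what a classification response looks like, not the backend's real format):

```typescript
// Hypothetical sketch: choosing the best label from a classification response.
// The Prediction shape is an assumption about what the backend returns.
interface Prediction {
  label: string;
  confidence: number; // 0.0 to 1.0
}

function topPrediction(preds: Prediction[], minConfidence = 0.5): Prediction | null {
  let best: Prediction | null = null;
  for (const p of preds) {
    if (best === null || p.confidence > best.confidence) {
      best = p;
    }
  }
  // Treat low-confidence answers as "no result" so the UI can ask for a retake.
  return best !== null && best.confidence >= minConfidence ? best : null;
}
```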
The frontend is mostly hooked up to the backend now, and we can hopefully get to some stretch goals, like implementing QR codes and barcode scanning, after tightening up the rest of the code. But for now, I have to get back to styling and handling edge cases in the app!
Thank you for reading. Eat your veggies and get some good sleep.