Today I am taking a departure from picture-based storytelling to focus on something that has been on my mind recently: authentication and authorization in distributed environments.
As a little backstory, for the OSU capstone I am working on moving a desktop application to the cloud. It has only been ten-odd days since we gained access to the code base, but a lot of progress has been made on what initially appeared rather hopeless.
With some of the pieces starting to come together, many of the more nuanced problems begin to take focus.
First, our user-facing software is going to be an SPA authenticated on the backend through an OAuth provider. The current design also requires a file storage mechanism that the user can upload to and read from.
My initial concept was to build out a separate REST API, connected to a third-party file storage service, that the user could communicate with directly, but this had some shortcomings:
- Making the REST API publicly available to the SPA raises trust issues. Even though we authenticated the user from the web client's point of view, how can we be sure the user is who they claim to be without some sort of session mechanism? And beyond that, how can we source identifying information for authorizing later retrievals?
- Alternatively, the REST API could be private and accessed through the web client's backend. The unfortunate downside is that files passed to the storage provider (S3) would need to travel through an extra server on the way there. Although I haven't verified this, large file transfers would likely suffer from the added round-trip time and network congestion in this scenario.
- The method I went with is a bit boring, but it gets rid of both of the previous problems: serving the SPA and talking to S3 from the same backend.
Meanwhile, another thought came to mind. Could we solve our trust issues by adding another service? We know we don't trust the SPA, but we do trust our web client's backend. Could an expiring token store sitting between the file storage service and the web client give us a little peace of mind?
Here is how it would work. Instead of going directly to the REST service, the user would first send a request to the web client backend asking for one-time access to the file storage service.
The backend has seen them before, as they have been authenticated through a third-party OAuth provider and bear some form of JWT. With this in mind, the web client backend generates a random state token.
Before handing the token to the client, the backend submits it alongside identifying information about the user (a unique claim from the JWT) to the expiring token store service.
The user receives their temporary state token and then requests file transfer through the REST server.
Before granting approval, the REST server queries the expiring token store for a matching state token. If one is found, the transfer is approved and the file is streamed to the storage provider, keyed by the JWT identifying information and the filename.
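The whole flow above can be sketched in a few lines. This is a toy illustration, not our actual implementation: the function names (`issueStateToken`, `consumeStateToken`), the one-minute TTL, and the in-memory `Map` are all my assumptions here; a real deployment would back the store with something like Redis and its key expiry rather than process memory.

```typescript
import { randomBytes } from "crypto";

// What the expiring store keeps per token. In production this would more
// likely be a Redis entry with a TTL, not an in-memory Map.
interface TokenRecord {
  subject: string;   // identifying claim lifted from the user's JWT
  expiresAt: number; // epoch millis after which the token is invalid
}

const TOKEN_TTL_MS = 60_000; // assumed one-minute lifetime
const store = new Map<string, TokenRecord>();

// Web client backend: mint an unguessable state token and register it in
// the expiring store alongside the user's identifying information.
function issueStateToken(jwtSubject: string): string {
  const token = randomBytes(32).toString("hex");
  store.set(token, { subject: jwtSubject, expiresAt: Date.now() + TOKEN_TTL_MS });
  return token;
}

// REST server: look the presented token up, burn it so it is strictly
// one-time, and reject anything unknown or expired. On success, return the
// subject, which keys the stored file together with the filename.
function consumeStateToken(token: string): string | null {
  const record = store.get(token);
  if (!record) return null;          // never issued, or already used
  store.delete(token);               // one-time use: burn it either way
  if (Date.now() > record.expiresAt) return null; // issued but too old
  return record.subject;
}
```

Deleting the token on first lookup is what makes the grant one-time: even if the SPA leaks it after use, a replay finds nothing in the store.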
There may be shortcomings to this approach (perhaps you could share some), but it has been on my mind recently.
To wrap things up, separating out services can make components easier to understand and systems more transparent. There is, however, a balance to strike: component separation must not lead to a poor compute-to-communication ratio or to security issues.
Authorization concerns have the potential to multiply if the wrong design choices are made, and we must strive to reduce the attack surface of our applications by limiting both public interfaces and possible system states. Only by reducing the complexity of our applications and their interconnections can we hope to produce things with fewer flaws.
If you made it this far, I applaud your patronage. Until next week!