As classes have begun to settle into their course content, introducing material and preparing students for remote teaching, I have been reminded of how incredibly rough the pacing of school can be. My Canvas calendar appears to grow with due dates as I preview what is to come. Nonetheless, the work must continue!
This week I want to talk about some of my preliminary findings on the HoloLens that I discussed in last week's post. I haven't dug fully into the specific documentation yet, since we haven't confirmed whether we will be using a HoloLens 1 or 2. I have, though, seen many of the capabilities of the HoloLens 2, and I am getting even more hopeful that we can get our –> hands <– on one!
There are many differences, such as being able to use both of your hands to perform gestures. There is eye tracking now, along with many more gestures to work with. I've also noticed multiple comment threads saying it is much more comfortable to wear, which sounds important for the client at the end of the day.
While I would love to just give a huge list of the differences, I should give some context for the ideas I have for our project should we get a HoloLens 2. Primarily, they have to do with the upgraded gestures. Being able to grab and manipulate the projections we make will be incredibly useful, since over time the user may develop a preference for where they want their projections displayed. I always find myself trying to make things more comfortable whenever I get into a new environment; I'm still making adjustments to the IDEs I use and even to my own room layout. We all find a groove or specific layout we like over time, and giving the client the option to set up specific layouts, or even sizes, for their testing environment sounds super useful.
Ergonomics aside, the differences in CPU, RAM (4 GB vs. 2 GB), and improved Wi-Fi capabilities will help ensure that real-time updates are possible, letting the user change tests or experiments at a moment's notice. Maybe they notice odd behavior they want to inspect before the strain on the material potentially gets too heavy and breaks it. Ensuring there is no lag from the connection and no slowdown from the software itself seems paramount to what the user wants to achieve with this device.
I am not sure how demanding it will be on the processor and Wi-Fi (consistency, at least, in terms of Wi-Fi) to do what we want, since we haven't been able to talk to the user directly to learn what the testing conditions are. For all we know, there could be an incredibly large number of strain gauges in use, which would mean changing how we approach modeling the data in general.
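To make that scaling concern a bit more concrete, here is a minimal Python sketch of the kind of per-gauge check the headset (or a companion service) might run on each batch of readings. The gauge IDs, units, and safety threshold are all my own assumptions for illustration, not details from the project.

```python
# Hypothetical sketch: gauge names, units (microstrain), and the limit
# below are assumptions, not values from the actual experiment.

def flag_overstrained(readings, limit_microstrain=1500.0):
    """Return the sorted gauge IDs whose latest reading exceeds the limit.

    `readings` maps a gauge ID to its most recent value in microstrain.
    """
    return sorted(gid for gid, value in readings.items()
                  if value > limit_microstrain)

# Example: three gauges, one past the (assumed) safety limit.
latest = {"gauge_01": 420.5, "gauge_02": 1620.0, "gauge_03": 980.3}
print(flag_overstrained(latest))  # → ['gauge_02']
```

Even a simple filter like this would have to run on every incoming batch, so the cost grows with the number of gauges, which is exactly why the gauge count matters for the processor and network budget.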
Otherwise, I am feeling quite content this week in terms of what I have accomplished. I learned a bit about the differences between developing in Unity and Unreal Engine, and while Unreal Engine comes off as more difficult (the general consensus), I find it very appealing in terms of capability and technology.
For next week, we will have our first follow-up meeting, and as far as I know we are waterfalling this project, but I hope we get to do some hands-on programming and testing soon. So I will most likely cover more of my individual research into one of the two engines, or ramble about class assignments and secretly reference my upcoming D&D campaign.