This week flew by and there was a lot to do. Our group finally met with our sponsor, and the project sounds really interesting. We’re working on a machine learning algorithm that uses a LiDAR camera, capable of sensing distance, to recognize and interpret sign language. This project seems like it will be a challenge since machine learning is not something I’m very familiar with. I’ve also never integrated a LiDAR camera into code, but I’m excited to experiment with it.
The project is very open-ended, which is both a good and a bad thing. We’re given a lot of freedom in where we’d like to take the project. Another capstone group started it last year and is passing it on to us. If we choose to, we could redesign the project from scratch: use different tensor libraries or languages, move computation to the cloud, or enable it to interpret video rather than just still images. The downside, though, is that the direction and requirements of the project are not very well defined. As a group we’ll have to discuss what we want to accomplish and what we’re actually able to accomplish in our time frame. Estimating how long a project will take, even one of much smaller scale, is always difficult, so planning out nine months is going to be a challenge.
I think this project is really cool and a fun, challenging problem, but I’m a little confused about the actual usefulness or application of the system. It’s supposed to be assistive for people who have difficulty communicating by speech, but it’s primarily deaf people who communicate through sign language… and deaf people can still see, which means they could watch someone signing and understand it, or simply read. So I’m imagining a deaf person spending over $300 on a LiDAR camera, setting up our project and running the code, signing into the camera, and then reading what they just signed on their computer monitor. I can understand speech recognition, because there’s a lot of audio-based media and information that’s inaccessible to people with impaired hearing, but I’m struggling to see the use case here. It doesn’t matter much to me though, since I’m not trying to make any money off of it and I’m not paying for any of the equipment either. If someone wants to hand me a fancy camera and let me teach myself machine learning, I’m all over it.