Eighth Blog Post Term 2

This week I have been spending a lot of time learning how to create a complex loss function and display different metrics from TensorFlow. Building a GUI to display the output labels from the NN was difficult because there is quite a bit of backend to TensorFlow that changes what gets passed into and out of the things defined by the user. In the end, I finally managed to get a bunch of different metrics displayed every batch on a basic tkinter GUI, along with image representations of the actual and predicted labels.
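The per-batch display boils down to hooking the end of every training batch and pushing the metric values into the GUI. A minimal sketch of that pattern, with caveats: the real version subclasses `tf.keras.callbacks.Callback` and writes into a tkinter label, but here the class name and `render` helper are my own illustrative stand-ins, kept TensorFlow- and tkinter-free so the formatting logic runs on its own:

```python
# Sketch of a per-batch metrics display. In a real Keras setup this class
# would subclass tf.keras.callbacks.Callback, and render() would feed a
# tkinter Label; both are stubbed out here so the example is self-contained.

class BatchMetricsDisplay:
    """Collects per-batch metric values and renders them as a display string."""

    def __init__(self):
        self.history = []  # one dict of metric name -> value per batch

    def on_train_batch_end(self, batch, logs=None):
        # Keras passes the current metric values in `logs` after every batch.
        self.history.append(dict(logs or {}))

    def render(self):
        # Build the text a GUI widget would show for the most recent batch.
        if not self.history:
            return "waiting for first batch..."
        latest = self.history[-1]
        return " | ".join(f"{name}: {value:.4f}" for name, value in latest.items())


display = BatchMetricsDisplay()
display.on_train_batch_end(0, {"loss": 0.8532, "accuracy": 0.41})
display.on_train_batch_end(1, {"loss": 0.7210, "accuracy": 0.48})
print(display.render())  # prints "loss: 0.7210 | accuracy: 0.4800"
```

In the real callback, `render()`'s output would be pushed to the tkinter label followed by a call to the window's `update_idletasks()` so the GUI repaints between batches.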

With more information being displayed for the network, it was easier to understand why my complex loss function wasn't working. After spending ~15 hours experimenting with different aspects of the network, I finally figured out that I was using a softmax activation on my output layer like an idiot — softmax forces all the outputs to sum to 1, which makes no sense when the indices are independent quantities rather than competing classes. With all the experience I had gained messing with the loss functionality, I quickly changed things around so the model learns the different aspects of the output labels. This link leads to a video showing the network learning in real time:
https://www.youtube.com/watch?v=evwahPfXUXg&t=340s
The first index is the object probability: if that index is 0, there is no object in the frame, which is why the difference image (the leftmost one) still has dots appearing at the end.
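To make the label layout concrete, here is a hedged NumPy sketch of the kind of per-part loss I mean. The structure (objectness at index 0, regression targets after it) matches my labels, but the function name, the use of squared error for the regression part, and the equal weighting are illustrative choices, not the exact loss in my network:

```python
import numpy as np

def sigmoid(x):
    # Independent probability per output — unlike softmax, one index
    # going up does not force the others down.
    return 1.0 / (1.0 + np.exp(-x))

def detection_loss(logits, labels):
    """Illustrative multi-part loss.

    logits, labels: arrays of shape (batch, n_outputs), where index 0 is
    the object probability and the remaining indices are regression
    targets (e.g. object position).
    """
    eps = 1e-7  # avoid log(0)
    obj_pred = sigmoid(logits[:, 0])
    obj_true = labels[:, 0]
    # Binary cross-entropy on the objectness index.
    obj_loss = -(obj_true * np.log(obj_pred + eps)
                 + (1 - obj_true) * np.log(1 - obj_pred + eps))
    # Squared error on the remaining indices, masked by objectness so
    # frames with no object don't drag the regression terms around.
    reg_loss = obj_true * np.sum((logits[:, 1:] - labels[:, 1:]) ** 2, axis=1)
    return float(np.mean(obj_loss + reg_loss))

# One frame with an object at (0.5, 0.5), one empty frame.
labels = np.array([[1.0, 0.5, 0.5],
                   [0.0, 0.0, 0.0]])
logits = np.array([[2.0, 0.4, 0.6],
                   [-2.0, 0.0, 0.0]])
print(detection_loss(logits, labels))
```

The masking term is also why the difference image still shows dots when the object probability is 0: the network is free to emit arbitrary values at the other indices for empty frames, because nothing in the loss penalizes them there.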