We have a new project to announce, and we’re all really excited about it. By modifying the parameters of our face-detection software and running the Octocam feed through it, we can translate changes in Pearl’s pupil dilation, posture, color, and texture into synthesized human speech.
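In rough outline, a pipeline like that has two stages: measure a handful of features from each frame, then map the feature vector to an utterance. The post doesn’t describe the actual implementation, so the sketch below is purely hypothetical — every function name, feature scale, and vocabulary entry is invented for illustration. It uses a nearest-neighbor lookup against a tiny vocabulary as a crude stand-in for real speech synthesis.

```python
# Hypothetical sketch of the kind of pipeline described above.
# None of these names or values come from the actual project.

def extract_features(frame):
    """Stand-in for the modified face-detection pass over one Octocam frame.
    Here `frame` is already a dict of raw measurements; a real system would
    compute these from pixels."""
    return {
        "pupil_dilation": frame.get("pupil_dilation", 0.0),  # assumed 0.0-1.0
        "posture": frame.get("posture", 0.0),
        "color": frame.get("color", 0.0),
        "texture": frame.get("texture", 0.0),
    }

def features_to_utterance(features, vocabulary):
    """Pick the vocabulary entry whose feature profile is closest
    (squared Euclidean distance) to the measured features."""
    def distance(entry):
        return sum((features[k] - entry["features"][k]) ** 2 for k in features)
    return min(vocabulary, key=distance)["word"]

# Invented two-word vocabulary, just to make the mapping concrete.
vocabulary = [
    {"word": "calm",
     "features": {"pupil_dilation": 0.2, "posture": 0.1, "color": 0.2, "texture": 0.1}},
    {"word": "curious",
     "features": {"pupil_dilation": 0.8, "posture": 0.5, "color": 0.4, "texture": 0.3}},
]

frame = {"pupil_dilation": 0.75, "posture": 0.5, "color": 0.5, "texture": 0.3}
print(features_to_utterance(extract_features(frame), vocabulary))  # → curious
```

A real system would presumably learn the feature-to-speech mapping rather than hand-code it, but the lookup above shows the basic shape of “translating” continuous body signals into discrete words.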

It’s been in the works for a while, but we didn’t want to leak any details until we had something solid to report. As far as I know, nobody has attempted anything like this before with an invertebrate. The results so far have been very intriguing. You can watch the video here.
