Source: flickr.com via Free-Choice on Pinterest

Mark and I cleaned out the A/V closet in the auditorium this morning.  We uncovered a stash of broken slide projectors dating back several decades.  The picture above shows a partial sample of what we found, which included bulbs, lenses, carousels, carrying cases and at least one IR remote.

I suspect that the State of Oregon maintains a secret Bureau of Antiquated Projectors (BAP), which is dedicated to filling every available storage space with 1970s hardware (with fake wood paneling, when available).  There’s one in the closet, just in case.  There are a few upstairs awaiting surplus inventory.  Maureen requisitioned one for office use.  There’s probably a stack of them hiding in that first bathroom stall that nobody ever uses because everybody thinks it’s the one everybody else uses.

I actually like slides, film and paper because they have a permanence that digital formats cannot yet replace.  We can still see, touch and read documents (preserved in stone, paper or rope) from thousands of years ago.  Sumerian government memos.  Inca agricultural inventories.  Personal correspondence from the Roman Empire.  Ancient Egyptian jokes.

Will future generations know us by our projectors?  Call the BAP.

Inventory of my left pocket at day’s end:

- 1 shock- and water-resistant cell phone
- 1 small Post-It note reading as follows:
    112
    140
    170
- 1 large Post-It note reading as follows:
    xxx.xxx.xxx.xxx [Internal IP address]
    20:32 (THROAT CLEAR)
    19:41 (MMHMM)
- 1 printed to-do list from Mark with seven items (three crossed off)
- 1 scrap of paper with the letters “MTS” written on it

I’ll leave it to you to figure out what all these things might mean. In summary, we had a lot to do today. In addition to the tasks that left written records on my person, we also set up cameras and prepared the website to go public. We still have much to do.

Most of our cameras can receive power over Ethernet.  However, we’re connecting them to the network wirelessly, which means there’s no Ethernet cable to carry power, so each one has to plug into a power outlet instead.  Given where our outlets are placed, this is still the preferable arrangement.

I suggested making the cameras completely wireless by installing an immense Tesla coil in the auditorium. Sadly, my colleagues did not express confidence that a looming chrome monstrosity surrounded by roaring blue lightning would make science more approachable to the general public.

 

We were all happy to see that our April Fool’s video had received more than 5,000 hits on YouTube as of this morning.  Thanks for all the great comments, and thanks to PZ Myers for giving us a nod.  I was amazed by the relative scarcity of comments from those whom Dave Barry calls the “humor impaired.”  As of this posting, one insightful commenter has managed to identify the video as “totally fake,” perhaps aided by the title card at the end stating as much.  We would’ve gotten away with it too, if it weren’t for you meddling kids!

Shawn, Mark, Laura and I met this morning to discuss camera placement.  The Visitor Center D&D campaign map came into play, with pennies representing camera mounts.  Now that we’ve figured out the field of view and other pertinent characteristics for our cameras, it’s a matter of fine-tuning our coverage and figuring out which camera works best in which location.

Associating video and audio is another issue.  One approach would be to automatically associate the audio feed from each microphone directly with the camera(s) that cover(s) the same cell(s) in our Visitor Center grid.  Another would be to present each audio and video feed separately, allowing researchers to easily review any audio feed in conjunction with any video feed.  What qualifies as “intuitive” can be highly variable.
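The first approach boils down to a simple overlap lookup between coverage maps.  Here’s a minimal sketch of the idea; the cell labels, camera names and microphone names are all invented for illustration, not our actual layout.

```python
# Hypothetical coverage maps: which grid cells each camera and each
# microphone covers. Names and cells are made up for this example.
camera_coverage = {
    "cam_entry": {"A1", "A2", "B1"},
    "cam_tank":  {"B2", "B3"},
    "cam_touch": {"A2", "B2"},
}

mic_coverage = {
    "mic_entry": {"A1", "B1"},
    "mic_tank":  {"B2"},
}

def cameras_for_mic(mic):
    """Return the cameras whose coverage overlaps this mic's cells."""
    cells = mic_coverage[mic]
    return sorted(cam for cam, cov in camera_coverage.items() if cov & cells)
```

With maps like these, `cameras_for_mic("mic_tank")` would automatically pair that microphone with every camera watching cell B2, which is exactly the kind of default association the first approach would give researchers out of the box.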

Hypothetically, my initial response to a fresh mound of audio/video data would be to visually scan audio tracks for activity, then flip through the videos to see what was going on at those times.  In any case, our software should be versatile enough to accommodate a range of approaches.
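That “scan the audio for activity” step could be as simple as flagging stretches of high short-window energy.  A toy sketch, with an arbitrary window size and threshold rather than anything from our actual setup:

```python
def active_windows(samples, window=4, threshold=0.5):
    """Return start indices of windows whose mean squared amplitude
    exceeds the threshold -- i.e., where something audible happened."""
    hits = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        energy = sum(s * s for s in chunk) / window
        if energy > threshold:
            hits.append(start)
    return hits

quiet = [0.01, -0.02, 0.03, -0.01]
loud = [0.9, -1.1, 1.0, -0.8]
print(active_windows(quiet + loud + quiet))  # flags only the loud stretch
```

The indices that come back are the timestamps worth flipping through on video, which is all the versatility this workflow really demands of the software.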

 

We have a new project to announce, and we’re all really excited about it.  By modifying the parameters of our face-detection software and running the Octocam through it, we can translate changes in Pearl’s pupil dilation, posture, color and texture into synthesized human speech.

It’s been in the works for a while, but we didn’t want to leak any details until we had something solid to report.  As far as I know, nobody has attempted anything like this before with an invertebrate.  The results so far have been very intriguing.  You can watch the video here.

 

Ladies and gentlemen, I present for your consideration an example of our signature rapid prototyping process. The handyman’s secret weapon gets a lot of use around here, and I even had a roll of Gorilla Tape on my wrist in case of emergencies.  Fortunately, it didn’t come to that.

The angles necessary for good face detection and recognition (up to about 15 degrees from straight-on) require careful consideration of camera placement.  The necessary process of checking angles and lighting isn’t always pretty, but I, for one, find the above image beautiful.
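That ~15-degree limit translates directly into a placement constraint.  A back-of-the-envelope sketch, assuming the limit above and some made-up distances (not our actual floor plan): how high above face level can a camera sit and still see a visitor at a given distance within the limit?

```python
import math

MAX_ANGLE_DEG = 15  # rough face-detection limit from straight-on

def max_height_offset(distance_m):
    """Max vertical offset (meters) between camera and face that keeps
    the viewing angle within MAX_ANGLE_DEG at the given distance."""
    return distance_m * math.tan(math.radians(MAX_ANGLE_DEG))

for d in (2, 4, 6):
    print(f"{d} m away: up to {max_height_offset(d):.2f} m above face level")
```

Since tan(15°) is roughly 0.27, a visitor two meters away gives us only about half a meter of vertical slack, which is why the angle-checking process takes so many iterations of duct tape.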