It was time to buy more cameras, so Mark and I went to our observation booth and wrestled with what to buy. We had two variables, making four camera types: dome (zoomable) vs. brick (non-zoomable), and low-res (640×480) vs. high-res (widescreen). Mark had four issues: 1) some places have no power access, so those angles required high-res brick cameras (strangely, running without plug-in power turns out to be a high-res feature!); 2) some of our “interaction” footage (i.e. close-up exhibit observations) looked fine at low-res, but some looked bad; 3) lighting varies from area to area, and sometimes within a single camera view (high-res handles this dynamic lighting better); and 4) the current position and/or view of the cameras wasn’t always as good as we’d first thought. This, we agreed, was a pretty sticky and annoying problem to solve before making our next purchase.

Mark was planning to buy 12 cameras and wanted to know what mix of brick/dome and high/low-res we needed, keeping in mind that the high-res cameras cost about $200 more each. We kept looking at the 25 current views, and each seemed to have a different issue, or really a different combination of the four. So we went back and forth over a bunch of the current cameras, trying to decide which were fine, which needed high-res, and which we could get away with at low-res. After about 10 minutes and no concrete progress, I wanted a list of the cameras we weren’t satisfied with and what we wanted to replace each one with, including cameras that were high-res when they didn’t need to be (meaning we could repurpose a high-res elsewhere).

Suddenly it dawned on me that this a) was not going to be our final purchase, and b) would still be largely guesswork until the cameras were reinstalled, new ones added, and we’d lived with them for a while. So I asked: why not just get 12 high-res? If we didn’t like them in the spots we replaced, or were still unsatisfied after repurposing the displaced cameras, we could move them again, even to the remaining exhibit areas we haven’t begun to cover yet. Then we could buy the cheaper low-res cameras later and save that money at the end of the grant, while having plenty of high-res for wherever we need it. We had been sitting around arguing over a couple thousand dollars that we would probably end up spending on high-res cameras later anyway, so we didn’t have to worry about it right this minute. It ended up being a pretty easy decision.
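For the record, the back-of-the-envelope math behind that “couple thousand dollars” is simple, assuming the roughly $200-per-camera premium holds across the whole order:

```python
# Worst-case extra cost of ordering all 12 cameras as high-res,
# versus a hypothetical all-low-res order. Assumes the ~$200
# per-camera premium quoted above applies to every unit.
cameras = 12
premium_per_camera = 200  # dollars extra for high-res over low-res

worst_case_extra = cameras * premium_per_camera
print(worst_case_extra)  # 2400
```

And that $2,400 ceiling only applies if every single spot would have been fine at low-res, which we already knew wasn’t true.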

If you’ve been following our blog, you know the lab has wondered, worried, and crossed fingers about the ability of facial recognition not only to track faces, but eventually to give us clues to visitors’ emotions and attitudes. Recognition and tracking of individuals looks promising with the new system, reaching about 90% accuracy, with good profiles for race and age (incidentally, the cost, including the time invested in the old system we abandoned, is about the same as with this new system). However, we have no idea whether we’ll get any automated data on emotions, despite how similarly these emotions are expressed on human faces.

But I ran across a very cool technology that may help us in our quest: glasses that sense changes in blood oxygen levels under the skin and can thereby sense emotional states. The glasses amplify what primates have been doing all along: sensing embarrassment from flushed, redder skin, or fear from skin tinted greener than normal. They grew out of research by Mark Changizi at my alma mater, Caltech, on how color vision evolved to allow exactly this sort of emotion sensing. Currently the glasses are being tested for medical applications, helping doctors sense anemia, anger, and fear, but if they’re adapted for “real-world” use, such as decrypting a poker player’s blank stare, it seems to me the filters could be added to our camera setups or software systems to help automate this sort of emotion detection.

Really, it would be one more weapon in the arsenal of the data war we’re trying to fight. Just as Earth and ocean scientists have made leaps in understanding by using satellites to sample the whole Earth virtually every day, instead of taking ship- or buoy-based measurements far apart in space and time, we hope to make similar leaps in understanding how visitors learn. If we can get our technology to automate data collection and vastly improve the spatial and temporal resolution of our data, hopefully we’ll move into our own satellite era.

Thanks to GOOD magazine and PSFK for the tips.

Wow, I thought I had seen lousy customer service, but Nuance takes the cake. I have been trying to contact the audio mining software company for the last two weeks, through their Enterprise sales web site (no response to my email) and by phone at four different numbers (no response to my voice messages). Tuesday the 7th I spent about three hours on hold for their Enterprise sales department (including multiple calls where I was hung up on while on hold, and voice messages on that line asking them to contact me), after trying other departments and complaining. Their other departments, however, don’t even know the product I’m interested in exists. They did try to contact the Enterprise sales department themselves, but even customer service couldn’t get through! The hold “music” ironically talks about how customers go online to complain about poor customer service, and how Nuance’s products can help you keyword-search those web sites. And their website tagline: “As a speech company, we put a premium on listening – get in touch with Nuance today!” Uh, no you don’t.

I finally got ahold of someone in Enterprise sales on Thursday the 9th, after a similar experience and at least 30 minutes on hold that time. Claudia told me they needed to know my budget, which I didn’t actually know. She said they wouldn’t call us back if we didn’t have at least $5–10K. I told her sure, we’d have that if that’s what we needed. She took my name, phone number, and email, as well as Mark’s. Now, another week later, still no response.

I’ve been hung up on three times this morning. I tried the customer service department again, and now I’m on hold while they try to reach the Enterprise sales people for me. The hold music has switched back from what customer service plays to the Enterprise line’s, so I suspect customer service is not getting back to me, either. (One hour later: no response.) By the way, Nuance, this does not bode well for our confidence in your technical support or customer service in general.

So, I’m putting it out to the web: if anyone has an audio mining software solution to search several camera audio feeds for keywords (basically something that competes with Nuance’s Dragon Audio Mining SDK), please contact us through this blog. Let’s talk. It’s more than your competition is willing to do.

[Specifically, we need something speaker-independent that processes files automatically.]