Ladies and gentlemen, I present for your consideration an example of our signature rapid prototyping process. The handyman’s secret weapon gets a lot of use around here, and I even had a roll of Gorilla Tape on my wrist in case of emergencies.  Fortunately, it didn’t come to that.

Good face detection and recognition requires angles of up to about 15 degrees from straight-on, so camera placement takes careful consideration. The process of checking angles and lighting isn't always pretty, but I, for one, find the above image beautiful.
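For the technically inclined, one quick way to sanity-check a candidate placement is to run a frame grab through an off-the-shelf frontal-face detector. Here's a minimal sketch using OpenCV's stock Haar cascade, which only fires on near-frontal faces and so serves as a rough proxy for that ~15-degree limit; the file name is just a placeholder, and this is an illustration rather than our actual setup.

```python
# Placement sanity check: does a frontal-face detector find faces
# in a frame captured from the candidate camera position?
import cv2

# Frame grabbed from the candidate position (placeholder file name).
frame = cv2.imread("test_frame.jpg")
assert frame is not None, "couldn't read the test frame"

gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# The stock frontal cascade (bundled with the opencv-python package) only
# detects near-frontal faces, a rough proxy for the ~15-degree limit.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"{len(faces)} detectable face(s) at this placement")
```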

With Mark’s guidance over the phone, I spent a few hours today testing camera placement with a small Axis camera and its built-in microphone. One of my favorite security features of this camera is its built-in speaker, which can be used to make the camera shout “intruder,” whisper “pssst,” or bark like a dog.  None of these have any conceivable utility whatsoever for what we’re doing, but it’s always nice to know we have options.

So, I put it in the entryway. I put it over and next to the octopus tank. I put it over the front desk. I put it by the touch pool, which triggered a barrage of eyeball-seeking dust particles that had been guarding the overhead Ethernet ports for untold eons.

Each vantage point we tested presented a decent view and adequate lighting. The camera model I used won't be installed in every position, but it provides a great baseline. We also received a new Axis dome camera with a microphone, which we can use up close at individual exhibits.

To record a few audio tests, I directed the system output of one of our MacBooks into Audacity using Soundflower. Having recently spent several late nights playing with open-source audio software, I improvised this solution more easily than I had anticipated. I never expected that my private dubstep habit would prove to be a reservoir of generalizable workplace skills, but it goes to show that free-choice learning happens all the time.
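If you'd rather script the capture than click through Audacity, the same loopback idea works from code. Here's a minimal sketch using the python-sounddevice and soundfile packages; it assumes Soundflower's two-channel virtual device is installed under its default name, and the recording length is just for illustration.

```python
# Record system audio routed through Soundflower's virtual loopback device.
import sounddevice as sd
import soundfile as sf

SAMPLE_RATE = 44100
SECONDS = 10  # illustrative test length

# Capture from the virtual device rather than the built-in microphone.
recording = sd.rec(int(SECONDS * SAMPLE_RATE),
                   samplerate=SAMPLE_RATE,
                   channels=2,
                   device="Soundflower (2ch)")  # Soundflower's default name
sd.wait()  # block until the recording finishes

sf.write("audio_test.wav", recording, SAMPLE_RATE)
```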

I'm back to the sales calls, this time for Video Management Systems (VMS), the back-end software that will coordinate all our cameras. This field seems more competitive than eye tracking, or maybe there is just more demand, since VMS is what runs the basic surveillance systems you find anywhere from the convenience store to the casino. So people are scrambling for our business.

However, whenever we try to describe what we're doing and what our needs are, we run into some problems. You want to record audio? Well, that's illegal in surveillance systems (it's OK for research as long as you get consent), so it's not something we deal with a lot. Don't mount your camera near a heating or cooling vent, or the noise will drown out the audio. The microphones on the cameras are poor, and by the way, the audio doesn't sync correctly with the video – "it's like watching a bad Godzilla movie," said the engineer we spoke with this morning. You want to add criteria to flag video and grab certain pieces? Well, you can't access the video stream, because if you do, it's no longer forensically admissible and can't be used in court (OK, we just need an exported copy; we're not going to prosecute anyone, even if they chew gum in the Visitor Center). You want to record high-resolution images? Well, you can either buy a huge amount of storage or a huge amount of processing capability. Minor obstacles, really, but a lot of decision points, even more than with eye trackers. Again, though, it's a learning experience in itself, so hopefully we're generating some data that will save someone else some time in the future.
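To put some rough numbers on that storage-versus-processing trade-off, here's a back-of-envelope sketch. Every figure in it is an illustrative placeholder rather than our actual spec, but it shows why continuous high-resolution recording gets expensive fast.

```python
# Back-of-envelope storage estimate for continuous camera recording.
# All figures below are illustrative placeholders, not our actual plan.
CAMERAS = 100            # rough count from our draft plan
MBPS_PER_CAMERA = 4.0    # H.264 at roughly 1080p, a common ballpark
RETENTION_DAYS = 30      # how long footage is kept

seconds = RETENTION_DAYS * 24 * 3600
total_bits = CAMERAS * MBPS_PER_CAMERA * 1e6 * seconds
total_tb = total_bits / 8 / 1e12

print(f"~{total_tb:.0f} TB for {RETENTION_DAYS} days of continuous recording")
# ~130 TB with these numbers, which is why motion-triggered or
# flagged recording matters so much.
```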

The pricing and purchasing is a bit strange, too. The companies all seem to have "sales" teams, but many can't actually sell anything more than the software, and some don't even sell their software directly. Instead, we have to deal with retailers, and sometimes "integrators," that can also sell us hardware, or at least specify requirements for us. Then there's the matter of cameras – we haven't decided on those, either, and it's becoming clear that we'll have several different types. Juggling all these decisions at once is quite a trick.

At least it's a moderately amusing process; many of the sales folks are based here in the Northwest or were visiting recently, and we've commiserated over the last week about all the rain/snow/ice that ground the area to a halt from Seattle to Eugene.


Beverly Serrell, a pioneer in tracking museum visitors (or stalking them, as some of us like to say), has just released a nice report on the Center for the Advancement of Informal Science Education (CAISE) web site. In "Paying More Attention to Paying Attention," Serrell describes the growing use of metrics she calls tracking and timing (T&T) in the museum field since the publication of her book on the topic in 1998. As the field has more widely adopted these T&T strategies, Serrell has continued her meta-analyses of these studies and has developed a system to describe some of the main implications of the summed findings for exhibition design.

I'll leave you to read the details, but the report really drove home to me the potential excitement and importance of the cyberlab's tracking setup. Especially for smaller museums with minimal staff, implementing an automatic tracking scheme, even on a temporary basis, could save a lot of person-hours in collecting this simple yet vital data about exhibition and exhibit element use. It could allow more data collection of this type in the prototyping stages in particular, which might yield important data on the optimum density of exhibit pieces before a full exhibition is installed. On the other hand, if we can't get it to work, or our automated design proves ridiculously unwieldy (stay tuned for some upcoming posts on our plans for 100 cameras in our relatively small 15,000-square-foot space), it will only affirm the need for the good literal legwork that Serrell notes is a great introduction to research for aspiring practitioners. In any case, eye tracking, as an additional layer of information we use to help explain engagement and interest in particular exhibit pieces, might eventually lead to a measure that lends more insight into Serrell's Thorough Use.
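To illustrate how little machinery it takes to turn raw camera sightings into T&T-style numbers, here's a sketch that aggregates hypothetical timestamped detections into stops and dwell times per exhibit element. The event format is an assumption for illustration, not our actual pipeline.

```python
# Aggregate hypothetical (visitor, exhibit, timestamp) sightings into
# tracking-and-timing style metrics: stops and total dwell per exhibit.
from collections import defaultdict

# (visitor_id, exhibit, seconds since entry) -- made-up sample data
sightings = [
    ("v1", "octopus tank", 10), ("v1", "octopus tank", 55),
    ("v1", "touch pool", 70),   ("v1", "touch pool", 130),
    ("v2", "octopus tank", 20), ("v2", "octopus tank", 25),
]

# First and last sighting of each visitor at each exhibit bound one "stop".
bounds = {}
for visitor, exhibit, t in sightings:
    first, last = bounds.get((visitor, exhibit), (t, t))
    bounds[(visitor, exhibit)] = (min(first, t), max(last, t))

stops = defaultdict(int)
dwell = defaultdict(int)
for (visitor, exhibit), (first, last) in bounds.items():
    stops[exhibit] += 1
    dwell[exhibit] += last - first

for exhibit in stops:
    print(f"{exhibit}: {stops[exhibit]} stops, {dwell[exhibit]} s total dwell")
```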

(Thanks to the Museum Education Monitor and Jen Wyld for the tip about this report.)


I've been looking into technologies that help observe a free-choice learning experience from the learner's perspective. My research interests center on interactions between learners and informal educators, so I wanted a technology that helped record interactions from the learner's perspective but was as unobtrusive to that interaction as possible.

Originally I was interested in using handheld technologies (such as smartphones) for this task. The idea was to have the learner wear a handheld device on a lanyard that would automatically tag and record their interactions with informal educators via QR codes or augmented reality markers. However, this proved more complicated than originally thought (and produced somewhat dodgy video recordings!), so we looked for a simpler approach.
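For the curious, the tagging half of that idea is cheap to prototype. Here's a minimal sketch of the kind of QR-code tagging we had in mind, using OpenCV's built-in QRCodeDetector on a recorded video; the file name is a placeholder, and this illustrates the approach rather than the code we actually ran.

```python
# Scan a recorded video for QR codes, logging which tag appears when --
# the kind of automatic interaction tagging described above.
import cv2

cap = cv2.VideoCapture("lanyard_video.mp4")  # placeholder file name
detector = cv2.QRCodeDetector()
fps = cap.get(cv2.CAP_PROP_FPS) or 30.0      # fall back if FPS is missing

frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # detectAndDecode returns the decoded string (empty if no code found).
    data, _, _ = detector.detectAndDecode(frame)
    if data:
        print(f"{frame_idx / fps:7.1f}s  tag: {data}")
    frame_idx += 1

cap.release()
```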

I am currently exploring how Bluetooth headsets can help with this process. The "Looxcie" is basically a Bluetooth headset equipped with a camera, which can be paired to a handheld device for recording or can work independently. Harrison is expertly modeling the device in the photos. I am starting to pilot this technology in the Visitor Center, and have spent some time with the volunteer interpreters at HMSC demonstrating how it might be used for my research. Maureen and Becca helped me produce a test video at the octopus tank (link below).