We started the day with a couple of near-disasters but managed to make some good progress despite them. We lost control of a hose while filling the tsunami wave tank and doused one of the controlling computers. Luckily, it was off at the time, but it shouldn't have had its case open, and we should have been more aware of the hose! Ah, live and learn. No visitors were harmed, either.

It did help us identify that our internet is not quite up to snuff for the camera system; we're supposed to have four gigabit Ethernet connections but right now only have one. We went to review the footage to see what happened with the tanks, but the camera that had the right angle blanked out completely right at the time of the accident! Several of the other cameras are losing their connection to the server intermittently as well. We're not at the point of collecting real data, though, so again it's just part of the learning process.

We also got more cameras installed, so we're up to almost 30 in operation now. Not all are in their final places, but we're getting closer and closer as we live with them for a while and see how people interact. We also got the iPad interface set up so we can look at the cameras remotely using the Milestone XProtect app.


This will allow us to access the video footage from almost anywhere. It runs amazingly smoothly even on OSU's finicky wireless network, and it even seems to have slightly better image quality than the monitors (or maybe it's just better than my old laptop).

It's a pretty powerful app, too, allowing us to choose the time we want to jump to, show a picture-in-picture of the live feed, speed up or slow down playback, and capture snapshots we can email or save to the iPad's Photo Library. Laura will install the full remote-viewing software on her laptop, too, to test that part of the operation. That points to the one downside so far: most of our lab runs on Macs, while the Milestone system and the eyetracker are both on PCs, so we'll have to buy a couple more laptops. Where's that credit card?


So last week I posted about the evaluation project underway at Portland Art Museum (PAM), and I wanted to give a few more details about how we are using the Looxcie cameras.


Looxcies are basically Bluetooth headsets, just like the ones regularly seen with cell phones, but with a built-in camera. I am currently using them as part of my research encompassing docent-visitor interactions, and I decided to use them as a data collection tool because of their ability to generate a good-quality "visitor-eye view" of the museum experience. I personally feel their potential as a research/evaluation tool in informal settings is endless, and I had some wonderful conversations with other education professionals at the recent National Marine Educators Association conference in Anchorage, AK, about where some other possibilities could lie, including as part of professional development practice for educators and exhibit development.

At PAM, the Looxcies will be used to capture that view as visitors interact with exhibition pieces, specifically those related to the Museum Stories and Conversations About Art video-based programs. Fitting visitors with Looxcies will enable us to capture the interactions and conversations visitors have about the art on display as they move through the museum. The video data can then be analyzed for recurring themes in what and how visitors talk about art in the museum setting.

During our meeting with Jess Park and Ally Schultz at PAM, we created some test footage to help with training other museum staff in the evaluation procedures. In the clip below, Jess and Ally are looking at and discussing some sculpture pieces, and both were wearing Looxcies to get a sense of how they feel to the user. This particular clip is from Ally's perspective, and you'll notice that even Shawn and I have a go at butting in and talking about art with them!

What's exciting about working with the Looxcies, and with video observations in general, is how much detail you can capture about the visitor experience: what they are specifically looking at, how long they look at it, and even whether they nod their head in agreement with the person they are conversing with. Multimodal discourse analysis, eat your heart out!


It's time to buy more cameras, so Mark and I went to our observation booth and wrestled with what to buy. We had two variables, giving four combinations: dome (zoomable) vs. brick (non-zoomable), and low-res (640×480) vs. high-res (but widescreen). We also had four issues: 1) some places have no power access, so those angles required high-resolution brick cameras (oddly enough, it's the high-res cameras that don't require plug-in power!); 2) some of our "interaction" views (i.e., close-up exhibit observations) looked fine at low-res but others looked bad; 3) lighting varies from area to area and sometimes within a single camera view (this dynamic lighting is handled better by the high-res cameras); and 4) the current position and/or view of the cameras wasn't always as good as we'd first thought. This, we thought, was a pretty sticky and annoying problem that we needed to solve before making our next purchase.

Mark was planning to buy 12 cameras and wanted to know what mix of brick/dome and high/low-res we needed, keeping in mind that the high-res cameras are about $200 more each. We kept looking at many of the 25 current views, and each seemed to have a different issue or, really, a different combination of the four. So we went back and forth on a bunch of the current cameras, trying to decide which ones were fine, which needed high-res, and which we could get away with at low-res. After about 10 minutes and no real concrete progress, I wanted a list of the cameras we weren't satisfied with and what we wanted to replace each one with, including the ones that were high-res when they didn't need to be (meaning we could repurpose a high-res camera elsewhere). Suddenly, it dawned on me that this was a) not going to be our final purchase, and b) still likely just a guess until cameras were re-installed, new ones were installed, and we'd lived with them for a while. So I asked why we didn't just get 12 high-res cameras: if we didn't like them in the spots they replaced, or were still unsatisfied with wherever we repurposed the old ones, we could move them again, even to the remaining exhibit areas we haven't begun to cover yet. Then we can purchase the cheaper low-res cameras later and save the money at the end of the grant, while still having plenty of high-res for where we need it. I realized we were sitting around arguing over a couple thousand dollars that we would probably end up spending anyway on high-res cameras later, so we didn't have to worry about it right at this minute. It ended up being a pretty easy decision.
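
Just to put numbers on it, here's the back-of-the-envelope arithmetic we were effectively arguing about. The ~$200 premium and the 12-camera order are from the discussion above; the base price per camera is a stand-in, not a real quote.

```python
# Rough cost comparison for the 12-camera order.
# BASE_PRICE is a placeholder; only the ~$200 high-res premium
# and the order size of 12 come from our actual discussion.
BASE_PRICE = 500          # hypothetical low-res camera price (not a real quote)
HIGH_RES_PREMIUM = 200    # approximate extra cost per high-res camera
ORDER_SIZE = 12

def order_cost(num_high_res):
    """Total cost if num_high_res of the 12 cameras are high-res."""
    return ORDER_SIZE * BASE_PRICE + num_high_res * HIGH_RES_PREMIUM

print("All high-res:  $", order_cost(12))   # $8,400 with these numbers
print("Half and half: $", order_cost(6))    # $7,200
print("All low-res:   $", order_cost(0))    # $6,000
# Worst case, going all high-res now costs 12 * $200 = $2,400 more than
# all low-res -- the "couple thousand dollars" we were debating.
```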

Yesterday Shawn and I met with Jess Park at Portland Art Museum (PAM) about an exciting new evaluation project utilizing our Looxcie cameras. We had some great conversation about how to capture visitor conversations and interactions in relation to PAM's Museum Stories and Conversations About Art video-based programs. The project will be one of the first official evaluation partnerships we have developed under the flag of the FCL lab!

PAM has developed these video-based experiences for visitors in order to deepen visitors' engagement with objects, with each other, and with the museum. Museum Stories features short video presentations of museum staff talking about specific objects in the collection that have some personal meaning for them. All of the videos are available on touch-screen computers in one gallery of the museum, which also houses the areas where the stories are recorded as well as some of the featured objects from the collection. The videos are also available online. Conversations About Art is a series of short videos featuring conversations among experts focused on particular objects in the museum's collection. These are available on hand-held devices provided by the museum, as downloads to visitors' personal devices, and on the museum website. PAM is now looking to expand the program and wishes to document some of the predicted and unexpected impacts and outcomes of these projects for visitors. The evaluation will recruit visitors to wear the Looxcie cameras during their visit to the pertinent exhibits, including the Museum Stories gallery. We will likely also interview some of the experts/artists involved in creating the videos.

We spent time going over the Looxcie technology and how best to recruit visitors in the Art Museum space. We also created some test clips to help the PAM folks working on the evaluation better understand the potential of the video data collection process. I will post a follow-up next week with some more details about how we're using the Looxcies.

Shawn and I came back from PAM feeling like the A-Team – we love it when an evaluation plan comes together.

Just a few short updates:

  • We now have a full 25 cameras in the front half of the Visitor's Center, which gives us pretty great coverage of those areas. Both wide establishing shots and close-up interaction angles cover the touch tanks, the wave tanks (still not quite fully open to the public), and a few more traditional freshwater tanks where visitors simply observe the animals.
  • Laura got a spiffy new Echo smartpen that syncs audio with written notes taken on special paper (now printable on your own printer). She showed us how it translates words into several languages, lets you play a piano after you've drawn the right pattern on the paper, and displays what you've written on its digital screen, performing some pretty slick handwriting recognition in the process.
  • Katie ran the lab's first two eyetracking subjects yesterday, one expert and one novice, as pilots (not quite from the exact study population, but close). Not only did the system work (whew!), we've even got some interesting qualitative patterns that differ between the two. This is very promising, though of course we'll have to dig into the quantitative statistics and determine what, if any, differences in dwell times are significant; there's a sketch of the kind of comparison we have in mind right after this list.
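
For those curious, here's a minimal sketch of the kind of dwell-time comparison we'll eventually run once the real data come in. The numbers below are made up for illustration; the actual dwell times will come from the eyetracker's fixation reports.

```python
# Minimal sketch of an expert-vs-novice dwell-time comparison.
# All values are hypothetical stand-ins, in milliseconds per area of interest.
from scipy import stats

expert_dwell = [412, 388, 520, 610, 450, 375, 498]   # made-up values
novice_dwell = [250, 610, 190, 720, 300, 150, 880]   # made-up values

# Welch's t-test: are the mean dwell times different?
t_stat, p_value = stats.ttest_ind(expert_dwell, novice_dwell, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Dwell times are rarely normally distributed, so a rank-based test
# (Mann-Whitney U) is a common non-parametric fallback.
u_stat, p_value_u = stats.mannwhitneyu(expert_dwell, novice_dwell,
                                       alternative="two-sided")
print(f"U = {u_stat:.1f}, p = {p_value_u:.3f}")
```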

Sometimes, it’s a lot of small steps, but altogether they make forward progress!


If you've been following our blog, you know the lab has wondered and worried and crossed fingers about the ability of facial recognition not only to track faces, but also eventually to give us clues to visitors' emotions and attitudes. The recognition and tracking of individuals looks promising with the new system, reaching about 90% accuracy, with good demographic profiles for race and age (incidentally, the cost, including the time invested in the old system we abandoned, is about the same with this new system). However, we don't yet have any idea whether we'll get any automated data on emotions, despite the relative similarity of how these emotions are expressed on human faces.

But I ran across this very cool technology that may help us in our quest: glasses that detect changes in the oxygen levels of blood under the skin and can thereby sense emotional states. The glasses amplify what primates have been doing for ages, namely sensing embarrassment from flushed, redder skin, or fear from skin tinted greener than normal. Research by Mark Changizi at my alma mater, Caltech, on how color vision evolved to allow us to do just that sort of emotion sensing has led to the glasses. Currently they're being tested for medical applications, helping doctors sense anemia, anger, and fear, but if the glasses are adapted for "real-world" use, such as deciphering a poker player's blank stare, it seems to me that similar filters could be added to our camera setups or software systems to help automate this sort of emotion detection.
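
To make the idea concrete, here's a very rough sketch of the sort of color filter I imagine bolting onto our camera software. This is purely my speculation inspired by the glasses, not how the glasses themselves work: it uses stock OpenCV face detection and a crude red-to-green ratio, and any interpretation would have to be relative to each visitor's own baseline.

```python
# Speculative sketch: track how flushed (red) vs. pale/green a detected face
# looks, as a stand-in for the hemoglobin-oxygenation cues the glasses use.
# Not the actual glasses technology; just an illustration of the filter idea.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def flush_index(frame_bgr):
    """Mean red-to-green ratio over the first detected face, or None."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    roi = frame_bgr[y:y + h, x:x + w].astype(np.float32)
    mean_b, mean_g, mean_r = roi.reshape(-1, 3).mean(axis=0)  # OpenCV is BGR
    return mean_r / (mean_g + 1e-6)

# A change in this ratio relative to a visitor's own baseline -- not its
# absolute value -- would be the (hypothetical) cue for flushing or pallor.
```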

Really, it would be one more weapon in the arsenal of the data war we’re trying to fight. Just as Earth and ocean scientists have made leaps in understanding from being able to use satellites to sample the whole Earth virtually every day instead of taking ship-based or buoy-based measurements far apart in space and time, so do we hope to make leaps and bounds in understanding how visitors learn. If we can get our technology to automate data collection and vastly improve the spatial and temporal resolution of our data, hopefully we’ll move into our own satellite era.

Thanks to GOOD magazine and PSFK for the tips.