Despite our fancy technology, there are some pieces of data we have to gather the old-fashioned way: by asking visitors. One piece we’d like to know is why visitors chose to visit on this particular occasion. We’re building off of John Falk’s museum visitor motivation and identity work, which began with a survey asking visitors to rate a series of statements on Likert (1-5) scales for how applicable each is to them that day, and which reveals a rather small set of motivations driving the majority of visits. We have also used this framework in a study of three of our local informal science education venues, finding that an abbreviated version works equally well for determining which (if any) of these motivations drives visitors. The latest version, tried at the Indianapolis Museum of Art, uses photos along with the abbreviated set of statements for visitors to identify their visit motivations.
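To make the scoring idea concrete, here’s a minimal sketch of how an abbreviated Likert-based instrument could assign a dominant motivation. The statement groupings, ratings, and cutoff below are hypothetical placeholders for illustration, not Falk’s actual survey items or scoring rules:

```python
# Hypothetical sketch of scoring an abbreviated Falk-style motivation survey.
# The category groupings, example ratings, and cutoff are placeholders for
# illustration only, not the actual instrument.

from statistics import mean

# Each motivation category maps to the Likert (1-5) ratings a visitor gave
# the statements in that category.
responses = {
    "Explorer":              [5, 4],
    "Facilitator":           [3, 2],
    "Experience Seeker":     [4, 4],
    "Professional/Hobbyist": [1, 2],
    "Recharger":             [2, 1],
}

def dominant_motivation(ratings, cutoff=3.5):
    """Return the highest-scoring category, or None if nothing clears the cutoff."""
    scores = {category: mean(values) for category, values in ratings.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] >= cutoff else None

print(dominant_motivation(responses))  # -> "Explorer"
```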

We’re implementing a version on an iPad kiosk in the VC for a couple of reasons: first, we genuinely want to know why folks are visiting, and we want to be able to correlate identity motivations with the automated behavior, timing, and tracking data we collect from the cameras. Second, we hope people will stop long enough for us to get a good reference photo for the facial recognition system. Sneaky, perhaps, but it’s not the only place we’re trying to position cameras for good reference shots. And if all goes well with our signage, visitors will be more aware than ever that we’re doing research, and that it is ultimately aimed at improving their experience. Hopefully that awareness will allay whatever fears remain about the embedded research tools, fears we hope will be minimal to start with.

How do we get signs in front of visitors so they will actually read them? Think about how many signs at the front door of your favorite establishment you walk past without reading. How many street signs, billboards, and on-vehicle ads pass through our vision as barely a blur? While exhibit designers spend many an hour toiling away to create the perfect signs to offer visitors some background and possible ways to interact with objects, many visitors gloss right over them, preferring to just start interacting or looking in their own way. That may be fine in most cases, but for our video research and the informed consent our subjects need to give, signs at the front door are going to be our best bet to inform visitors without unduly interrupting their experience or making museum entry an additional, unreasonable burden for visitors or staff. Plus, the video recording is not optional at this point for folks who visit: you can visit and be recorded, or you can’t visit.

Thankfully, we have the benefit of the Exploratorium and other museums that have done video research in certain exhibits and have tested signs at their entrances, measuring the percentage of visitors who subsequently knew they were being recorded for research. Two studies by Exploratorium staff showed that signs at the entrances to specifically cordoned-off areas, stating that videotaping for research was in progress, were effective at informing 99% of visitors to those exhibit areas that a) videotaping was happening and b) it was for research. One interesting point: their testing of the signs and the language on them revealed that the camera icon needed to look rather old-school and highly professional, so that it was clearly associated with official research purposes rather than with the average visitor making home movies while visiting a museum.


[Image: an old-school professional video camera. Source: store.sony.com via Free-Choice on Pinterest]

Never mind that the cameras we’re actually using look more like surveillance cameras.


So our strategy, crafted with our Institutional Review Board, is several-fold. Signs at the front entrance (and the back entrance, for staff, volunteers, and other HMSC visitors who might be touring the entire research facility for other reasons and popping in to the VC) will feature the large, old-school research camera icon and a few hopefully succinct and clear words about why we’re doing the research and where to get more information. We also have smaller signs on some of the cameras themselves with a short blurb noting that they’re there for research purposes. Next, we’re making handouts that explain in more detail what our research is about and how the videos help us with that work. We’ll also put that information on our web site, and add the web address for the video research information to our rack cards and other promotional material we send around town and across Oregon. Of course, our staff and volunteers are also being included in the process so they are well-equipped to answer visitor questions.

Then there’s the thorny issue of students. University students over 18 who are visiting as part of a required class will have to consent individually due to federal FERPA regulations; we’re working with the IRB to make this as seamless a process as possible. We’ll also be contacting local school superintendents to let them know about the research so they can inform parents of any class that will be attending on a field trip. Students on class field trips will be assumed to have parental consent by virtue of having signed school permission slips to attend Hatfield.

Hopefully this will all work. The Exploratorium’s work showed that even most people who didn’t realize they were being recorded were not much bothered by the recording, and even fewer would have avoided the area if they’d actually known beforehand. As always, though, it will be a work in progress as we get visitor and volunteer feedback and move forward with the research.

Gutwill, J. (2003). “Gaining visitor consent for research II: Improving the posted-sign method.” Curator 46(2): 228-235.

Gutwill, J. (2002). “Gaining visitor consent for research: Testing the posted-sign method.” Curator 45(3): 232-238.

That’s the question we’re facing next: what kind of audio system we need to collect visitor conversations. The mics built in to the AXIS cameras we’re using just aren’t sensitive enough. Not entirely surprising, given that the cameras are normally used for video surveillance only (recording audio in security situations is generally illegal), but it does leave us to our own devices to figure something else out. Again.

Of course, we have the same issues as before: limited external power, and location (each mic has to be near enough to plug in to a camera to be incorporated into the system). Plus, now we need at least some of them to be waterproof, which isn’t a common feature of microphones (the cameras are protected by their domes and general housing). We also have to think about directionality: if we come up with something that’s too sensitive, we may have bleed-over across several mics, which our software won’t be able to separate. If they’re not sensitive enough in enough directions, though, we’ll either need a ton of mics (I mean, like 3-4 per camera) or we’ll have a very limited conversation-capture area at each exhibit. And any good museum folk know that people generally don’t stand in one spot and talk!

So we have a couple of options to start with. One is a really messy, cheap mic with a lot of exposed wires, which presents an aesthetic issue at the very least; the others are more expensive models that may or may not be waterproof and more effective. We’re working with collaborators from the Exploratorium on this, but up to now they’ve generally only used audio recording in areas tucked back from the noisiest parts of the exhibit floor and soundproofed quite a bit besides. They’re looking to expand as they move to their new building in the spring, however, so hopefully by putting our heads together and, as always, testing things boots-on-the-ground, we’ll have some better ideas soon. Especially since we’ve stumped all the more traditional audio specialists we’ve put this problem to so far.

The network did turn out to be the cause of most of our camera skipping. We had all 25+ cameras running on MJPEG, which was driving our network usage through the roof at almost 10MB/sec per camera on a 100MB pipe. We did have to have the Convergint tech come out to help figure it out and reconfigure a few things, with some small hints.

First, we switched some of the cameras to H.264 where we were okay with a slightly less crisp picture, like the establishment shots we use to follow how people move from exhibit to exhibit. That drops the network usage to less than 1MB/sec per camera, though it does drive the CPU usage up a bit. That’s a fair tradeoff because our computers are dedicated to this video processing.
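As a rough back-of-the-envelope check on why MJPEG was a problem, here’s the arithmetic using the approximate per-camera figures above (same units for pipe and streams; these are our ballpark numbers, not vendor specs):

```python
# Back-of-the-envelope aggregate bandwidth, using the rough per-camera
# figures mentioned above (approximate values, not vendor specs).

CAMERAS = 25
PIPE_CAPACITY = 100          # capacity of the network pipe
MJPEG_PER_CAMERA = 10        # approx. per-camera load on MJPEG
H264_PER_CAMERA = 1          # approx. per-camera load on H.264 (or less)

def aggregate(per_camera, cameras=CAMERAS):
    """Total load if every camera streams at the given per-camera rate."""
    return per_camera * cameras

print("MJPEG total:", aggregate(MJPEG_PER_CAMERA),
      "over capacity:", aggregate(MJPEG_PER_CAMERA) > PIPE_CAPACITY)
print("H.264 total:", aggregate(H264_PER_CAMERA),
      "over capacity:", aggregate(H264_PER_CAMERA) > PIPE_CAPACITY)
# MJPEG: 250 against a 100-unit pipe, 2.5x over capacity, hence the skipping.
# H.264: 25, comfortably under, with headroom for more cameras.
```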

We also set up user accounts on the slave server as well as the master, which allowed us to spread the cameras across the two to distribute usage. We’re working with our IT folks to bridge the servers and spread the load amongst the four that we have, so that even if we’re driving nearly the full network usage, we’re doing it on four servers instead of one (a sketch of the idea follows below). Finally, we put the live video on a different drive, also freeing up processing power.
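Here’s a minimal sketch of the load-spreading idea. The server names, camera IDs, and round-robin rule are purely illustrative; the real assignment happens in the recording software’s administrative configuration:

```python
# Illustrative only: spreading camera streams across recording servers
# round-robin so no single server (or its network link) carries the full load.
# Server names and camera IDs below are placeholders, not our actual setup.

from itertools import cycle

servers = ["server-1", "server-2", "server-3", "server-4"]
cameras = [f"camera-{n:02d}" for n in range(1, 26)]   # 25 cameras

assignment = {}
for camera, server in zip(cameras, cycle(servers)):
    assignment.setdefault(server, []).append(camera)

for server, cams in assignment.items():
    print(f"{server}: {len(cams)} cameras -> {cams}")
# Each server ends up with 6-7 cameras, so even near-saturation traffic
# is split four ways instead of landing on one machine.
```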

Just a few tips and tweaks seem to have given us much smoother playback. Now to get more IP addresses from campus to spread the network load even further as we think ahead to more cameras.

We’re ready for round 2 of camera placement, having met with Lab advisor Sigrid Norris on Monday. We’ll go back to focusing on the wave- and touch-tank areas and getting full coverage of interactions. Basically, our first test left us spread too thin to really capture what’s going on, and our programmer says face detection and recognition aren’t yet robust enough to track visitors through the whole center anyway. Though now, of course, we’re running out of ethernet ports in the front half of the Visitor Center for those extra cameras.

One thing we had been noticing with the cameras was a lot of footage of “backs and butts” as people walk away from one camera or face a different exhibit. Sigrid’s take is that this is actually valuable data, capturing the multimodal communication modes of posture, foot position, and body orientation. This is especially true for peripheral participants, such as group members who are watching more than driving the activity, or other visitors learning how to use exhibits by watching those who got there first.

We did figure out the network issue that was causing the video stoppage/skipping. The cameras had all been set up on a single server on the assumption that the system would share the load between its two servers, but they needed to be set up on both servers for the load sharing to actually work. This requires some one-time administrative configuration on the back end, but the client (what researchers using the system see) still displays all camera feeds regardless of which server is driving which camera at any given time. So now it’s all hunky-dory.

The wave tanks are also getting some redesigns after all the work and testing over the summer. The shore-tank wave maker (the “actuator”) will no longer be made of aluminum (too soft), and will get hydraulic braking to slow the handle as it reaches the end points. The wave-energy-tank buoys are getting finished; then that tank will be sealed and used to show electricity generation in houses and buildings set on top. We’ll also get new tables for all three tanks, which will lack middle legs and probably give us a bit more space to work with for the final footprint. And we’ll get the flooring replaced with wet-lab flooring to prevent slip hazards and encourage drainage.

Well, not literal ghosts, but blank spots. It seems we may be facing our first serious bandwidth issues with 28 cameras installed and plenty of summer visitors. Whatever the reason, we’re getting hiccups in our getalongs – cameras are randomly freezing for a few seconds to several minutes each, losing connection with the system, and generally not behaving correctly.

Today, for example, we were collecting images of ourselves from both the video cameras and a still digital camera to compare their performance for facial recognition. As Harrison, Mark, and Diana moved from right to left along our touch tanks, only one of the three close-up “interaction” cameras they stopped at actually picked them up. It’s not a case of them actually moving elsewhere, because we can see them on the overhead “establishment” cameras. And it’s not a case of the cameras not recording due to motion-sensing issues (we think), because in one of the two missing shots there was a family interacting with the touch tank for a few minutes before the staff trio came up behind them.

This morning I also discovered a lot of footage missing from today’s feeds, from cameras that I swear I saw on earlier. I’ve been sitting at the monitoring station pulling clips for upcoming presentations and for the facial recognition testing, and I see the latest footage from some of the octopus-tank cameras showing as dimly lit 5 a.m. footage. It’s not a problem with synchronization, either (I think): the corresponding bar image on the viewer, which shows a simple map of recording times across multiple cameras, shows blanks for those times, even though I was watching families on those cameras earlier today. However, when I look at it now, hours later, the gaps don’t seem nearly as big as they did this morning. That suggests this morning’s viewing-while-recording might have just delayed playback of the recent-but-not-most-immediate footage at the time, and the system cached it and caught up later.

Is it because some of the cameras are network-powered and some are plugged in? Is it because the motion sensitivity is light-sensitive, so cameras getting too much light have a harder time sensing motion? Or is the motion sensitivity based on depth of field, with the action we want simply too far afield? Maybe it’s a combination of trying to view footage while it’s being recorded, bandwidth issues, and motion-sensitivity issues, but it ain’t pretty.