Despite our fancy technology, there are some pieces of data we have to gather the old-fashioned way: by asking visitors. One thing we’d like to know is why visitors chose to visit on this particular occasion. We’re building on John Falk’s museum visitor motivation and identity work, which began with a survey asking visitors to rate a series of statements on Likert (1-5) scales for how well each applied to them that day, and which revealed a rather small set of motives driving the majority of visits. We have also used this framework in a study of three of our local informal science education venues, finding that an abbreviated version works equally well to determine which (if any) of these motivations drives visitors. The latest version, tried at the Indianapolis Museum of Art, uses photos along with the abbreviated set of statements for visitors to identify their visit motivations.
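For the curious, here’s a minimal sketch of how an abbreviated instrument like this might be scored. Falk’s five identity categories are real, but the statement wording, the category mapping, and the cutoff below are illustrative placeholders, not our actual survey or analysis code.

```python
# Hypothetical scoring sketch for an abbreviated Falk-style motivation survey.
# Statement wording and the cutoff are placeholders, not the real instrument.

# Each abbreviated statement maps to one of Falk's identity categories.
STATEMENT_CATEGORY = {
    "I came to satisfy my curiosity": "Explorer",
    "I came so the people with me could learn": "Facilitator",
    "The exhibits relate to my work or hobby": "Professional/Hobbyist",
    "This is a place I felt I should see": "Experience Seeker",
    "I came to relax and recharge": "Recharger",
}

def dominant_motivation(ratings):
    """ratings: dict mapping statement -> Likert score (1-5).
    Returns the category with the highest mean rating, or None
    if nothing clearly stands out (here: best mean below 4)."""
    totals, counts = {}, {}
    for statement, score in ratings.items():
        cat = STATEMENT_CATEGORY[statement]
        totals[cat] = totals.get(cat, 0) + score
        counts[cat] = counts.get(cat, 0) + 1
    means = {cat: totals[cat] / counts[cat] for cat in totals}
    best = max(means, key=means.get)
    return best if means[best] >= 4 else None
```

So a visitor who rates “I came to satisfy my curiosity” a 5 and everything else a 2 would come out an Explorer, while flat ratings across the board would come back as no dominant motivation.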
We’re implementing a version on an iPad kiosk in the Visitor Center (VC) for a couple of reasons: first, we genuinely want to know why folks are visiting, and we want to be able to correlate identity motivations with the automated behavior, timing, and tracking data we collect from the cameras. Second, we hope people will stop long enough for us to get a good reference photo for the facial recognition system. Sneaky, perhaps, but it’s not the only place we’re trying to position cameras for good reference shots. And if all goes well with our signage, visitors will be more aware than ever that we’re doing research, and that it is ultimately aimed at improving their experience. Hopefully that awareness will allay whatever lingering fears remain about the embedded research tools, fears we hope will be minimal to start with.
How do we get signs in front of visitors so they will actually read them? Think about how many signs at the front door of your favorite establishment you walk past without reading. How many street signs, billboards, and on-vehicle ads pass through our vision as barely a blur? Exhibit designers spend many an hour toiling away to create the perfect signs offering visitors some background and possible ways to interact with objects, yet many visitors gloss right over them, preferring to just start interacting or looking in their own way. That may be fine in most cases, but for our video research and the informed consent our subjects need to give, signs at the front door are going to be our best bet to inform visitors without unduly interrupting their experience or making museum entry an additional, unreasonable burden for visitors or staff. Plus, the video recording is not optional at this point for folks who visit: you can visit and be recorded, or you can’t visit.
Thankfully, we have the benefit of the Exploratorium and other museums that have done video research in certain exhibits and tested entrance signs, measuring the percentage of visitors who subsequently knew they were being recorded for research. Two studies by Exploratorium staff showed that signs at the entrances to specially cordoned-off areas, stating that videotaping for research was in progress, were effective at informing 99% of visitors to those exhibit areas that a) videotaping was happening and b) it was for research. One interesting point: their testing of the signs and the language on them revealed that the camera icon needed to look rather old-school and highly professional, to clearly associate it with official research purposes and distinguish it from the average visitor making home movies while visiting a museum.
[Image: professional video camera. Source: store.sony.com via Free-Choice on Pinterest]
Never mind that the cameras we’re actually using look more like surveillance cameras.
So our strategy, crafted with our Institutional Review Board, has several parts. Signs at the front entrance (and at the back entrance, for staff, volunteers, and other HMSC visitors who might be touring the wider research facility and popping into the VC) will feature the large research-camera icon and a few, hopefully succinct and clear, words about why we’re doing the research and where to get more information. We also have smaller signs on some of the cameras themselves, with a short blurb noting that the camera is there for research purposes. Next, we’re making handouts that will explain in more detail what our research is about and how the videos help us with that work. We’ll also put that information on our web site, and add the address of the video research information page to our rack cards and other promotional material we send around town and across Oregon. Of course, our staff and volunteers are also being included in the process so they are well-equipped to answer visitor questions.
Then there’s the thorny issue of students. University students over 18 who are visiting as part of a required class will have to consent individually, due to federal FERPA regulations; we’re working with the IRB to make that process as seamless as possible. For school field trips, we’ll be contacting local school superintendents to let them know about the research so they can inform the parents of any class that will be attending. Students on those class field trips will be assumed to have parental consent by virtue of the signed school permission slips required to attend Hatfield.
Hopefully this will all work. The Exploratorium’s studies showed that even most of the people who didn’t realize they were being recorded were not much bothered by the recording once they learned of it, and fewer still would have avoided the area had they known beforehand. As always, though, it will be a work in progress as we get visitor and volunteer feedback and move forward with the research.
Gutwill, J. (2002). “Gaining visitor consent for research: Testing the posted-sign method.” Curator 45(3): 232-238.
Gutwill, J. (2003). “Gaining visitor consent for research II: Improving the posted-sign method.” Curator.
We’re ready for round 2 of camera placement, having met with Lab advisor Sigrid Norris on Monday. We’ll go back to focusing on the wave- and touch-tank areas and getting full coverage of interactions there. Basically, our first test left us spread too thin to really capture what’s going on, and our programmer says face detection and recognition are not yet robust enough to track visitors through the whole center anyway. Though now, of course, we’re running out of Ethernet ports in the front half of the Visitor Center for those extra cameras.
One thing we had been noticing in the camera footage was a lot of “backs and butts” as people walk away from one camera or face a different exhibit. Sigrid’s take is that this is actually valuable data, capturing the multimodal communication carried by posture and by foot and body position. That’s especially true for peripheral participants, such as group members who are watching more than driving the activity, or other visitors learning how to use an exhibit by watching whoever got there first.
We did figure out the network issue that was causing the video stoppage and skipping. The cameras had all been set up on a single server, on the assumption that the system’s two servers would share the load automatically; in fact, the cameras need to be set up on both servers for load sharing to work. That requires some one-time administrative configuration on the back end, but the client (what researchers using the system see) still displays all camera feeds regardless of which server is driving which camera at any given time. So now it’s all hunky-dory.
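In rough pseudocode terms, the fix looks something like the toy model below. The server names and this whole “API” are made up for illustration; our vendor’s actual configuration interface is different, but the shape of the idea is the same: every camera is known to both servers, while recording duty is split between them.

```python
# Toy model of the load-sharing fix: every camera is registered on both
# recording servers, but each camera's recording duty is assigned to just
# one of them, round-robin. Names and structure are hypothetical.

SERVERS = ["recorder-1", "recorder-2"]

def register_cameras(camera_ids):
    """Register every camera on every server; assign recording duty
    round-robin so the two servers share the load."""
    registry = {s: {"known": set(), "recording": set()} for s in SERVERS}
    for i, cam in enumerate(camera_ids):
        duty_server = SERVERS[i % len(SERVERS)]
        for server in SERVERS:
            registry[server]["known"].add(cam)       # both servers know every camera
        registry[duty_server]["recording"].add(cam)  # but only one records it
    return registry

registry = register_cameras([f"cam-{n:02d}" for n in range(1, 29)])

# The client can show all feeds regardless of which server records what:
all_feeds = set().union(*(r["known"] for r in registry.values()))
```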
The wave tanks are also getting some redesigns after all the work and testing over the summer. The shore-tank wave maker (the “actuator”) will no longer be made of aluminum (too soft), and it will get hydraulic braking to slow the handle as it reaches its end points. The wave-energy-tank buoys are being finished; then that tank will be sealed and used to show electricity generation in houses and buildings set on top. We’ll also get new tables for all three tanks, which will lack middle legs and should give us a bit more space to work with in the final footprint. And we’ll have the flooring replaced with wet-lab flooring to reduce slip hazards and improve drainage.
Well, not literal ghosts, but blank spots. It seems we may be facing our first serious bandwidth issues, with 28 cameras installed and plenty of summer visitors. Whatever the reason, we’re getting hiccups in our get-along: cameras are randomly freezing for anywhere from a few seconds to several minutes, losing their connection with the system, and generally misbehaving.
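A back-of-the-envelope calculation shows why 28 cameras could plausibly saturate a network segment. The per-camera bitrate and link capacity below are assumptions for illustration, not measurements from our installation:

```python
# Back-of-the-envelope bandwidth check. Both figures are assumptions,
# not measured values from our system.

cameras = 28
mbps_per_camera = 4.0       # plausible for a compressed video stream
link_capacity_mbps = 100.0  # a single Fast Ethernet segment

aggregate = cameras * mbps_per_camera
print(f"Aggregate: {aggregate:.0f} Mbps vs. {link_capacity_mbps:.0f} Mbps link")
# Aggregate: 112 Mbps vs. 100 Mbps link -> oversubscribed; frames will drop
```

If the real numbers are anywhere in that neighborhood, freezes and dropped connections are exactly what you’d expect.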
Today, for example, we were collecting images of ourselves from both the video cameras and a still digital camera to compare their performance for facial recognition. As Harrison, Mark, and Diana moved from right to left along our touch tanks, only one of the three close-up “interaction” cameras they stopped at actually picked them up. It’s not a case of them actually moving elsewhere, because we can see them on the overhead “establishment” cameras. And it’s not a case of the cameras failing to record due to motion-sensing issues (we think), because in one of the two missing shots, a family was interacting with the touch tank for a few minutes before the staff trio came up behind them.
This morning I also discovered a lot of footage missing from today’s feeds, from cameras that I swear were on earlier. I’ve been sitting at the monitoring station pulling clips for upcoming presentations and for the facial recognition testing, and the “latest” footage from some of the octopus-tank cameras shows as dimly lit 5 a.m. footage. It’s not a synchronization problem, either (I think): the bar display in the viewer, a simple map of recording times across multiple cameras, shows blanks for those times, even though I was watching families on those cameras earlier today. When I look now, though, hours later, the gaps are nowhere near as big as they seemed this morning. So viewing while recording may simply have delayed playback of the recent-but-not-most-immediate footage, with the system caching it and catching up later.
Is it because some of the cameras are network-powered and some are plugged in? Is the motion sensitivity light-dependent, so that cameras getting too much light have a harder time sensing motion? Or is the motion sensing based on depth of field, with the action we want too far afield? Maybe it’s some combination of viewing footage while it’s being recorded, bandwidth limits, and motion-sensitivity issues, but it ain’t pretty.
If you’ve been following our blog, you know the lab has wondered and worried and crossed fingers about the ability of facial recognition not only to track faces, but also eventually to give us clues to visitors’ emotions and attitudes. Recognition and tracking of individuals look promising with the new system, reaching about 90% accuracy, with good profiling of race and age (incidentally, the total cost, counting the time invested in the old system we abandoned, works out about the same as with this new system). However, we have no idea yet whether we’ll get any automated data on emotions, despite how similarly those emotions are expressed across human faces.
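Our recognition system itself is a commercial black box to us, but for readers curious what the underlying face-detection step looks like in general, here’s a minimal sketch using OpenCV’s stock Haar cascade. This is a generic illustration, not our vendor’s pipeline, and the filenames are placeholders:

```python
# Minimal face-detection sketch using OpenCV's stock Haar cascade.
# A generic illustration of the detection step, not our actual system.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("touch_tank_frame.jpg")  # one grabbed video frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

# Scan the frame at multiple scales; each hit is a (x, y, w, h) box.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("touch_tank_faces.jpg", frame)
```

Detection (finding a face at all) is the easy part; matching that face to a known individual across cameras and days is where the 90% figure, and all the hard work, comes in.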
But I ran across a very cool technology that may help us in our quest: glasses that sense changes in blood oxygen levels under the skin and can thereby read emotional states. The glasses amplify what primates have been doing for ages, namely sensing embarrassment in flushed, redder skin, or fear in skin tinted greener than normal. Research by Mark Changizi, at my alma mater Caltech, on how color vision evolved to allow exactly this sort of emotion sensing led to the glasses. Currently they’re being tested for medical applications, helping doctors sense anemia, anger, and fear, but if the glasses are adapted for “real-world” use, such as deciphering a poker player’s blank stare, it seems to me the same filters could be added to our camera setups or software systems to help automate this sort of emotion detection.
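Purely speculatively, a software version might start by comparing red and green channel averages over a patch of skin against a per-person baseline. The sketch below is a toy with made-up thresholds and dummy data, meant only to show the shape of the idea, not a validated emotion detector:

```python
# Speculative toy: compare red vs. green channel means over a skin patch
# as a crude stand-in for the hemoglobin-tint signal the glasses filter
# for. Thresholds and the premise itself are illustrative only.
import numpy as np

def tint_signal(skin_patch):
    """skin_patch: HxWx3 RGB array cropped to skin.
    Returns red-minus-green difference, normalized by brightness."""
    patch = skin_patch.astype(float)
    r, g = patch[..., 0].mean(), patch[..., 1].mean()
    return (r - g) / max(r + g, 1e-6)

# Dummy patches standing in for cropped video frames (illustration only).
rng = np.random.default_rng(0)
neutral_patch = rng.integers(80, 160, size=(32, 32, 3), dtype=np.uint8)
live_patch = neutral_patch.copy()
live_patch[..., 0] = np.minimum(live_patch[..., 0] + 30, 255)  # simulate a flush

delta = tint_signal(live_patch) - tint_signal(neutral_patch)
if delta > 0.05:
    print("redder than baseline -> possible flush")
elif delta < -0.05:
    print("greener than baseline -> possible pallor")
```

The per-person baseline matters: skin tones vary far more between people than the within-person shifts we’d be hunting for.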
Really, it would be one more weapon in the arsenal of the data war we’re trying to fight. Just as Earth and ocean scientists have made leaps in understanding from being able to use satellites to sample the whole Earth virtually every day instead of taking ship-based or buoy-based measurements far apart in space and time, so do we hope to make leaps and bounds in understanding how visitors learn. If we can get our technology to automate data collection and vastly improve the spatial and temporal resolution of our data, hopefully we’ll move into our own satellite era.
Thanks to GOOD magazine and PSFK for the tips.
We heard recently that our developer contractors have decided to abandon their efforts to make the first facial recognition system they investigated work. It was a tough call; they had put a lot of effort into it, thinking many times that if they could just tweak this and alter that, they would push performance above 60%. Alas, they finally decided it was not going to happen, at least not without a ridiculous amount of further effort for the eventual reward. So they are taking a different tack, starting over, almost, though with lots of lessons learned from the first go-round.
I think this dilemma, deciding when it makes sense to keep patching a leaking ship versus abandoning it for another, is a great parallel to exhibit development. Sometimes you have a great idea that you try with visitors, and it flops. You get some good data, though, and see a way to try it again. You make your changes. It flops again, though maybe not quite as spectacularly; just enough better to give you hope. And so on, until you have to decide to cut bait and either redesign the piece entirely or, if you’re working on a larger exhibition, find another piece to satisfy whatever learning or other goals you had in mind for the failed one.
In either situation, it’s pretty heartbreaking to let go of all that investment. When I first started working in prototyping, this happened to our team designing the Making Models exhibition at the Museum of Science, Boston. As an intern, I hadn’t invested anything in the failed prototype, but I could see the struggle in the rest of the team, and it made such an impression that I recall it all these years later. Ultimately, the final exhibit looks rather different from what I remember, but its success is also a testament to the power of letting go. Hopefully, we’ll eventually experience that success with our facial recognition setups!