A new partnership with the Philomath (pronounced fill-OH-muth, for you out-of-town readers) High Robotics Engineering Division (PHRED) is helping the HMSC Free-Choice Learning Lab overcome a major information design hurdle. An ongoing challenge for our observation system is recording usage of small, non-electronic movable exhibit components – think bones, shells, levers, and spinning wheels.

PHRED mentors Tom Health and Tom Thompson will work with students to develop tiny wireless microprocessor sensors that can be attached to any physical moving exhibit component and report its use to our database. The team will be using the popular Arduino development platform that has been the technological heart of the Maker movement.
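The firmware and database schema are still the students' to invent, but to make the idea concrete, here is one way the receiving end could look: each sensor wakes when its component moves and fires a tiny "use" event over WiFi to a logging endpoint that writes it to the database. This is only a sketch; the port, table, and JSON fields are hypothetical placeholders, not the PHRED team's actual design.

```python
# Minimal sketch of a logging endpoint the wireless sensors could report to.
# Everything here (port, table, JSON fields) is a hypothetical placeholder.
import json
import sqlite3
from http.server import BaseHTTPRequestHandler, HTTPServer

DB = sqlite3.connect("exhibit_use.db", check_same_thread=False)
DB.execute("""
    CREATE TABLE IF NOT EXISTS component_events (
        sensor_id  TEXT,   -- which exhibit component fired
        event_time TEXT,   -- timestamp reported by the sensor
        event_type TEXT    -- e.g. 'spun', 'lifted', 'flipped'
    )
""")

class SensorEventHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Each sensor posts a tiny JSON body, e.g.
        # {"sensor_id": "wave-tank-lever-1", "event_time": "...", "event_type": "flipped"}
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length))
        DB.execute(
            "INSERT INTO component_events VALUES (?, ?, ?)",
            (event["sensor_id"], event["event_time"], event["event_type"]),
        )
        DB.commit()
        self.send_response(204)  # nothing to send back; sensors fire and forget
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), SensorEventHandler).serve_forever()
```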

This is a great partnership – the PHRED team has all the skills, enthusiasm, and creativity to tackle the project and build successful tools, not to mention the recognition that comes from working on an NSF-funded project. Oregon Sea Grant gains more experience integrating after-school science clubs into funded research projects, while meeting the ever-challenging objective of engaging underserved communities.
Thanks to Mark for this update. 

In the last couple of weeks, Katie and I have been testing some options for capturing better-quality visitor conversation for the camera system using external mics.

As Katie mentioned last month, each camera’s built-in microphones are proving rather ineffective at capturing good-quality audio for the eventual voice recognition system in “hot-spot” areas such as the touch tanks and front desk. As a result, we purchased some pre-amplified omnidirectional microphones and set about testing their placement and audio quality in these areas. This has been no easy process: the temporary wiring we put in place to hook the mics to the cameras is not as aesthetically pleasing in a public setting as one might hope, and we discovered that the fake touch-tank rocks are duct tape’s arch-enemy. The mics have also been put through their paces by various visitor kicks, bumps, and water splashes.

Beyond the issue of keeping the mics in place, testing has also meant a steep learning curve in mic level adjustment. When we initially wired them up, I adjusted each mic (via a mixer) one by one to reduce “crackly” noises and distortion during loud conversations. However, I later realized that this adjustment overlooked necessary changes to the cameras’ audio setup and gain, which affect just how close a visitor has to be to one of the mics for it to actually pick them up, particularly over the constant noise of running water around the tanks.

So today I am embarking on a technical adventure. Wearing wireless headphones and brandishing a flathead screwdriver, I am going to reset all the relevant cameras’ audio settings to zero gain, adjust the mic levels for balance (there are multiple mics per camera) rather than for crackle, and then bring the gain up until the sample audio I pull from the camera system comes out cleaner. I’m not expecting to output audio with the clarity of a seastar squeak, but I will attempt to get output that lets us capture clear conversation in our focal areas, even from the quietest of visitors. Avast me hearties, I be a sound buccaneer!
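For the curious, “cleaner” is something I can actually measure. Here is a sketch of the sanity check I have in mind: pull a sample clip from the camera system as a mono 16-bit WAV (the export format and file name are my assumptions) and look at the peak and RMS levels to see whether the mics are still clipping.

```python
# Sanity check on a clip pulled from the camera system: report peak and RMS
# levels and flag clipping. Assumes a mono, 16-bit WAV export; the file name
# is a placeholder.
import array
import math
import wave

with wave.open("touch_tank_sample.wav", "rb") as clip:
    assert clip.getsampwidth() == 2, "expected 16-bit samples"
    samples = array.array("h", clip.readframes(clip.getnframes()))

peak = max(1, max(abs(s) for s in samples))  # floor of 1 avoids log(0) on silence
rms = max(1.0, math.sqrt(sum(s * s for s in samples) / len(samples)))

print(f"peak: {20 * math.log10(peak / 32767):6.1f} dBFS")
print(f"rms:  {20 * math.log10(rms / 32767):6.1f} dBFS")
if peak >= 32000:  # 16-bit full scale is 32767; anything near it is clipping
    print("crackle alert: the signal is hitting full scale, back the gain off")
```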

Well, the data collection for my research has been underway for nearly two months now – how time flies! For those of you new to this project, my research centers on documenting the practice of science center docents as they interact with visitors. Data collection includes video observations of volunteer docents at HMSC using “visitor-mounted” Looxcie cameras, as well as pre- and post-observation interviews with those participating docents.

“Visitor-eye view using the Looxcies”

My current focus is collecting the video observations of each of the 10 participating docents. In order to conduct a post-observation interview (which asks docents to reflect on their practice), I need about 10-15 minutes of video of each docent interacting with the public. This doesn’t sound like much, but when you can’t guarantee that a recruited family will interact with a recruited docent, and an actual interaction will likely only last from 30 seconds to a few minutes, it takes a fair few families wearing cameras to get what you need. However, I’m finding this process really enjoyable, both for getting to know the docents and for meeting visitors.

When I first started this project I was worried that visitors would be a little put off by the idea of having their whole visit recorded. What I’m actually finding is that either a) they want to help the poor grad student complete her thesis, b) they think the cameras are fun and “want a go,” or c) they totally want one of the HMSC tote bags being used as an incentive (what can I say, everyone loves free stuff, right?!). The enthusiasm for the cameras has gone as far as one gentleman running up to a docent, jumping up and down and shouting “I’m wearing a camera, I’m wearing a camera!” Additionally, for the Star Trek fans out there, a number of visitors and colleagues alike have remarked how much wearing a Looxcie makes a person look like a Borg (i.e., a cyborg), particularly with that red light thing…

Now, you may ask, how does that not influence those lovely naturalistic interactions I’m supposed to be observing? Well, as many of us qualitative researchers know, unless you hide the fact that you are observing a person (an element our IRB process is not particularly fond of), you can never truly remove that influence; but you can assume that if particular practices are observed often enough, they are part of the landscape you are observing. The cameras may make an interaction less naturalistic, but that interaction is still a reflection of the social behaviors taking place. People do not completely change their personality and ways of life simply because a camera is around; more likely, any behavior changes are simply exaggerated or muted versions of their normal actions. And I am finding patterns, lots of patterns, in the discourse and action taking place between docents and visitors.

However, I am paying attention to how visitors and docents react to the cameras. When filtering the footage for interactions, I look out for any discourse that indicates camera influence is an issue. For example, the docent in the “jumping man” footage reacts with surprise to the man’s sudden shouting, opens his eyes wide, and laughs nervously – so I noted on the video that the interaction from then on may be irregular. In one clip I have a docent talking non-stop about waves, seemingly without taking a breath, for nearly 8 minutes – which I noted seemed unnatural in comparison to their other, shorter dialogue events. Another clip has a docent bursting out laughing at a visitor wearing one of the Looxcies attached to his baseball cap using a special clip I have (not something I expected!) – which I noted would likely have made it harder for that visitor to forget about the Looxcie.

All in all, however, most visitors remark that they actually forget they are wearing the camera as their visit goes on, simply because they are distracted by the visit itself. This makes me happy, as the purpose of incorporating the Looxcies was to reduce the influence of being videoed as a whole. Visitors forget to the point where, during pilots, one man actually walked into the bathroom wearing his Looxcie and recorded some footage I wasn’t exactly intending to observe… suffice to say, I instantly deleted that video and updated my recruitment spiel to include a reminder not to take the cameras into the bathroom. Social science never ceases to surprise me!

Do visitors use STEM reasoning when describing their work in a build-and-test exhibit? This is one of the first research questions we’re investigating as part of the Cyberlab grant, besides whether or not we can make this technology integration work at all. As with many other parts of this grant, we’re designing the exhibit around the ability to ask and answer this question, so Laura and I are working on a video reflection booth where visitors can tell us about what happened to the structures they build and knock down in the tsunami tank. Using footage from the overhead camera, visitors will be able to review what happened and, we hope, tell us why they built what they did, whether they expected it to survive or fail, and how the actual result matched what they hoped for.

We drew on a couple of existing “review the video and share your thoughts” examples. The Utah Museum of Natural History has an earthquake shake table where you build and test a structure and can then review footage of it going through the simulated quake. The California Science Center’s traveling exhibit Goosebumps: The Science of Fear also lets visitors view video of the fearful expressions of themselves and other visitors, filmed while they are “falling.” However, we want to take these a step further, add the visitor reflection piece, and then let visitors choose to share their reflections with other visitors as well.

As often happens, we find ourselves with a lot of creative ways to implement this, and ideas for layer upon layer of interactivity that may ultimately complicate things, so we have to rein our ideas in a bit and start with a (relatively) simple interaction to see if the opportunity to reflect is fundamentally appealing to visitors. That caution matters all the more given that one of our options costs around $12K; no need to go spending money without some basic questions answered. Will visitors be too shy to record anything, too unclear about the instructions to record anything meaningful, or just interested in mooning/flipping off/making silly faces at the camera? Will they be too protective of their thoughts to share them with researchers? Will they stay at the build-and-test part forever, uninterested in even viewing the replay of what happened to their structures? Avoiding getting ahead of ourselves and designing something fancy before we’ve answered these basic questions is what makes prototyping so valuable. So our original design will need testing, probably with a simple camera setup and some mockups of how the program will work, so visitors can give us feedback before we go any further with the guts of the software design. And then, eventually, we might have an exhibit that allows us to investigate our ultimate research question.
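To give a sense of how simple that starting point can be, here is a rough sketch of the core review-then-reflect loop using OpenCV, assuming the overhead footage has already been saved to a file. The file names, camera device index, and timings are all placeholders rather than the real design, and a real booth would need a separate audio capture path, since OpenCV handles video only.

```python
# Rough sketch of the booth loop: replay the visitor's overhead-camera clip,
# then record their reflection from the booth camera. Names and durations
# are placeholders; audio would need its own capture path.
import cv2

def replay_clip(path):
    """Play back the overhead-camera footage of the visitor's structure."""
    clip = cv2.VideoCapture(path)
    while True:
        ok, frame = clip.read()
        if not ok:
            break
        cv2.imshow("Your wave run", frame)
        if cv2.waitKey(33) == 27:  # ~30 fps playback; Esc skips the replay
            break
    clip.release()

def record_reflection(path, seconds=30):
    """Record the visitor telling us what happened and why."""
    cam = cv2.VideoCapture(0)  # placeholder index for the booth camera
    fps, size = 30, (640, 480)
    writer = cv2.VideoWriter(path, cv2.VideoWriter_fourcc(*"mp4v"), fps, size)
    for _ in range(fps * seconds):
        ok, frame = cam.read()
        if not ok:
            break
        frame = cv2.resize(frame, size)
        writer.write(frame)
        cv2.imshow("Tell us what happened", frame)
        if cv2.waitKey(1) == 27:  # Esc ends the reflection early
            break
    cam.release()
    writer.release()

replay_clip("overhead_run_042.mp4")
record_reflection("reflection_042.mp4")
cv2.destroyAllWindows()
```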

Despite our fancy technology, there are some pieces of data we have to gather the old-fashioned way: by asking visitors. One thing we’d like to know is why visitors chose to visit on this particular occasion. We’re building off of John Falk’s museum visitor motivation and identity work, which began with a survey asking visitors to rate a series of statements on Likert (1-5) scales according to how applicable each was for them that day, and which reveals a rather small set of motivations driving the majority of visits. We have also used this framework in a study of three of our local informal science education venues, finding that an abbreviated version works equally well to determine which (if any) of these motivations drives visitors. The latest version, tried at the Indianapolis Museum of Art, uses photos along with the abbreviated set of statements for visitors to identify their visit motivations.

We’re implementing a version on an iPad kiosk in the VC for a couple of reasons: first, we genuinely want to know why folks are visiting, and we want to be able to correlate identity motivations with the automated behavior, timing, and tracking data we collect from the cameras. Second, we hope people will stop long enough for us to get a good reference photo for the facial recognition system. Sneaky, perhaps, but it’s not the only place we’re trying to position cameras for good reference shots. And if all goes well with our signage, visitors will be more aware than ever that we’re doing research, and that it is ultimately aimed at improving their experience. Hopefully that awareness will allay most of the remaining fears about the embedded research tools, fears we hope will be minimal to start with.
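To give a flavor of how lightweight the abbreviated instrument is, here is a sketch of the scoring logic a kiosk like ours might use. Falk’s five identity categories are real (explorer, facilitator, professional/hobbyist, experience seeker, recharger), but the statements below are invented placeholders, not his actual survey items, and this is not our kiosk’s actual code.

```python
# Sketch of the kiosk's scoring: visitors rate a handful of statements on a
# 1-5 Likert scale and the highest-scoring identity "wins". The statements
# below are invented placeholders, not Falk's actual survey items.
STATEMENTS = {
    "I came to feed my curiosity about the ocean": "explorer",
    "I came so the people I brought could learn something": "facilitator",
    "I came because it relates to my work or serious hobby": "professional/hobbyist",
    "I came to see a place everyone says is worth seeing": "experience seeker",
    "I came to unwind in a pleasant setting": "recharger",
}

def dominant_motivation(ratings):
    """Return the identity with the highest summed rating, or None on a tie."""
    scores = {}
    for statement, rating in ratings.items():
        identity = STATEMENTS[statement]
        scores[identity] = scores.get(identity, 0) + rating
    top = max(scores.values(), default=0)
    winners = [name for name, score in scores.items() if score == top]
    return winners[0] if len(winners) == 1 else None  # ambiguous visits stay unlabeled

# Example: a grandparent shepherding the grandkids around the VC
print(dominant_motivation({
    "I came so the people I brought could learn something": 5,
    "I came to feed my curiosity about the ocean": 3,
}))  # -> facilitator
```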

How do we get signs in front of visitors so that they will actually read them? Think about how many signs at the front door of your favorite establishment you walk past without reading. How many street signs, billboards, and on-vehicle ads pass through our vision as barely a blur? While exhibit designers spend many an hour toiling away to create the perfect signs to offer visitors some background and possible ways to interact with objects, many visitors gloss right over them, preferring to start interacting or looking in their own way. That may be fine in most cases, but for our video research and the associated informed consent our subjects need to give, signs at the front door are going to be our best bet to inform visitors without unduly interrupting their experience or making museum entry an additional, unreasonable burden for visitors or staff. Plus, the video recording is not optional at this point: you can visit and be recorded, or not visit at all.

Thankfully, we have the benefit of the Exploratorium and other museums that have done video research in certain exhibits and have tested entrance signs and the percentage of visitors who subsequently know they’re being recorded for research. Two studies by Exploratorium staff showed that signs at the entrances to specifically cordoned-off areas, stating that videotaping for research was in progress, were effective at informing 99% of visitors to those exhibit areas that a) videotaping was happening and b) it was for research. One interesting point: their testing of the signs and the language on them revealed that the camera icon needed to look rather old-school and professional to distinguish official research recording from the average visitor making home movies in a museum.

[Image: a professional-style video camera. Source: store.sony.com via Free-Choice on Pinterest]

Never mind that the cameras we’re actually using look more like surveillance cameras.

So our strategy, crafted with our Institutional Review Board, is several-fold. Signs at the front entrance (and the back entrance, for staff, volunteers, and other HMSC visitors who might be touring the entire research facility for other reasons and popping in to the VC) will feature the large research-camera icon and a few hopefully succinct and clear words about why we’re doing the research and where to get more information. We also have smaller signs on some of the cameras themselves, with a short blurb noting that the camera is there for research purposes. Next, we’re making handouts that explain in more detail what our research is about and how the videos help us with that work. We’ll also put that information on our web site, and add the web address for the video research information to our rack cards and other promotional material we send around town and around Oregon. Of course, our staff and volunteers are also being included in the process so they are well-equipped to answer visitor questions.

Then there’s the thorny issue of students. University students who are over 18 and visiting as part of a required class will have to consent individually, due to federal FERPA regulations; we’re working with the IRB to make this as seamless a process as possible. We’ll also be contacting local school superintendents to let them know about the research so they can inform the parents of any class attending on a field trip. Students on class field trips will be assumed to have parental consent by virtue of having signed school permission slips to attend Hatfield.

Hopefully this will all work. The Exploratorium’s work showed that even most people who didn’t realize they were being recorded were not much bothered by the recording, and even fewer would have avoided the area if they’d actually known beforehand. As always, though, it will be a work in progress as we get visitor and volunteer feedback and move forward with the research.

Gutwill, J. (2002). “Gaining visitor consent for research: Testing the posted-sign method.” Curator 45(3): 232-238.

Gutwill, J. (2003). “Gaining visitor consent for research II: Improving the posted-sign method.” Curator 46(2): 228-235.