I want to talk today about what many of us here have alluded to in other posts: the approval (and beyond) process of conducting ethical human research. What grew out of really, really unethical (primarily medical) research on humans many years ago has now evolved into something that can take up a great deal of your research time, especially on a large, long-duration grant such as ours. Many people (including me, until recently) thought of this process as primarily something to be done up front: get approval, then more or less forget about it except for the actual gaining of consent as you go, unless you significantly change your research questions or process. Wrong! It’s a much more constant, living thing.

We at the Visitor Center are a weird case for our university’s Institutional Review Board office in several ways. First, even though we generally do educational research, as part of the Science and Mathematics Education program, our research sites (the Visitor Center and other community-based locations) are not typical “approved educational research settings” such as classrooms. Classrooms have been used so frequently over the years that they have a more streamlined approval process, unless you’re introducing a radically different type of experiment. Second, we host several types of visitor populations: the general public, OSU student groups, and K-12 school and camp groups. Each has different privacy expectations and different requirements for attending (public: none; OSU student groups: attendance may be part of a grade), and thus requires different levels and forms of consent to participate in research. Plus, we’re trying to video record our entire population, and getting signatures from 150,000+ visitors per year just isn’t feasible. However, some of the research we’re doing will involve more in-depth video work than just the anonymized overall timing, tracking, and visitor recognition from exhibit to exhibit.

What this means is a whole stack of IRB protocols that someone has to manage. At current count, I am managing four: one for my thesis, one for eyetracking in the Visitor Center (looking at posters and such), one for a side project involving concept mapping, and one for the general overarching video recording for the VC. The first three have been approved; the last is in the middle of several rounds of negotiation on signage, etc., as I’ve mentioned before. Next up, we need to write a protocol for the wave tank video reflections, and one for groundtruthing the video-recording-to-automatic-timing-tracking-and-face-recognition data collection. In the meantime, the concept mapping protocol has been open for a year and needs to be closed. My thesis protocol has been approved nearly as long, went through several deviations in which I did things out of order or without getting updated approval from the IRB, and soon needs to be renewed itself. Plus, we already have revisions ready for the video recording protocol once the original approval happens. Thank goodness the eyetracking protocol is already in place and in a sweet spot time-wise (not needing renewal very soon), as we have to collect some data around eyetracking and our Magic Planet for an upcoming conference, though I did have to check it thoroughly to make sure what we want to do in this case falls under what’s been approved.

On the positive side, though, we have a fabulous IRB office that is willing to work with us as we break new ground in visitor research. Together with them and the OSU legal team, we are crafting a strategy that we hope will be useful to other informal learning institutions as they proceed with their own research. Without their cooperation, very little of our grand plan could be realized. Funders are starting to realize this, too: before they make a final award for a grant, they require proof that you’ve at least discussed the basics of your project with your IRB office and that they’re on board.

As the lab considers how to encourage STEM reflection around the tsunami tank, this recent post from Nina Simon at Museum 2.0 reminds us what a difference the choice of a single word can make in visitor reflection:

“While the lists look the same on the surface (and bear in mind that the one on the left has been on display for 3 weeks longer than the one on the right), the content is subtly different. Both these lists are interesting, but the “we” list invites spectators into the experience a bit more than the “I” list.”

So as we go forward, not only the physical booth setup (i.e., allowing privacy or being open to spectators) but also the specific wording can influence how our visitors choose to focus (or not) on the task we’re trying to investigate, and how broad or specific/personal their reflections might be. Hopefully, we’ll be able to do some testing of several supposedly equivalent prompts, as Simon suggests in an earlier post, as well as more “traditional” iterative prototyping.

And I don’t just mean Thanksgiving! Lately, I’ve run across an exhibit, a discussion, and now an article on things wearing down and breaking, so I figured that meant it was time for a blog post.

It started with my visit to the Exploratorium, whose staff find that stuff breaks, sometimes unexpectedly. Master tinkerers and builders that they are, they turned it into an exhibit of worn, bent, or flat-out broken parts of their exhibits. It may take hundreds or even hundreds of thousands of uses, but when your visitorship is near a million per year, it doesn’t take many days for micro-changes to suddenly become visible as macro-changes.

 

Then Laura suggested that we keep track of all the equipment we’ve been buying in case of, you guessed it, breakage (or other loss). So we’ve started an inventory that will not only serve as a nice record for the project of all the bits and bobs we’ve had to buy (so far, over 300 feet of speaker wire for just 10 cameras), but also help us replace items more easily should something go wrong. We know something will, eventually, and frankly, if we keep our records well, we’ll have a sense of how quickly it does. In our water-laden environment of touch pools and wave tanks, that will very likely be sooner than we hope.
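To show the kind of payoff good records give us, here is a minimal sketch of an inventory like the one Laura suggested. The part names, dates, and record layout are invented for illustration; the point is that logging purchase and failure dates lets us estimate how quickly a kind of part wears out.

```python
from datetime import date

# Hypothetical inventory records: what we bought, when, and (if it has
# already failed) when it failed. All entries are made up for illustration.
inventory = [
    {"part": "speaker wire (50 ft)", "bought": date(2011, 9, 1), "failed": None},
    {"part": "camera housing", "bought": date(2011, 9, 1), "failed": date(2011, 11, 15)},
    {"part": "camera housing", "bought": date(2011, 10, 1), "failed": date(2011, 11, 20)},
]

def mean_days_to_failure(records):
    """Average lifetime (in days) of the parts that have already failed."""
    lifetimes = [(r["failed"] - r["bought"]).days for r in records if r["failed"]]
    return sum(lifetimes) / len(lifetimes) if lifetimes else None

print(mean_days_to_failure(inventory))  # 62.5 days for the two failed housings
```

Even a spreadsheet captures the same idea; the useful habit is recording the dates, not the particular tool.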

Finally, John Baek’s Open and Online Lifelong Learning newspaper linked to this story from Wired magazine about the people who are deliberately trying to break things, to make the unexpected expected.

So, have a great Thanksgiving break (in the U.S.), and try not to break anything in the process.

Well, the data collection for my research has been underway for nearly two months now; how time flies! For those of you new to this project, my research centers on documenting the practice of science center docents as they interact with visitors. Data collection includes video observations of volunteer docents at HMSC using “visitor-mounted” Looxcie cameras, as well as pre- and post-observation interviews with those participating docents.

“Visitor-eye view using the Looxcies”

My current focus is collecting the video observations of each of the 10 participating docents. In order to conduct a post-observation interview (which asks docents to reflect on their practice), I need about 10-15 minutes of video of each docent interacting with the public. This doesn’t sound like much, but when you can’t guarantee that a recruited family will interact with a recruited docent, and an actual interaction will likely last only 30 seconds to a few minutes, it takes a fair few families wearing cameras to get what you need. However, I’m finding this process really enjoyable, both for getting to know the docents and for meeting visitors.

When I first started this project, I was worried that visitors would be a little put off by the idea of having their whole visit recorded. What I’m actually finding is that either a) they want to help the poor grad student complete her thesis, b) they think the cameras are fun and “want a go,” or c) they totally want one of the HMSC tote bags being used as an incentive (what can I say, everyone loves free stuff, right?!). The enthusiasm for the cameras has gone as far as one gentleman running up to a docent, jumping up and down, and shouting “I’m wearing a camera, I’m wearing a camera!” Additionally, for the Star Trek fans out there, a number of visitors and colleagues alike have remarked how much wearing a Looxcie makes a person look like a Borg (i.e., a cyborg), particularly with that red light thing…

Now, you may ask, how does that not influence those lovely naturalistic interactions I’m supposed to be observing? Well, as many of us qualitative researchers know, unless you hide the fact that you are observing a person (an element our IRB process is not particularly fond of), you can never truly remove that influence. But you can assume that if particular practices are observed often enough, they are part of the landscape you are observing. The cameras may alter how naturalistic an interaction is, but that interaction is still a reflection of the social behaviors taking place. People do not completely change their personality and ways of life simply because a camera is around; more likely, any behavior changes are simply over- or under-exaggerated versions of their normal actions. And I am finding patterns, lots of patterns, in the discourse and action taking place between docents and visitors.

However, I am paying attention to how visitors and docents react to the cameras. When filtering the footage for interactions, I look out for any discourse indicating that camera influence is an issue. For example, the docent in the “jumping man” footage reacts with surprise to the man’s sudden shouting, opens his eyes wide, and laughs nervously; I noted on the video that the interaction from then on may be irregular. In one clip, a docent talks non-stop about waves, seemingly without taking a breath, for nearly 8 minutes; I noted that this seemed unnatural in comparison to their other, shorter dialogue events. Another clip has a docent bursting out laughing at a visitor wearing one of the Looxcies attached to his baseball cap with a special clip I have (not something I expected!); I noted that this likely made it harder for the visitor to forget about the Looxcie.
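The screening step above amounts to annotating each clip and setting aside the ones where camera influence seems to be an issue. A minimal sketch, with entirely invented clip IDs, notes, and field names:

```python
# Hypothetical clip annotations from the screening pass described above.
# Clips flagged for camera influence are kept (the notes matter for the
# analysis) but excluded from the pool treated as naturalistic interactions.
clips = [
    {"id": "docent_A_waves", "note": "8 min non-stop talk, unusual", "camera_influence": True},
    {"id": "docent_B_touchpool", "note": "typical short exchange", "camera_influence": False},
    {"id": "docent_C_jumping_man", "note": "docent startled by shouting", "camera_influence": True},
]

usable = [c["id"] for c in clips if not c["camera_influence"]]
print(usable)  # ['docent_B_touchpool']
```

In practice this lives in a qualitative coding tool or a spreadsheet rather than code; the sketch just makes the include/exclude decision explicit.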

All in all, however, most visitors remark that they actually forget they are wearing the camera as their visit goes on, simply because they are distracted by the visit itself. This makes me happy, as the purpose of incorporating the Looxcies was to reduce the influence of being videoed in the first place. Visitors forget to the point where, during pilots, one man actually walked into the bathroom wearing his Looxcie and recorded some footage I wasn’t exactly intending to observe… suffice to say, I instantly deleted that video and updated my recruitment spiel to include a reminder not to take the cameras into the bathroom. Social science never ceases to surprise me!

Despite our fancy technology, there are some pieces of data we have to gather the old-fashioned way: by asking visitors. One thing we’d like to know is why visitors chose to visit on this particular occasion. We’re building off of John Falk’s museum visitor motivation and identity work, which began with a survey asking visitors to rate a series of statements on Likert (1-5) scales according to how applicable each was for them that day, and which revealed that a rather small set of motives drives the majority of visits. We have also used this framework in a study of three of our local informal science education venues, finding that an abbreviated version works equally well to determine which (if any) of these motivations drives visitors. The latest version, tried at the Indianapolis Museum of Art, pairs photos with the abbreviated set of statements for visitors to identify their visit motivations.
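Scoring such a survey is simple: each statement maps to one of Falk’s identity-motivation categories, and the category with the highest average Likert rating is taken as the visitor’s dominant motivation. A minimal sketch, where the per-category groupings and sample ratings are invented placeholders rather than the actual instrument:

```python
# Hypothetical abbreviated-survey responses: each of Falk's categories maps
# to the 1-5 Likert ratings its statements received from one visitor.
visitor = {
    "explorer": [5, 4],
    "facilitator": [3, 3],
    "experience seeker": [2, 1],
    "professional/hobbyist": [1, 1],
    "recharger": [2, 2],
}

def dominant_motivation(ratings):
    """Return the category with the highest mean rating
    (ties broken by the order categories are listed)."""
    means = {cat: sum(scores) / len(scores) for cat, scores in ratings.items()}
    return max(means, key=means.get)

print(dominant_motivation(visitor))  # explorer
```

The interesting analysis comes later, when these labels are correlated with the timing and tracking data; this step just turns raw ratings into one label per visit.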

We’re implementing a version on an iPad kiosk in the VC for a couple of reasons: first, we genuinely want to know why folks are visiting, and we want to be able to correlate identity motivations with the automated behavior, timing, and tracking data we collect from the cameras. Second, we hope people will stop long enough for us to get a good reference photo for the facial recognition system. Sneaky, perhaps, but it’s not the only place we’re trying to position cameras for good reference shots. And if all goes well with our signage, visitors will be more aware than ever that we’re doing research, and that it is ultimately aimed at improving their experience. Hopefully that awareness will allay any lingering fears about the embedded research tools, fears we hope will be minimal to start with.

How do we get signs in front of visitors so they will actually read them? Think about how many signs at the front door of your favorite establishment you walk past without reading. How many street signs, billboards, and on-vehicle ads pass through our vision as barely a blur? While exhibit designers spend many an hour toiling away to create the perfect signs offering visitors some background and possible ways to interact with objects, many visitors gloss right over them, preferring to start interacting or looking in their own way. That may be fine in most cases, but for our video research and the associated informed consent our subjects need to give, signs at the front door are our best bet to inform visitors without unduly interrupting their experience, or making museum entry an additional, unreasonable burden for visitors or staff. Plus, the video recording is not optional at this point for folks who visit: you can visit and be recorded, or you can’t visit.

Thankfully, we have the benefit of the Exploratorium and other museums that have done video research in certain exhibits and have tested how well signs at the entrances inform visitors that they’re being recorded for research. Two studies by Exploratorium staff showed that signs at the entrances to specially cordoned-off areas, stating that videotaping for research was in progress, were effective at informing 99% of visitors to the exhibit areas that a) videotaping was happening and b) it was for research. One interesting point: their testing of the signs and the language on them revealed that the camera icon needed to look rather old-school and professional in order to be clearly associated with official research purposes, distinguishing it from the average visitor making home movies while visiting a museum.


Source: store.sony.com via Free-Choice on Pinterest

Never mind that the cameras we’re actually using look more like surveillance cameras.

 

So our strategy, crafted with our Institutional Review Board, is several-fold. Signs at the front entrance (and the back entrance, for staff, volunteers, and other HMSC visitors who might be touring the entire research facility for other reasons and popping into the VC) will feature the large research-camera icon and a few hopefully succinct and clear words about why we’re doing research and where to get more information. We also have smaller signs on some of the cameras themselves, with a short blurb about the fact that each is there for research purposes. Next, we’re making handouts that will explain in more detail what our research is about and how the videos help us with that work. We’ll also put that information on our web site, and add the address of the video research information page to our rack cards and other promotional material we send around town and Oregon. Of course, our staff and volunteers are also being included in the process so they are well-equipped to answer visitor questions.

Then there’s the thorny issue of students. University students over 18 who are visiting as part of a required class will have to consent individually due to federal FERPA regulations; we’re working with the IRB to make this as seamless a process as possible. For K-12 groups, we’ll be contacting local school superintendents to let them know about the research and let them inform parents of any class that will be attending on a field trip. Students on class field trips will then be assumed to have parental consent by virtue of having signed school permission slips to attend Hatfield.

Hopefully this will all work. The Exploratorium’s work showed that even most people who didn’t realize they were being recorded were not much bothered by the recording, and even fewer would have avoided the area had they known beforehand. As always, though, it will be a work in progress as we get visitor and volunteer feedback and move forward with the research.

Gutwill, J. (2003). “Gaining visitor consent for research II: Improving the posted-sign method.” Curator 46(2): 228-235.

Gutwill, J. (2002). “Gaining visitor consent for research: Testing the posted-sign method.” Curator 45(3): 232-238.