Last week, I talked about our eye-tracking work in the science center at the Museums and the Web 2013 conference, as part of a track on Evaluating the Museum. This was the first time I’d attended this conference, and it turned out to be very different from others I’ve been to. That, I think, meant that eye-tracking was a little ahead of where the conference audience was in some ways, and behind in others!

Many of the attendees seemed to be from the art museum world, which has some issues different from and some similar to those of science centers – we even have generally separate professional organizations (the American Association of Museums and the Association of Science and Technology Centers). In fact, the opening plenary speaker, Larry Fitzgerald, made the point that museums should be thinking of ways they can distinguish themselves from formal schools. He suggested that a lot of the ways museums currently try to get visitors to “think” look very much like the ways people think in schools, rather than the ways people think “all the time.” He mentioned “discovery centers” (which I took to mean interactive science centers) as places that are already trying to leverage the ways people naturally think (hmm, free-choice learning much?).

The Twitter reaction and the tone of other presentations made me think this was actually a relatively revolutionary idea for a lot of folks there. My sense is that this probably stems from an institutional culture that prevents much of that kind of thinking, except at places like the Santa Cruz Museum of Art and History, where Nina Simon is revamping the institution around the participation of community members.

So, overall, eye-tracking and studying what our visitors do was also a fairly foreign concept; one tweet wondered whether a museum’s mission needed to be visitor-centric. Maybe museums that don’t have to rely on ticket sales can rest on that, but the conference was pushing the idea that museums are changing: away from places where people come to find the answer, or the truth, and toward places of participation. That means some museums may also lag behind on getting funding to study visitors at all, let alone spending large amounts on “capital” equipment. Since eye-trackers are expensive technologies designed for basically that one purpose, the work seemed just a little ahead of where some of the conference participants were. I’ll have to check back in a few years and see how things are changing. As we talked about in our lab meeting this morning, a lot of diversity work in STEM free-choice learning is happening not in academia but in (science) museums. Maybe that will change in a few years as well, as OSU continues to shape its Science and Mathematics Education faculty and graduate programs.

We’ve recently been prototyping a new exhibit with standard on-the-ground methods, and now we’re going to use the cameras to do a sort of reverse ground-truthing. Over our busy Whale Watch Week between Christmas and New Year’s, Laura set up a camera on the exhibit to collect data on people using it at times when we didn’t have an observer in place. So instead of ground-truthing the cameras, we’re doing the opposite: checking what we found in person against what the camera captured.

However, the camera will be on at the same time the researcher is there, too. It almost sounds like we’ll be spying on our researcher and “checking up,” but it will be an interesting check on our earlier observations made without the camera in place, as well as a chance to observe a) people using the new exhibit without a researcher present, b) people using it *with* a researcher observing them (whether they notice the observer or not), c) whether people behave differently in each case, and d) how much we can capture from a camera angle different from the one the on-the-ground observer will have.

Some expectations:

The camera will have the advantage of replay, which the in-person observer won’t, so we can get an idea of how much the observer misses, especially at the level of detail.

The camera’s audio might be better than what a researcher standing some distance away can hear, but as our earlier blog posts have mentioned, our audio testing is very much a work in progress.

The camera angle, especially since it’s a single, fixed camera at this point, will be worse than that of the flexible in-place researcher: it will be higher, and visitors may block what they’re doing a good portion of the time.

As we go forward and check the automated collection of our system against in-place observers, rather than the other way around, these are the sorts of advantages and disadvantages we’ll be checking for; one way we might quantify the comparison is sketched below.
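When the same sessions get coded both ways, we could put a number on how well the camera-based coding agrees with the in-person observer using something like Cohen’s kappa, which corrects raw agreement for chance. Here’s a minimal sketch in Python; the category codes and data below are hypothetical, just to show the shape of the comparison:

```python
from collections import Counter

def cohens_kappa(codes_a, codes_b):
    """Chance-corrected agreement between two coders who assigned one
    categorical code to each of the same observations."""
    assert len(codes_a) == len(codes_b) and codes_a
    n = len(codes_a)
    # Observed agreement: fraction of observations coded identically.
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / n
    # Expected chance agreement, from each coder's marginal frequencies.
    freq_a, freq_b = Counter(codes_a), Counter(codes_b)
    p_exp = sum((freq_a[c] / n) * (freq_b[c] / n)
                for c in set(codes_a) | set(codes_b))
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical codes for ten visitor groups at the exhibit:
# "build" = built/tested a structure, "watch" = watched others, "pass" = walked by.
observer_codes = ["build", "build", "watch", "pass", "build",
                  "watch", "build", "pass", "watch", "build"]
camera_codes   = ["build", "watch", "watch", "pass", "build",
                  "watch", "build", "build", "watch", "build"]

print(f"kappa = {cohens_kappa(observer_codes, camera_codes):.2f}")  # kappa = 0.67
```

A kappa near 1 would suggest the camera coding can stand in for the observer on those measures; a much lower value would tell us the fixed angle or the audio is costing us something.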

What else do you all expect the camera might capture better or worse than an in-person researcher?

Last week, Dr. Rowe and I visited the Portland Art Museum to help with a recruitment push for participants in their Conversations About Art evaluation, and I noticed that the education staff involved all had very different styles of recruiting visitors to participate in the project. Styles ranged from the apologetic (e.g. “do you mind if I interrupt you to help us?”), to the incentive-focused (e.g. “get free tickets!”), to the experiential (e.g. “participating will be fun and informative!”).

This got me thinking a lot this week about the significance of people skills and a researcher’s recruitment style in educational studies. How does the style in which you get participants involved influence a) how many participants you actually recruit, and b) the quality of the participation (i.e. do they just go through the motions to get the freebie incentive)? Thinking back to prior studies by FCL alumni here at OSU, I realized that nearly all the researchers I knew had a different approach to recruitment, be it in person, on the phone, or via email – and that it is, in fact, a learned skill we don’t often talk much about.

I’ve been grateful for my success at recruiting both docents and visitors for my research on docent-visitor interactions, which is mostly the result of taking the “help a graduate student complete their research” approach – one I borrowed from prior Marine Resource Management colleagues of mine, Abby Nickels and Alicia Christensen, during their master’s research on marine education activities. Such an approach won’t be much help once I finally get out of grad school, so the question to consider is: what factors make for successful participant recruitment? The common denominator seems to be people skills, and by people skills I mean the ability to engage potential recruits on a level that removes their skepticism about being commandeered off the street. You have to be not only trustworthy, but also approachable. I’ve definitely noticed in my own work that on off days, when I’m tired and have trouble maintaining a smiley face for long periods at the HMSC entrance, recruitment seems harder. All those younger years spent in customer service jobs, learning how to deal with the public, seem so much more worthwhile now!

So, fellow researchers and evaluators, my question for you is: what are your strategies for recruiting participants? Do you agree that people skills are an important underlying factor? Do you over- or underestimate your own personal influence on participant recruitment?

I want to talk today about something many of us here have alluded to in other posts: the approval (and beyond) process of conducting ethical human research. What grew out of some truly unethical, primarily medical, research on humans many years ago has evolved into something that can take up a great deal of your research time, especially on a large, long-duration grant such as ours. Many people (including me, until recently) thought of this process as primarily something to be done up front: get approval, then more or less forget about it except for actually obtaining consent as you go, unless you significantly change your research questions or process. Wrong! It’s a much more constant, living thing.

We at the Visitor Center have several things that make us a weird case for our Institutional Review Board office at the university. First, even though what we do, as part of the Science and Mathematics Education program, is generally educational research, our research sites (the Visitor Center and other community-based locations) are not typical “approved educational research settings” such as classrooms. Classrooms have been used so frequently over the years that they have a more streamlined approval process, unless you’re introducing a radically different type of experiment. Second, we serve several visitor populations – the general public, OSU student groups, and K-12 school and camp groups – who each have different privacy expectations and different requirements for attending (public: none; OSU student groups: attendance may be part of a grade), and who thus require different levels and forms of consent for research. Plus, we’re trying to video record our entire population, and getting signatures from 150,000+ visitors per year just isn’t feasible. However, some of the research we’re doing will involve video recording that is more in-depth than the anonymized overall timing and tracking and exhibit-to-exhibit visitor recognition.

What this means is a whole stack of IRB protocols that someone has to manage. At current count, I am managing four: one for my thesis, one for eyetracking in the Visitor Center for looking at posters and such, one for a side project involving concept mapping, and one for the general overarching video recording for the VC. The first three have been approved, and the last is in the middle of several rounds of negotiation over signage, etc., as I’ve mentioned before. Next up, we need to write a protocol for the wave tank video reflections, and one for groundtruthing the video-recording-to-automatic-timing-tracking-and-face-recognition data collection. In the meantime, the concept mapping protocol has been open for a year and needs to be closed. My thesis protocol has been approved nearly as long, went through several deviations in which I did things out of order or without getting updated approval from the IRB, and now itself soon needs to be renewed. Plus, we already have revisions to the video recording protocol queued up for once the original approval happens. Thank goodness the eyetracking protocol is already in place and in a sweet spot time-wise (not needing renewal very soon), since we have to collect some eyetracking data around our Magic Planet for an upcoming conference – though I did have to check the protocol thoroughly to make sure what we want to do in this case falls under what’s been approved.

On the positive side, though, we have a fabulous IRB office that is willing to work with us as we break new ground in visitor research. Between them, us, and the OSU legal team, we are crafting a strategy that we hope will be useful to other informal learning institutions as they proceed with their own research. Without the IRB office’s cooperation, very little of our grand plan could be realized. Funders are starting to realize this, too: before they make a final award for a grant, they require proof that you’ve at least discussed the basics of your project with your IRB office and that they’re on board.

As the lab considers how to encourage STEM reflection around the tsunami tank, this recent post from Nina Simon at Museum 2.0 reminds us what a difference the choice of a single word can make in visitor reflection:

“While the lists look the same on the surface (and bear in mind that the one on the left has been on display for 3 weeks longer than the one on the right), the content is subtly different. Both these lists are interesting, but the “we” list invites spectators into the experience a bit more than the “I” list.”

So as we go forward, not only the physical booth setup (i.e. private versus open to spectators), but also the specific wording we choose, can influence whether our visitors focus on the task we’re trying to investigate, and how broad or specific/personal their reflections might be. Hopefully, we’ll be able to do some testing of several supposedly equivalent prompts, as Simon suggests in an earlier post, as well as more “traditional” iterative prototyping.

Do visitors use STEM reasoning when describing their work at a build-and-test exhibit? This is one of the first research questions we’re investigating as part of the Cyberlab grant, besides whether or not we can make this technology integration work at all. As with many other parts of this grant, we’re designing the exhibit around the ability to ask and answer this question, so Laura and I are working on a video reflection booth where visitors can tell us about what happened to the structures they build and knock down in the tsunami tank. Using footage from the overhead camera, visitors will be able to review what happened and, we hope, tell us why they built what they did, whether they expected it to survive or fail, and how the actual result did or didn’t match what they hoped for.

We drew from a couple of existing “review the video and share your thoughts” examples. The Utah Museum of Natural History has an earthquake shake table where you build and test a structure and then can review footage of it going through the simulated quake. The California Science Center’s traveling exhibit Goosebumps: the Science of Fear also lets visitors view video of their own and other visitors’ expressions of fear, filmed while they are “falling.” However, we want to take these a step further, adding the visitor reflection piece and then allowing visitors to choose to share their reflections with other visitors as well.

As often happens, we find ourselves with a lot of creative ways to implement this, and ideas for layer upon layer of interactivity that may ultimately complicate things, so we have to rein our ideas in a bit and start with a (relatively) simple interaction to see if the opportunity to reflect is fundamentally appealing to visitors. Especially when one of our options runs around $12K – no need to go spending money without some basic questions answered. Will visitors be too shy to record anything, too unclear about the instructions to record anything meaningful, or just interested in mooning/flipping off/making silly faces at the camera? Will they be too protective of their thoughts to share them with researchers? Will they stay at the build-and-test part forever, uninterested in even viewing the replay of what happened to their structures? Avoiding getting ahead of ourselves and designing something fancy before we’ve answered these basic questions is what makes prototyping so valuable. So our original design will need some testing, probably with a simple camera setup and some mockups of how the program will work, so visitors can give us feedback before we go any further with the guts of the software design. And then, eventually, we might have an exhibit that allows us to investigate our ultimate research question.
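For that first pass, the “simple camera setup” could be as bare-bones as a short ring buffer of the overhead feed with a replay trigger. Here’s a rough sketch of the idea, assuming any OpenCV-readable camera; the frame rate, buffer length, and key bindings are placeholders rather than design decisions:

```python
import collections

import cv2  # OpenCV: pip install opencv-python

FPS = 15             # assumed capture rate of the overhead camera
REPLAY_SECONDS = 30  # how much recent footage to keep for replay

def run_replay_booth(camera_index=0):
    """Continuously buffer the overhead camera feed; press 'r' to replay
    the buffered footage, 'q' to quit. A prototype-level loop only."""
    cap = cv2.VideoCapture(camera_index)
    buffer = collections.deque(maxlen=FPS * REPLAY_SECONDS)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        buffer.append(frame)
        cv2.imshow("live", frame)
        key = cv2.waitKey(1000 // FPS) & 0xFF
        if key == ord("r"):
            # Replay the last REPLAY_SECONDS of footage for the visitor.
            for old_frame in list(buffer):
                cv2.imshow("replay", old_frame)
                if cv2.waitKey(1000 // FPS) & 0xFF == ord("q"):
                    break
            cv2.destroyWindow("replay")
        elif key == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run_replay_booth()
```

Something at this level would let us watch whether visitors even want the replay, and how they react to seeing their structures again, before we commit to the $12K option or the real reflection-recording software.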