I have just about nailed down a defense date. That means I have about two months to wrap all this up (or warp it, as I originally typed) into a coherent, cohesive narrative worthy of a doctoral degree. It’s amazing to me to think it might actually be done one of these days.

Of course, in research there’s always more you can analyze about your data, so in reality I have to make some choices about what goes in the dissertation and what has to wait for later analysis. For example, I “threw in” some plain world images as potential controls in the eye-tracking study, just to see how people might look at a world map without any data on it. Not that there really is such a thing; technically, any image has some sort of data on it, as it is always representing something, even this one:

[Image: a plain world map with grey continents on a lighter grey ocean]

Here, the continents are darker grey than the ocean, so it’s a representation of the Earth’s current land and ocean distinctions.

I also included two “blue marble” images, essentially images of Earth as if seen from space, without clouds and all in daylight simultaneously: one with the typical “north-up” orientation, the other “south-up,” as the world is sometimes portrayed in Australia, for example. However, I probably don’t have time to analyze all of that right now, at least not if I want to complete the dissertation on schedule. The best dissertation is a done dissertation, not one that is perfect or answers every single question! If it did, what would the rest of my career be for?

So a big part of the research process is making tradeoffs about how much data to collect: enough that you can anticipate problems and examine the things you want to about your data, but not so much that you lose sight of your original, specific research questions and get mired in analysis forever. Thinking about what does and doesn’t fit the particular framework I’ve laid out for analysis is part of this, too. It means making smart choices about how to sufficiently answer your questions and address major potential problems with the data you have, while letting go and allowing some questions to remain unanswered. At least for the moment. That’s a major task in front of me right now, with both my interview data and my eye-tracking data. At least I’ve finished collecting data for the dissertation. I think.

Let the countdown to defense begin …

A while ago, I promised to share some of my experiences collecting data on visitors’ exhibit use as part of this blog. Now that I’ve been back at it for the past few weeks, I thought it might be time to actually share what I’ve found. As it is winter here in the northern hemisphere, our weekend visitation at the Hatfield Visitor Center is generally pretty low. This means I have to time my data collection carefully if I don’t want to spend an entire day waiting for subjects and maybe collect data on only two people. That’s what happened one Sunday last month: the weather on the coast was lovely, and visitation was minimal. I have recently been collecting data in our Rhythms of the Coastal Waters exhibit, which has additional data-collection challenges: it is basically the last thing people might see before they leave the center, it’s dim because it houses the projector-based Magic Planet, and it has no animals, unlike just about every other corner of the Visitor Center. So I knocked off early and went to the beach. I then outright rescheduled another planned data-collection day because it was a sunny weekend day at the coast.

On the other hand, on a recent Saturday we hosted our annual Fossil Fest. While visitation was down from previous years, only about 650 visitors compared to 900, this was plenty for me, and I was able to collect data on 13 people between 11:30 and 3:30, despite an octopus feeding and a lecture by our special guest fossil expert. Considering that a full data-collection cycle, including recruitment, consent, the experiment, and debrief, runs about 15 minutes per person – roughly 195 of the 240 minutes I was there – I thought this was a big win. In addition, I got only one refusal, from a group that said they were on their way out and didn’t have time. It’s amazing how much better things go if you a) lead with “I’m a student doing research,” b) mention “it will only take about 5-10 minutes,” and c) don’t record any video of them. I suspect it also helps that it’s not summer, as this crowd is more local and thus perhaps more invested in improving the center, whereas summer tourists might be visiting more for the experience, to say they’ve been there, as John Falk’s museum visitor “identity” or motivation research would suggest. That seems to me like a motivation that would not make you all that eager to participate. Hm, sounds like a good research project to me!

Another reason I suspect things went well is that I am generally approaching only all-adult groups, and I need just one participant from each group, so someone can watch the kids if they get bored. I did have one grandma get interrupted a couple of times by her grandkids, but she was a trooper and shooed them away while she finished. When I was recording video and doing interviews about the Magic Planet, the younger kids in a group often got bored, which made recruiting families and getting good data somewhat difficult, though I never had anyone quit early once they agreed to participate. Also, unlike when we were prototyping our salmon forecasting exhibit, I wasn’t asking people to sit down at a computer and take a survey, which seemed to feel like a test to some people. Or it could have been the exciting new technology I was using, the eye-tracker, that appealed to some.

Interestingly, I also had a lot of folks observe their partners as the experiment happened, rather than wandering off and meeting up later, which happened more during the salmon exhibit prototyping – perhaps because there was not much to see while one person used that exhibit. With the eye-tracking and the Magic Planet, it was still possible to view the images on the globe because it is such a large exhibit. Will we ever solve the mystery of what makes the perfect day for data collection? Probably not, but it does present a good opportunity to reflect on what did and didn’t seem to work for getting the best sample of your visitorship. The cameras we’re installing are, of course, intended to shed some light on how representative these samples are.

What other influences have you seen that affect whether you have a successful or slow day collecting exhibit use data?


With all the new wave-exhibit work, visitor center maintenance, server changes, and audio testing that has been going on over the last few months, Mark, Katie, and I realized that the Milestone system that runs the cameras and stores the video data is in need of a little TLC.

Next week we will be relabeling cameras, tidying up the camera “views” (customized displays of the different camera feeds), and checking the servers. We’ve also been having problems exporting video with a codec that allows the video to play in media players outside the Milestone client, so we’re going to attempt to solve that issue too. Basically, we have a bit of camera housekeeping to attend to – but a good tidy-up and reorganization is always a positive way to start the new year, methinks!
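For the codec issue, one workaround we may try (just a sketch – I’m assuming here that the Milestone client can export to an AVI or similar container that ffmpeg can read, which will depend on our version and export settings) is to batch re-encode the exported clips to H.264/AAC in an MP4 container, which plays in just about any media player:

```python
# Hypothetical sketch: batch re-encode clips exported from the Milestone
# client into H.264/AAC MP4s that standard media players can handle.
# Assumes ffmpeg is installed and the exports are in a container/codec
# ffmpeg can read (the "*.avi" pattern is a placeholder).
import subprocess
from pathlib import Path

EXPORT_DIR = Path("milestone_exports")  # hypothetical export folder

for clip in EXPORT_DIR.glob("*.avi"):
    out = clip.with_suffix(".mp4")
    subprocess.run(
        [
            "ffmpeg",
            "-i", str(clip),        # input: exported clip
            "-c:v", "libx264",      # video: re-encode to H.264
            "-pix_fmt", "yuv420p",  # pixel format most players accept
            "-c:a", "aac",          # audio: re-encode to AAC
            str(out),
        ],
        check=True,                 # raise if ffmpeg fails on a clip
    )
```

If the exports turn out to use a proprietary codec ffmpeg can’t decode, this won’t help, and we’ll have to dig into the Milestone export settings themselves.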

Before the holidays, Mark had also asked me to try out the newly released Axis network covert camera – which, although video-only, is much smaller and more discreet than our dome counterparts, and may be more useful for establishment angles, i.e., camera views that establish a wider view of an area (such as a bird’s-eye view) and don’t necessarily require audio. With the updated wave tanks going in, I temporarily installed one on a wave kiosk to test its view and video quality. During the camera housekeeping, I’m going to take a closer look at its performance to determine whether we should obtain and install more. They may end up replacing some of the dome cameras, freeing those up for views that require closer angles and more detailed video/audio.

[Image: the Axis covert network camera. Source: axis.com via Free-Choice on Pinterest]

Happy new year everyone!

After all the fun and frivolity of the holiday season, I am left not only with the feeling that I probably shouldn’t have munched all those cookies and candies, but also with the grave realization that crunch time for my dissertation has commenced. I’d like to have it completed by spring and, just like Katie, I’ve hit the analysis phase of my research and am desperately trying not to fall into the pit of never-ending data. All you current and former graduate students out there can surely relate: all those wonderful hours, weeks, and months I have to look forward to, frantically trying to make sense of the vast pool of data I have spent the last year planning for and collecting.


But fear not! ’Tis qualitative data, sir! And seeing as I have really enjoyed working with my participants and collecting data so far, I am going to attempt to enjoy discovering the outcomes of all my hard work. To me, the beauty of working with qualitative data is developing the pictures of the answers to the questions that initiated the research in the first place. It’s a jigsaw puzzle where you know only roughly what the image might look like at the end – you slowly keep adding pieces until the image becomes clear. I’m looking forward to seeing that image.

So what do I have to analyze? Namely, ~20 interviews with docents, ~75 docent observations, ~100 visitor surveys, and 2 focus groups (which will hopefully take place in the next couple of weeks). I will be using the research analysis tool NVivo, which will aid me in cross-analyzing the different forms of data using a thematic coding approach – analyzing for recurring themes within each data set. What I’m particularly psyched about is getting into the video analysis of the participant observations, where I’m finally going to get the chance to unpack some of that docent practice I’ve been harping on about for the last two years. Here, I’ll be taking a little multimodal discourse analysis and a little activity theory to break down the docent-visitor interactions and interpretive strategies observed.
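For anyone unfamiliar with thematic coding, here’s a toy sketch of the underlying idea (the themes and coded segments below are entirely invented for illustration – the real coding happens inside NVivo, which is a GUI tool): each coded segment can be thought of as a (data set, theme) pair, and themes that recur across data sets become candidates for cross-cutting findings.

```python
# Toy illustration of thematic coding: tally how often each theme
# recurs within and across data sets. All themes/segments are invented.
from collections import Counter, defaultdict

# Invented (data set, theme) pairs, as if each coded segment were tagged
coded_segments = [
    ("docent interviews",   "object-based interpretation"),
    ("docent interviews",   "visitor questions drive talk"),
    ("docent observations", "object-based interpretation"),
    ("docent observations", "visitor questions drive talk"),
    ("visitor surveys",     "visitor questions drive talk"),
]

# Tally how often each theme recurs within each data set
by_dataset = defaultdict(Counter)
for dataset, theme in coded_segments:
    by_dataset[dataset][theme] += 1
for dataset, tally in by_dataset.items():
    print(dataset, dict(tally))

# A theme appearing in more than one data set is a candidate
# cross-cutting finding
datasets_per_theme = Counter(theme for _, theme in set(coded_segments))
for theme, n in datasets_per_theme.items():
    if n > 1:
        print(f"{theme!r} recurs in {n} data sets")
```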

Right now, the enthusiasm is high! Let’s see how long I can keep it up 🙂 It’s Kilimanjaro, but there’s no turning back now.


We’ve recently been prototyping a new exhibit with standard on-the-ground methods, and now we’re going to use the cameras to do a sort of reverse ground-truthing. Over our busy Whale Watch Week between Christmas and New Year’s, Laura set up a camera on the exhibit to collect data on people using it at times when we didn’t have an observer in place. So in this case, instead of ground-truthing the cameras, we’re doing the opposite: checking what the cameras catch against what the in-person observer found.

However, the camera will also be on while the researcher is there. It almost sounds like we’ll be spying on our researcher and “checking up,” but it will be an interesting check on our earlier camera-free observations, as well as a chance to observe a) people using the new exhibit without a researcher in place, b) people using it *with* a researcher observing them (whether or not they notice the observer), c) whether people behave differently between the two, and d) how much we can capture from a camera angle different from the one the on-the-ground observer has.

Some expectations:

The camera has the advantage of replay, which the in-person observer won’t have, so we can get an idea of how much an observer might miss, especially detail-wise.

The camera audio might be better than what a researcher standing some distance away can hear, but as our earlier blog posts have mentioned, the audio testing is very much a work in progress.

The camera angle, especially since it’s a single, fixed camera at this point, will be worse than that of the flexible in-person researcher: it sits at a higher angle, and visitors may block what they’re doing a good portion of the time.


As we go forward and check our system’s automated collection against in-place observers, rather than the other way around, these are the sorts of advantages and disadvantages we’ll be checking for.
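As one concrete example of that checking: once we line up the camera’s coding with the in-person observer’s coding of the same interactions, a simple comparison would be percent agreement plus Cohen’s kappa, which corrects for agreement expected by chance. The sketch below invents the behavior categories and data purely for illustration:

```python
# Sketch: compare camera-based coding with in-person observer coding
# of the same visitor interactions. Categories and data are invented;
# Cohen's kappa corrects raw agreement for chance agreement.
from collections import Counter

camera   = ["touch", "watch", "touch", "read", "watch", "touch"]
observer = ["touch", "watch", "read",  "read", "watch", "touch"]

assert len(camera) == len(observer)
n = len(camera)

# Observed agreement: fraction of intervals coded identically
p_o = sum(c == o for c, o in zip(camera, observer)) / n

# Expected chance agreement from each coder's marginal frequencies
cam_freq = Counter(camera)
obs_freq = Counter(observer)
p_e = sum(cam_freq[k] * obs_freq[k] for k in cam_freq) / n**2

kappa = (p_o - p_e) / (1 - p_e)
print(f"percent agreement = {p_o:.2f}, Cohen's kappa = {kappa:.2f}")
```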

What else do you all expect the camera might capture better or worse than an in-person researcher?

 

Last week, Dr. Rowe and I visited the Portland Art Museum to assist with a recruitment push for participants in their Conversations About Art evaluation, and I noticed that the education staff involved all had very different styles of recruiting visitors to participate in the project. Styles ranged from the apologetic (e.g., “do you mind if I interrupt you to help us?”), to the incentive-focused (e.g., “get free tickets!”), to the experiential (e.g., “participating will be fun and informative!”).

This got me thinking a lot about the significance of people skills and a researcher’s recruitment style in educational studies. How does the style in which you get participants involved influence a) how many participants you actually recruit, and b) the quality of the participation (i.e., do they just go through the motions to get the freebie incentive)? Thinking back to prior studies by FCL alumni here at OSU, I realized that nearly all the researchers I knew had a different approach to recruitment, be it in person, on the phone, or via email, and that it is in fact a learned skill we don’t often talk much about.

I’ve been grateful for my success at recruiting both docents and visitors for my research on docent-visitor interactions, which is mostly the result of taking the “help a graduate student complete their research” approach – one I borrowed from prior Marine Resource Management colleagues of mine, Abby Nickels and Alicia Christensen, during their master’s research on marine education activities. Such an approach won’t be much help once I finally get out of grad school, so the question to consider is: what factors make for successful participant recruitment? The common denominator seems to be people skills, and by people skills I mean the ability to engage a potential recruit on a level that removes the skepticism around being commandeered off the street. You have to be not only trustworthy, but also approachable. I’ve definitely noticed in my own work that on off days, when I’m tired and have trouble maintaining a smiley face for long periods at the HMSC entrance, recruitment seems harder. All those younger years spent in customer service jobs, learning how to deal with the public, seem so much more worthwhile now!

So, fellow researchers and evaluators, my question for you is: what are your strategies for recruiting participants? Do you agree that people skills are an important underlying factor? Do you over- or underestimate your own personal influence on participant recruitment?