Part of my thesis project involves semi-structured phone interviews with COASST citizen science volunteers. I’m patiently awaiting IRB approval for my project, and in the meantime I’ve completed 4 practice interviews with COASST undergraduate interns. I ended up using the ZOOM H2 recorder, which has a lead with an earpiece microphone. It worked great! If anyone needs to do phone interviews, I recommend this audio recorder. A friend also told me he used the Olympus digital voice recorder (VN-8100PC) for his interviews, sometimes tucked into his shirt pocket around a campfire… and he said he could hear everything perfectly! Just thought I’d share.

Now that I have 4 transcriptions from my practice interviews, I’m getting more familiar with what the heck I’m supposed to do with my interview data once I actually collect it!  I re-read the book Qualitative Data: An Introduction to Coding and Analysis by Auerbach and Silverstein, and organized the practice transcripts into relevant text, repeating ideas, and themes.  I first did this in a Word document, but it seemed a little clunky.  I learned some people use Excel for this too.  Now I’ve downloaded NVivo and am learning my way around that program.  There’s a little bit of a learning curve for me, but I think I’ll really like it once I get the hang of it.  It’s been fun, and admittedly a little intimidating, to work through the mechanics of coding text for the first time.  Luckily for me, I have some great mentors and am getting great advice.  I’m excited to see what I’m able to make of the interview data, and looking forward to using NVivo for other projects I’m working on too!
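For anyone curious what that kind of organization can look like before (or outside of) NVivo, here’s a minimal sketch in Python of the Auerbach and Silverstein hierarchy I described: relevant-text excerpts grouped into repeating ideas, and repeating ideas grouped into themes. The excerpts and labels are invented for illustration only; they aren’t from my practice interviews, and this isn’t how NVivo stores its data.

```python
# Minimal sketch of the Auerbach & Silverstein hierarchy:
# relevant text -> repeating ideas -> themes.
# The excerpts and labels below are invented for illustration only.
from collections import defaultdict

# Each relevant-text excerpt is tagged with a repeating idea.
relevant_text = [
    ("I keep coming back because of the other volunteers", "social connection"),
    ("The monthly surveys feel like catching up with old friends", "social connection"),
    ("I want my beach walks to count for something", "contributing to science"),
    ("Knowing the data actually get used keeps me going", "contributing to science"),
]

# Each repeating idea rolls up into a broader theme.
idea_to_theme = {
    "social connection": "community as motivation",
    "contributing to science": "meaningful participation",
}

# Build the hierarchy: theme -> repeating idea -> list of excerpts.
themes = defaultdict(lambda: defaultdict(list))
for excerpt, idea in relevant_text:
    themes[idea_to_theme[idea]][idea].append(excerpt)

# Print an outline, much like the Word/Excel version.
for theme, ideas in themes.items():
    print(theme.upper())
    for idea, excerpts in ideas.items():
        print(f"  {idea} ({len(excerpts)} excerpts)")
        for excerpt in excerpts:
            print(f"    - {excerpt}")
```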

I have just about nailed down a defense date. That means I have about two months to wrap all this up (or warp it, as I originally typed) into a coherent, cohesive narrative worthy of a doctoral degree. It’s amazing to me to think it might actually be done one of these days.

Of course, in research there’s always more you can analyze about your data, so in reality I have to make some choices about what goes in the dissertation and what has to remain for later analysis. For example, I “threw in” some plain world images as potential controls in the eye-tracking study, just to see how people might look at a world map without any data on it. Not that there really is such a thing; technically, any image has some sort of data on it, as it is always representing something, even this one:

[Image: a plain grey world map]

Here, the continents are a darker grey than the ocean, so the image is still a representation of the Earth’s current distribution of land and ocean.

I also included two “blue marble” images, essentially images of Earth as seen from space, without clouds and all in daylight simultaneously: one with the typical northern-hemisphere “north-up” orientation, the other “south-up,” as the world is often portrayed in Australia, for example. However, I probably don’t have time to analyze all of that right now, at least not if I also want to complete the dissertation on schedule. The best dissertation is a done dissertation, not one that is perfect or answers every single question! If it did, what would the rest of my career be for?

So a big part of the research process is making tradeoffs about how much data to collect: enough to anticipate problems you might run into and to examine what you want to examine about your data, but not so much that you lose sight of your original, specific research questions and get mired in analysis forever. Thinking about what does and doesn’t fit in the particular framework I’ve laid out for analysis is part of this, too. That means making smart choices about how to sufficiently answer your questions and address major potential problems with the data you have, while letting go and allowing some questions to remain unanswered. At least for the moment. That’s a major task in front of me right now, with both my interview data and my eye-tracking data. At least I’ve finished collecting data for the dissertation. I think.

Let the countdown to defense begin …

If you’re a fan of “Project Runway,” you’re no doubt familiar with Tim Gunn’s signature phrase, “Make it work.” He employs it particularly around the point in each week’s process where the designers have chosen their fabrics and made at least their first efforts at turning their designs into reality. It’s at about this time that the designers have to forge ahead or take their last chance to start over and re-conceptualize.

This week, it feels like that’s where we are with the FCL Lab. We’re about one and a half years into our five years of funding, and about a year behind on technology development. That means we’ve got the ideas and the materials, but we haven’t gotten as far as we’d like in actually putting it all together.

For us, it’s a bigger problem, too; the development (in this case, the video booth as well as the exhibit itself) is holding up the research. As Shawn put it to me, we’re spending too much time and effort trying to design the perfect task instead of “making it work” with what we have. That is, we’re going to re-conceptualize and do the research we can do with what we have in place, while still going forward with the technology development, of course.

So, for the video booth, that means we’re not going to wait until we can analyze what people reflect on during the experience; we’re going to take the chance to use what we have, namely a bunch of materials, and analyze the interactions that *are* taking place. We’re not going to wait to make the tsunami task perfect in order to encourage what we want to see in the video booth. Instead, we’re going to invite several folks with different research lenses to take a look at the video we get at the tank itself and let us know what types of learning they’re seeing. From there, we can refine what data we want to collect.

It’s an important lesson in grant proposal writing, too: Once you’ve been approved, you don’t have to stick word-for-word to your plan. It can be modified, in ways big and small. In fact, it’s probably better that way.

A while ago, I promised to share some of my experiences collecting data on visitors’ exhibit use as part of this blog. Now that I’ve been back at it for the past few weeks, I thought it might be time to actually share what I’ve found. As it is winter here in the northern hemisphere, our weekend visitation to the Hatfield Visitor Center is generally pretty low. This means I have to time my data collection carefully if I don’t want to spend an entire day waiting for subjects and maybe only collect data on two people. That’s what happened on a Sunday last month: the weather on the coast was lovely, and visitation was minimal. I have recently been collecting data in our Rhythms of the Coastal Waters exhibit, which poses additional data collection challenges: it is basically the last thing people might see before they leave the center, it’s dim because it houses the projector-based Magic Planet, and there are no animals, unlike just about every other corner of the Visitor Center. So, I knocked off early and went to the beach. Then I rescheduled another day I was going to collect data because it was a sunny weekend day at the coast.

On the other hand, on a recent Saturday we hosted our annual Fossil Fest. While visitation was down from previous years (only about 650 visitors compared to 900), this was plenty for me, and I was able to collect data on 13 people between 11:30 and 3:30, despite an octopus feeding and a lecture by our special guest fossil expert. Considering that data collection, including recruitment, consent, the experiment, and debrief, probably runs 15 minutes per person, I thought this was a big win. In addition, I only got one refusal, from a group that said they were on their way out and didn’t have time. It’s amazing how much better things go if you a) lead with “I’m a student doing research,” b) mention “it will only take about 5-10 minutes,” and c) don’t record any video of them. I suspect it also helps that it’s not summer, as this crowd is more local and thus perhaps more invested in improving the center, whereas summer tourists might be visiting more for the experience, to say they’ve been there, as John Falk’s museum visitor “identity” or motivation research would suggest. That would seem to me like a motivation that would not make you all that eager to participate. Hm, sounds like a good research project to me!

Another reason I suspect things went well is that I am generally approaching only all-adult groups, and I only need one participant from each group, so someone else can watch any kids who get bored. I did have one grandma get interrupted a couple of times by her grandkids, but she was a trooper and shooed them away while she finished. When I was recording video and doing interviews about the Magic Planet, the younger kids in the group often got bored, which made recruiting families and getting good data somewhat difficult, though I didn’t have anyone quit early once they agreed to participate. Also, as opposed to when we were prototyping our salmon forecasting exhibit, I wasn’t asking people to sit down at a computer and take a survey, which seemed to feel more like a test to some people. Or it could have been the exciting new technology I was using, the eye-tracker, that was appealing to some.

Interestingly, I also had a lot of folks observe their partners as the experiment happened rather than wander off and meet up later, which happened more during the salmon exhibit prototyping, perhaps because there was not much to see while one person was using that exhibit. With the eye-tracking and the Magic Planet, it was still possible to view the images on the globe, because it is such a large exhibit. Will we ever solve the mystery of what makes the perfect day for data collection? Probably not, but it does present a good opportunity to reflect on what did and didn’t seem to work to get the best sample of your visitorship. The cameras we’re installing are, of course, intended to shed some light on how representative these samples are.

What other influences have you seen that affect whether you have a successful or slow day collecting exhibit use data?

Every week we have two lab meetings, and during both we need to use online conferencing software. We’ve been at this for over a year, and in all that time we’ve only managed to find one free option that is marginally acceptable (Google Hangouts). I know that part of our problem is the limited bandwidth on the OSU campus, because our problems are fewer when classes are out, but even with adequate bandwidth we still can’t seem to get it to work well. Feedback, frozen video, plugins that stop working. It’s frustrating, and every meeting we lose at least 15 minutes to technical issues.

Someone in the lab commented one day that we always seem to be about a year ahead of software development in our needs: online meetings, exhibit setups, survey software. Every time we need something, we end up cobbling something together. I’ve decided to take these opportunities as character building and a testament to our skills and talent. Still, it’d be nice to spend time on something else once in a while.

My dissertation is slowly rising up from the pile of raw data. After crunching survey data, checking transcriptions, and of course doing some inevitable writing this month, I’m starting the process of coding my video observations of docents interacting with visitors. I’ll be using activity theory and multimodal discourse analysis to unpack those actions and attempt to decipher the interpretive strategies the docents use to communicate science.

This is the really interesting part for me, because I finally get the chance to break down the interpretive practice I’ve been expecting to see. However, what I’m still trying to work out at the moment is how micro-level I should go when it comes to unpacking the discourse and action I’ve observed. For example, in addition to analyzing what is said in each interaction, how much do I need to break down about how it’s said? For potential interpretive activities, where does each activity begin and end? There are a lot of decisions to be made here, and I need to go back to my original research questions to make them. I’m also in the process of recruiting a couple of additional researchers to code a sample of the data for inter-rater reliability of my analysis.
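For the inter-rater reliability piece, here’s a minimal sketch of one common way to quantify agreement between two coders, Cohen’s kappa. The category labels and codes below are invented for illustration and aren’t tied to the actual coding scheme or software I’ll end up using.

```python
# Minimal sketch: Cohen's kappa for two coders labeling the same video segments.
# The categories and codes below are invented for illustration only.
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two equal-length lists of categorical codes."""
    assert len(coder_a) == len(coder_b) and coder_a, "need matched, non-empty code lists"
    n = len(coder_a)
    # Observed agreement: proportion of segments coded identically.
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement by chance, from each coder's marginal proportions.
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    categories = set(coder_a) | set(coder_b)
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned to ten video segments by two coders.
coder_a = ["question", "explain", "explain", "gesture", "question",
           "explain", "gesture", "question", "explain", "explain"]
coder_b = ["question", "explain", "gesture", "gesture", "question",
           "explain", "gesture", "explain", "explain", "explain"]

print(f"kappa = {cohens_kappa(coder_a, coder_b):.2f}")  # ~0.67 for this made-up data
```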

I’ve also started the ball rolling on some potential member-check workshops with similar docent communities. The idea is to gather some feedback on my findings from these communities in a couple of months or so. I’ve been looking into docent communities at various aquariums in both Oregon and California.

So far so good!