I have just about nailed down a defense date. That means I have about two months to wrap all this up (or warp it, as I originally typed) into a coherent, cohesive narrative worthy of a doctoral degree. It’s amazing to me to think it might actually be done one of these days.

Of course, in research there’s always more you can analyze about your data, so in reality I have to make some choices about what goes in the dissertation and what has to wait for later analysis. For example, I “threw in” some plain world images as potential controls in the eye-tracking, just to see how people might look at a world map without any data on it. Not that there really is such a thing; technically, any image has some sort of data on it, as it is always representing something, even this one:

[Image: a plain grey world map with no data overlaid]

Here, the continents are a darker grey than the ocean, so the image still represents the Earth’s current distribution of land and ocean.

I also included two “blue marble” images, essentially images of Earth as if seen from space, without clouds and all in daylight simultaneously: one with the typical northern-hemisphere “north-up” orientation, the other “south-up,” as the world is often portrayed in Australia. However, I probably don’t have time to analyze all of that right now, at least not if I want to complete the dissertation on schedule. The best dissertation is a done dissertation, not one that is perfect or answers every single question! If it did, what would the rest of my career be for?

So a big part of the research process is making tradeoffs in how much data to collect: enough to anticipate problems you might run into and to examine what you care about in your data, but not so much that you lose sight of your original, specific research questions and get mired in analysis forever. Part of this, too, is thinking about what does and doesn’t fit in the particular framework I’ve laid out for analysis. That means making smart choices about how to sufficiently answer your questions and address the major potential problems with the data you have, while letting go and leaving some questions unanswered. At least for the moment. That’s the major task in front of me right now, with both my interview data and my eye-tracking data. At least I’ve finished collecting data for the dissertation. I think.

Let the countdown to defense begin …

My dissertation is slowly rising up from the pile of raw data. After crunching survey data, checking transcriptions, and of course doing some inevitable writing this month, I’m starting the process of coding my video observations of docents interacting with visitors. I’ll be using activity theory and multimodal discourse analysis to unpack those actions and attempt to decipher the interpretive strategies the docents use to communicate science.

This is the really interesting part for me, because I finally get the chance to break down the interpretive practice I’ve been expecting to see. What I’m still trying to work out at the moment is how micro-level to go when unpacking the discourse and action I’ve observed. For example, in addition to analyzing what is said in each interaction, how much do I need to break down about how it’s said? For a potential interpretive activity, where does that activity begin and end? There are a lot of decisions to be made here, and I need to go back to my original research questions to make them. I’m also in the process of recruiting a couple of additional researchers to code a sample of the data so I can check the inter-rater reliability of my analysis.
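For the reliability check itself, here’s a minimal sketch of how agreement between two coders might be computed, assuming categorical codes and using scikit-learn; the labels below are hypothetical placeholders, not my actual coding scheme:

```python
# Minimal sketch: inter-rater agreement between two coders who labeled
# the same video segments. The codes here are invented placeholders.
from sklearn.metrics import cohen_kappa_score

# Each list holds one coder's label for the same ten segments.
coder_a = ["explain", "question", "gesture", "explain", "explain",
           "question", "gesture", "explain", "question", "explain"]
coder_b = ["explain", "question", "gesture", "gesture", "explain",
           "question", "gesture", "explain", "explain", "explain"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa: {kappa:.2f}")  # values above ~0.6 are often read as substantial agreement
```

The nice thing about kappa over raw percent agreement is that it corrects for chance, which matters when one code dominates the data.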

I’ve also started the ball rolling on some potential member-check workshops with similar docent communities. The idea is to gather feedback on my findings from these communities in a couple of months or so. I’ve been looking into docent communities at various aquariums in both Oregon and California.

So far so good!

Happy new year everyone!

After all the fun and frivolities of the holiday season, I am left not only with the feeling that I probably shouldn’t have munched all those cookies and candies, but also with the grave realization that crunch time for my dissertation has commenced. I’d like to have it completed by spring and, just like Katie, I’ve hit the analysis phase of my research and am desperately trying not to fall into the pit of never-ending data. All you current and former graduate students out there, I’m sure you can relate to this – all those wonderful hours, weeks, and months I have to look forward to, frantically trying to make sense of the vast pool of data I have spent the last year planning for and collecting.


But fear not! ’Tis qualitative data, sir! And seeing as I have really enjoyed working with my participants and collecting data so far, I am going to attempt to enjoy discovering the outcomes of all my hard work. To me, the beauty of working with qualitative data is developing the pictures of the answers to the questions that initiated the research in the first place. It’s like assembling a jigsaw puzzle knowing only roughly what the image might look like at the end – you slowly keep adding pieces until the image becomes clear. I’m looking forward to seeing that image.

So what do I have to analyze? Namely, ~20 interviews with docents, ~75 docent observations, ~100 visitor surveys, and 2 focus groups (which will hopefully take place in the next couple of weeks). I will be using the research analysis tool NVivo, which will aid me in cross-analyzing the different forms of data using a thematic coding approach – analyzing for recurring themes within each data set. What I’m particularly psyched about is getting into the video analysis of the participant observations, where I’m finally going to get the chance to unpack some of that docent practice I’ve been harping on about for the last two years. Here, I’ll be taking a little multimodal discourse analysis and a little activity theory to break down the docent–visitor interactions and the interpretive strategies I observed.
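NVivo does the tallying for real, but just to illustrate what “analyzing for recurring themes within each data set” boils down to, here’s a toy Python sketch; the themes and codings are made up:

```python
# Toy sketch: cross-tabulate (made-up) theme codings by data source,
# the kind of summary a thematic coding query produces.
import pandas as pd

# (source, theme) pairs as they might come out of coded transcripts.
codings = [
    ("interview", "analogy"), ("interview", "questioning"),
    ("observation", "analogy"), ("observation", "gesture"),
    ("survey", "questioning"), ("observation", "analogy"),
]

sources, themes = zip(*codings)
table = pd.crosstab(pd.Series(themes, name="theme"),
                    pd.Series(sources, name="source"))
print(table)  # rows: themes, columns: data sources, cells: counts
```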

Right now, the enthusiasm is high! Let’s see how long I can keep it up 🙂 It’s Kilimanjaro, but there’s no turning back now.


And now it comes to this: thesis data analysis. For the most part, I am doing qualitative analysis of the interviews and quantitative analysis of the eye-tracking. However, I will also quantify some of the interview coding and “qualify” the eye-tracking data, mainly by analyzing the paths and orders in which people view the images.

So now the questions become: what exactly am I looking for, and how do I find evidence of it? I have some hypotheses, but they are pretty general at this point. I know that I’m looking for differences between the experts and the non-experts, and among the levels of scaffolding for the non-experts in particular. For the interviews, that means I expect experts will 1) have more correct answers than the non-experts, 2) give different answers from the non-experts about how they know the answers they give, 3) be able to answer all my questions about the images, and 4) make basically similar meaning across all levels of scaffolding. This means I have a general idea of where to start coding, but I imagine my codebook will change significantly as I go.

With the eye-tracking data, I’ll also be building the model as I go, especially as this analysis is new to our lab. With the help of a former graduate student in the Statistics department, I’ll start with the most general differences: whether the number of fixations (as defined by a minimum dwell time within a maximum-diameter area) differs significantly 1) between experts and non-experts overall, with all topics and all images included, 2) between the supposedly maximally different unscaffolded vs. fully scaffolded images, with both populations included, and 3) between experts looking at unscaffolded images and non-experts looking at fully scaffolded images. At this point, I think there should be significant differences in cases 1 and 2, but I hope that, even if significant, the difference in case 3 will at least be smaller, indicating that the non-experts are indeed moving closer to the patterns of experts when given scaffolding. However, this may not reveal itself in the eye-tracking: the two populations could make similar meaning, as reflected in the interviews, without having the same patterns of eye movements. That is, it’s possible that the non-experts might be less efficient than the experts but still eventually arrive at a better answer with scaffolding than without.
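To make case 1 concrete, here’s a minimal sketch of that first comparison with fabricated per-image fixation counts; the real analysis will be worked out with our statistics collaborator and may well use a different test:

```python
# Minimal sketch of case 1: do fixation counts differ between experts
# and non-experts? All numbers are fabricated for illustration only.
import numpy as np
from scipy import stats

experts     = np.array([12, 15, 11, 14, 13, 16, 12, 15])  # fixations per image
non_experts = np.array([22, 19, 25, 21, 24, 20, 23, 26])

# Welch's t-test avoids assuming equal variances; a count model such as
# Poisson regression might suit real fixation counts better.
t_stat, p_val = stats.ttest_ind(experts, non_experts, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```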

As for the parameters of the eye-tracking, the standard minimum dwell time for a fixation in our software is 80 ms, and the maximum diameter is 100 pixels, but again, we have no standard for this in the lab, so we’ll play around with these values and see whether results hold up at smaller dwell times or at least smaller diameters, or whether new ones appear. My images are only 800×600 pixels, so a maximum diameter of 1/6th to 1/8th of the image seems rather large. Some of this will be mitigated by the use of areas of interest drawn on the image, where the distance between areas could dictate a smaller maximum diameter, but at this point, all of this remains to be seen, and to some extent the analysis will be very exploratory.
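I don’t know exactly how our software implements its fixation filter, but a common dispersion-threshold approach (I-DT, after Salvucci and Goldberg) looks roughly like the sketch below, with our 80 ms and 100 px defaults as the parameters. Dispersion definitions vary; this one uses the sum of the horizontal and vertical extents of the window:

```python
import numpy as np

def _dispersion(xs, ys):
    # One common dispersion measure: horizontal extent + vertical extent.
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

def detect_fixations(t_ms, x, y, min_dwell_ms=80, max_dispersion_px=100):
    """Rough I-DT-style fixation detection: grow a window of gaze samples
    while its dispersion stays under the threshold, and accept the window
    as a fixation if it lasts at least min_dwell_ms."""
    fixations = []
    i, n = 0, len(t_ms)
    while i < n:
        j = i
        # Extend the window while the next sample keeps dispersion in bounds.
        while j + 1 < n and _dispersion(x[i:j + 2], y[i:j + 2]) <= max_dispersion_px:
            j += 1
        if t_ms[j] - t_ms[i] >= min_dwell_ms:
            # Record start time, end time, and centroid of the fixation.
            fixations.append((t_ms[i], t_ms[j],
                              float(np.mean(x[i:j + 1])), float(np.mean(y[i:j + 1]))))
            i = j + 1  # continue after this fixation
        else:
            i += 1  # no fixation starting here; slide forward one sample
    return fixations
```

Writing it out this way makes clear why the thresholds matter so much: shrink the dispersion threshold and long fixations split into several shorter ones, which changes every downstream count.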

That’s the plan at the moment; what are your thoughts, questions, and/or suggestions?

How much progress have I made on my thesis in the last month? Since I last posted about my thesis, I have completed the majority of my interviews. Out of the 30 I need, I have all but four completed, and three of the four remaining are scheduled. Out of about 20 eye-tracking sessions, I have completed all but about seven, with probably three of the remaining scheduled. I also presented some preliminary findings from the eye-tracking at the Geological Society of America conference in a digital poster session. Whew!

It’s a little strange to have set a desired number of interviews at the beginning and feel like I have to fulfill that number and only that number, rather than soliciting from a wide population and getting as many as I could past a minimum. Now, if I were to get a flood of applicants for the “last” novice interview spot, I might want to risk overscheduling to compensate for no-shows (which, as you know, have plagued me). On the other hand, I’d risk having to cancel if I got an “extra” subject scheduled, which I suppose is not a big deal, but for some reason I would feel weird canceling on a volunteer – would it put them off volunteering for research in the future?

Next up is processing all the recordings, backing them up, and then getting them transcribed. I’ll need to create a rubric to score the informational answers as something along the lines of 100% correct, partially correct, or not at all correct. Then it will be coding: finding patterns in the data, categorizing those patterns, and asking someone to serve as a fellow coder to verify my codebook and coding once I’ve made a pass through all of the interviews. Then I’ll have to decide whether the same coding will apply equally to the questions I asked during the eye-tracking portion, since I didn’t dig as deeply into understanding there as I did in the clinical interviews, though I still asked participants to justify their answers with “how do you know” questions.
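As a placeholder for what that scoring could look like once the rubric exists, here’s a toy sketch tallying rubric scores by group; every answer and score below is invented:

```python
# Toy sketch: tally rubric scores by expertise group.
# All (group, score) pairs are invented placeholders.
from collections import Counter

scored_answers = [
    ("expert", "correct"), ("expert", "correct"), ("expert", "partial"),
    ("novice", "partial"), ("novice", "incorrect"), ("novice", "correct"),
]

tallies = Counter(scored_answers)
for group in ("expert", "novice"):
    counts = {s: tallies[(group, s)] for s in ("correct", "partial", "incorrect")}
    print(group, counts)
```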

We’ll see how far I get this month.

It seems that a convenience sample really is the only way to go for my project at this stage. I have long entertained the notion that some kind of randomization would work to my benefit in some abstract, cosmic way. The problem is, I’m developing a product for an established audience. As much as I’d like to reach out and get new audiences interested, that will have to come later.

That sounds harsh, which is probably why I hadn’t actually considered it until recently. In reality, it could work toward my larger goal of bringing in new audience members by streamlining the development process.

I’ve discovered that non-gamers tend to get hung up on things that aren’t actually unique to Deme, but are rather common game elements with which they’re not familiar. Imagine trying to design a dashboard GPS system, then discovering that a fair number of your testers aren’t familiar with internal combustion engines and doubt they will ever catch on. I need people who can already drive.

Games—electronic, tabletop or otherwise—come with a vast array of cultural norms and assumptions. Remember the first time you played a videogame wherein the “Jump” button—the button that was simply always “Jump” on your console of choice—did something other than jump?* It was like somebody sewed your arms where your legs were supposed to be, wasn’t it? It was somehow offensive, because the game designers had violated a set of cultural norms by mapping the buttons “wrong.” There’s often a subtle ergonomic reason that button is usually the “Jump” button, but it has just as much to do with user expectations.

In non-Deme news, we’re all excited to welcome our new Senior Aquarist, Colleen Newberg. She comes to us from Baltimore, but used to work next door at the Oregon Coast Aquarium. I learned last week that she is a Virginian, leaving Sid as the lone Yankee on our husbandry team. We’ve got some interesting things in the works, and Colleen has been remarkably cool-headed amidst a torrent of exhibit ideas, new and changing protocols, and plumbing eldritch and uncanny.


*I’ve personally observed that button-mapping has become less standardized as controllers have become more complex. I could be wrong, though—my gameplay habits do not constitute a large representative sample. Trigger buttons, of course, would be an exception.