I have just about nailed down a defense date. That means I have about two months to wrap all this up (or warp it, as I originally typed) into a coherent, cohesive narrative worthy of a doctoral degree. It’s amazing to me to think it might actually be done one of these days.

Of course, in research, there’s always more you can analyze about your data, so in reality, I have to make some choices about what goes in the dissertation and what has to remain for later analysis. For example, I “threw in” some plain world images as potential controls in the eye-tracking study, just to see how people might look at a world map without any data on it. Not that there really is such a thing; technically any image has some sort of data on it, as it is always representing something, even this one:

[Image: a plain grey world map with no data overlaid]

Here, the continents are a darker grey than the ocean, so the image still represents the Earth’s current distribution of land and ocean.

I also included two “blue marble” images, essentially pictures of Earth as if seen from space, without clouds and all in daylight simultaneously: one with the typical “north-up” orientation, the other “south-up,” as the world is sometimes portrayed in Australia, for example. However, I probably don’t have time to analyze all of that right now, at least not if I want to complete the dissertation on schedule. The best dissertation is a done dissertation, not one that is perfect or answers every single question! If it did, what would the rest of my career be for?

So a big part of the research process is making tradeoffs about how much data to collect: enough to anticipate problems you might run into and examine what you need to about your data, but not so much that you lose sight of your original, specific research questions and get mired in analysis forever. Thinking about what does and doesn’t fit in the particular framework I’ve laid out for analysis is part of this, too. That means making smart choices about how to sufficiently answer your questions and address major potential problems with the data you have, while letting go and allowing some questions to remain unanswered. At least for the moment. That’s a major task in front of me right now, with both my interview data and my eye-tracking data. At least I’ve finished collecting data for the dissertation. I think.

Let the countdown to defense begin …

My dissertation is slowly rising up from the pile of raw data. After crunching survey data, checking transcriptions, and, of course, doing some inevitable writing this month, I’m starting the process of coding my video observations of docents interacting with visitors. I’ll be using activity theory and multimodal discourse analysis to unpack those actions and attempt to decipher the interpretive strategies the docents use to communicate science.

This is the really interesting part for me, because I finally get the chance to break down the interpretive practice I’ve been expecting to see. What I’m still trying to work out at the moment, though, is how micro a level I should go to when unpacking the discourse and action I’ve observed. For example, in addition to analyzing what is said in each interaction, how much do I need to break down about how it’s said? For a potential interpretive activity, where does that activity begin and end? There are a lot of decisions to be made here, and I need to go back to my original research questions to make them. I’m also in the process of recruiting a couple of additional researchers to code a sample of the data so I can check the inter-rater reliability of my analysis.
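On the inter-rater reliability front, the agreement statistic itself is simple to compute once the extra coders have worked through the sample. Here’s a minimal sketch of Cohen’s kappa in Python, assuming two coders’ codes for the same segments sit in parallel lists; the variable names and toy codes are mine for illustration, not from my actual codebook:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two raters' categorical codes on the same items."""
    assert len(coder_a) == len(coder_b)
    n = len(coder_a)
    # Observed agreement: proportion of items both coders labeled the same.
    p_observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement: chance overlap given each coder's marginal code rates.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Toy example: two coders labeling the same ten video segments
# (the code names here are invented, not my real categories).
a = ["explain", "question", "explain", "gesture", "explain",
     "question", "gesture", "explain", "question", "explain"]
b = ["explain", "question", "gesture", "gesture", "explain",
     "question", "gesture", "explain", "explain", "explain"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.68 for this toy data
```

The nice property of kappa over raw percent agreement is that it discounts the agreement two coders would reach by chance given how often each uses each code.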

I’ve also started the ball rolling on some potential member-check workshops with similar docent communities. The idea is to gather feedback on my findings from these communities in a couple of months or so. I’ve been looking into docent communities at various aquariums in both Oregon and California.

So far so good!

I seem to have gone from walking to speed racing when it comes to projects. Not only do I have the Folklife paper I’m co-authoring for ASEE, but now I’m working on three more projects. Just last week I was tasked with doing new analysis on already-collected data for a paper draft that’s due at the end of the month. So I’ve been slogging through file after file, trying to make sense of it all so that I can finish the analysis by the end of the week. This is the first time I’ve been asked to analyze data that I wasn’t directly involved in collecting. I’ve always been very familiar with the data I was working with, as well as with the project it’s connected to. I have neither of those safety nets on this project, and it is really testing my abilities, which is both exciting and terrifying. There is no backup plan if I am unable to get this done, so the pressure is really on. Personally, I’m not a fan of pressure; I like to have things well laid out in advance, with mini-milestones to keep me on track and keep the task from feeling overwhelming.

I just hope I’m able to rise to the challenge without completely freaking out.

Writing your dissertation seems like the perfect time to learn new software, no? As Laura mentioned, she’s starting to use NVivo for her analysis, and I’m doing the same. It’s a new program for our lab, but it already looks very powerful, combining multiple types of data within the same project. For me, that’s audio, video, and transcripts, of course, but I’m also finding that I will probably be able to link the imagery I used to particular parts of the transcripts, which means I should be able to connect them easily in the actual dissertation write-up. That could prove incredibly useful, as I have so many images that are virtually the same yet subtly different, with the topic and level of scaffolding varying just slightly. I don’t think describing the “levels” of scaffolding in words would be quite the same. It may mean a lot of color images for my dissertation printing, though. Hm, another thing to figure out!

I’m also diving into the new eye-tracking tools, which are also powerful for that analysis but still tricky in terms of managing licenses across computers when I’m collecting data in one place and analyzing it in another. We’re certainly epitomizing free-choice learning in that sense, learning on demand to use the tools we need in order to accomplish specific tasks. One could wish we’d had real data to practice these tools on before (or money to purchase them: NVivo and StudioCode, another powerful tool for on-the-fly video coding, are not cheap). Between that and the IRB process, I’m realizing this dissertation process is even more broadly about all the associated stuff that comes with doing research (not to mention budgeting, scheduling, grant proposing …) than it is about the final project and particular findings themselves. I’m sure someone told me this in the beginning, but it’s one of those “you don’t believe it until you see it” sorts of things.

What “else” have you learned through your research process?

Happy new year everyone!

After all the fun and frivolity of the holiday season, I am left with not only the feeling that I probably shouldn’t have munched all those cookies and candies, but also the grave realization that crunch time for my dissertation has commenced. I’d like to have it completed by spring and, just like Katie, I’ve hit the analysis phase of my research and am desperately trying not to fall into the pit of never-ending data. All you current and former graduate students out there, I’m sure you can relate: all those wonderful hours, weeks, and months I have to look forward to, frantically trying to make sense of the vast pool of data I have spent the last year planning for and collecting.


But fear not! ’Tis qualitative data, sir! And seeing as I have really enjoyed working with my participants and collecting data so far, I am going to attempt to enjoy discovering the outcomes of all my hard work. To me, the beauty of working with qualitative data is developing pictures of the answers to the questions that initiated the research in the first place. It’s like assembling a jigsaw puzzle with only a rough idea of what the image might look like at the end: you slowly keep adding pieces until that image comes clear. I’m looking forward to seeing that image.

So what do I have to analyze? Well, ~20 interviews with docents, ~75 docent observations, ~100 visitor surveys, and 2 focus groups (which will hopefully take place in the next couple of weeks). I will be using the research analysis tool NVivo, which will aid me in cross-analyzing the different forms of data using a thematic coding approach: analyzing for recurring themes within each data set. What I’m particularly psyched about is getting into the video analysis of the participant observations, whereby I’m finally going to get the chance to unpack some of that docent practice I’ve been harping on about for the last two years. Here, I’ll be taking a little multimodal discourse analysis and a little activity theory to break down the docent-visitor interactions and interpretive strategies observed.
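Outside of NVivo itself, the cross-data-set piece of thematic coding boils down to tallying how often each theme recurs within each source. Here’s a toy sketch of that tally in Python; the data sources are mine, but the theme names and segments are invented for illustration, not drawn from my actual coding:

```python
from collections import Counter

# Hypothetical coded segments as (data_source, theme) pairs, the kind of
# output a first thematic pass might produce (theme names invented).
coded_segments = [
    ("interview", "analogy use"), ("interview", "question prompting"),
    ("observation", "analogy use"), ("observation", "object handling"),
    ("survey", "question prompting"), ("observation", "analogy use"),
]

# Count how often each theme recurs within each data set.
counts = Counter(coded_segments)
for (source, theme), n in sorted(counts.items()):
    print(f"{source:12s} {theme:20s} {n}")
```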

Right now, the enthusiasm is high! Let’s see how long I can keep it up 🙂 It’s a Kilimanjaro of a climb, but there’s no turning back now.


And now it comes to this: thesis data analysis. I am doing qualitative analysis of the interviews and, mostly, quantitative analysis of the eye-tracking data. However, I will also quantify some of the interview coding and “qualify” the eye-tracking data, mainly by analyzing the paths and orders in which people view the images.

So now the questions become, what exactly am I looking for, and how do I find evidence of it? I have some hypotheses, but they are pretty general at this point. I know that I’m looking for differences between the experts and the non-experts, and among the levels of scaffolding for the non-experts in particular. For the interviews, that means I expect experts will 1) have more correct answers than the non-experts, 2) have different answers from the non-experts about how they know the answers they give, 3) be able to answer all my questions about the images, and 4) have basically similar meaning-making across all levels of scaffolding. This means I have a general idea of where to start coding, but I imagine my code book will change significantly as I go.

With the eye-tracking data, I’ll also be trying to build the model as I go, especially as this analysis is new to our lab. With the help of a former graduate student in the Statistics department, I’ll be starting with the most general differences: whether the number of fixations (as defined by a minimum dwell time within a maximum-diameter area) differs significantly 1) between experts and non-experts overall, with all topics and all images included, 2) between supposedly-maximally-different unscaffolded vs. fully-scaffolded images, with both populations included, and 3) between experts looking at unscaffolded images and non-experts looking at fully-scaffolded images. At this point, I expect significant differences in cases 1 and 2, but hope that, if case 3 is significant, the difference there will at least be smaller, indicating that the non-experts are indeed moving closer to the patterns of experts when given scaffolding. However, this may not reveal itself in the eye-tracking: the populations could make similar meaning, as reflected in the interviews, without having the same patterns of eye movements; that is, non-experts might be less efficient than experts but still eventually arrive at a better answer with scaffolding than without.
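To make comparison 1 concrete, here’s a minimal sketch of the kind of test we might start with, assuming each participant’s fixations have already been counted up across all images. The counts below are invented for illustration, and the choice of Welch’s t-test with a rank-based backup is my assumption, not something we’ve settled with the statistician:

```python
from scipy import stats

# Hypothetical per-participant fixation counts, aggregated across all
# topics and images (numbers invented for illustration).
experts = [212, 198, 240, 225, 190, 205, 218]
non_experts = [310, 295, 330, 342, 288, 301, 315, 322]

# Welch's t-test: compares group means without assuming equal variances.
t_stat, p_value = stats.ttest_ind(experts, non_experts, equal_var=False)
print(f"Welch's t = {t_stat:.2f}, p = {p_value:.4f}")

# Fixation counts are often skewed, so a rank-based test is a useful check.
u_stat, p_u = stats.mannwhitneyu(experts, non_experts, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.4f}")
```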

As for the parameters of the eye-tracking, the standard minimum dwell time for a fixation in our software is 80 ms, and the maximum diameter is 100 pixels, but again, we have no standard for this in the lab, so we’ll play around with these values and see whether the results hold up at smaller dwell times, or at least smaller diameters, or whether new ones appear. My images are only 800×600 pixels, so a maximum diameter of 1/6th to 1/8th of the image seems rather large. Some of this will be mitigated by the use of areas of interest drawn on the image, where the distance between areas could dictate a smaller maximum diameter, but at this point, all of this remains to be seen, and to some extent the analysis will be very exploratory.
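For anyone curious what that fixation definition looks like operationally, below is a minimal sketch of a dispersion-based fixation filter in the spirit of the I-DT algorithm, assuming gaze samples arrive as (timestamp in ms, x, y) tuples sorted by time. The 80 ms / 100 px defaults mirror our software’s settings, but the details (e.g., approximating diameter by the larger of the x- and y-extents) are my illustration, not the vendor’s actual implementation:

```python
def _diameter(points):
    # Approximate diameter as the larger of the x- and y-extents.
    xs = [p[1] for p in points]
    ys = [p[2] for p in points]
    return max(max(xs) - min(xs), max(ys) - min(ys))

def detect_fixations(samples, min_duration_ms=80, max_diameter_px=100):
    """Dispersion-based (I-DT style) fixation detection.

    samples: list of (t_ms, x, y) gaze points, sorted by time.
    Returns a list of (start_ms, end_ms, centroid_x, centroid_y).
    """
    fixations = []
    i, n = 0, len(samples)
    while i < n:
        # Grow an initial window spanning at least min_duration_ms.
        j = i
        while j < n and samples[j][0] - samples[i][0] < min_duration_ms:
            j += 1
        if j >= n:
            break
        if _diameter(samples[i:j + 1]) <= max_diameter_px:
            # Expand while the points still fit within the dispersion limit.
            while j + 1 < n and _diameter(samples[i:j + 2]) <= max_diameter_px:
                j += 1
            window = samples[i:j + 1]
            xs = [p[1] for p in window]
            ys = [p[2] for p in window]
            fixations.append((window[0][0], window[-1][0],
                              sum(xs) / len(xs), sum(ys) / len(ys)))
            i = j + 1
        else:
            i += 1  # Slide the window forward one sample.
    return fixations

# Example: three tight samples over ~90 ms form one fixation; the
# far-away fourth sample does not join it.
samples = [(0, 400, 300), (40, 405, 298), (90, 402, 303), (130, 700, 100)]
print(detect_fixations(samples))
```

Re-running something like this over a grid of dwell times and diameters would be one way to check whether the fixation counts, and any group differences, are robust to the parameter choices.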

That’s the plan at the moment; what are your thoughts, questions, and/or suggestions?