Members of the Cyberlab were busy this week. We set up the multi-touch table and touch wall in the Visitors Center and hosted Kate Haley Goldman as a guest researcher. In preparation for her visit, we adjusted camera and table placement, tinkered with microphones, and tested the data collection pieces by reviewing the video playback. It was a great opportunity to evaluate our lab setup for other incoming researchers and their data collection needs, and to try things live with the Ideum technology!

Kate traveled from Washington, D.C. to collect data on the Open Exhibits interactive content displayed on our table. As the Principal of Audience Viewpoints, Kate conducts research on audiences and learning in museums and informal learning centers. She is investigating the use of multi-touch technology in these settings, and we are thankful for her insight as we implement this exhibit format at Hatfield Marine Science Center.

Watching the video playback of visitor interactions with Kate was fascinating. We discussed flow patterns around the room based on table placement. We looked at how stay time at the table varied with program content. As the day progressed, more questions came up. How long were visitors staying at the other exhibits, which have live animals, versus the table placed nearby? While they were moving about the room, would visitors return to the table multiple times? What were the demographics of the users? Were they bringing their social group with them? What were the users talking about: the technology itself or the content on the table? Was the technology intuitive to use?
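
Thinking about those stay-time and return-visit questions, here is a minimal sketch of how timestamped observations pulled from the video playback could be turned into dwell times and repeat-visit counts. The log format, exhibit names, and times below are all hypothetical, not our actual coding scheme.

```python
# Hypothetical sketch: compute stay time and repeat visits per group and exhibit
# from timestamped video observations. The log entries below are made up.
import pandas as pd

log = pd.DataFrame([
    {"group": "A", "exhibit": "touch table", "arrive": "14:02", "leave": "14:09"},
    {"group": "A", "exhibit": "live animals", "arrive": "14:10", "leave": "14:21"},
    {"group": "A", "exhibit": "touch table", "arrive": "14:22", "leave": "14:25"},
])

# Minutes spent at each stop
log["minutes"] = (
    pd.to_datetime(log["leave"], format="%H:%M")
    - pd.to_datetime(log["arrive"], format="%H:%M")
).dt.total_seconds() / 60

# Total stay time ("sum") and number of separate visits ("count") per group and exhibit
print(log.groupby(["group", "exhibit"])["minutes"].agg(["sum", "count"]))
```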

I felt the thrill of the research process this weekend. It was a wonderful opportunity to “observe the observer” and witness Kate in action. I enjoyed watching visitors use the table and thinking about the interactions between humans and technology. How effective is it to present science concepts in this format, and are users learning something? I will reflect on this experience as I design my research project around science learning and the use of multi-touch technology in an informal learning environment such as Hatfield Marine Science Center.

With IRB approval “just around the corner” (ha!), I’ve been making sure everything is in place so I can hit the ground running once I get the final approval.  That means checking back over my selection criteria for potential interviewees.  For anyone who doesn’t remember, I’m doing phone interviews with COASST citizen science volunteers to see how they describe science, resource management, and their role in each.

I had originally hoped to do some fancy cluster analyses to group people using the big pile of volunteer survey data I have.  How were people answering survey questions?  Does it depend on how long people are involved in the program, or how many birds they’ve identified?  … Nope. As far as I could tell, there were no patterns relevant to my research interests.
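
For what it’s worth, the kind of analysis I had in mind looks roughly like the sketch below. The file name and survey variables (years in the program, birds identified, and so on) are hypothetical stand-ins for the actual COASST survey columns.

```python
# A rough sketch of clustering volunteers on survey variables with k-means.
# The CSV name and column names are assumptions, not the real survey export.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

survey = pd.read_csv("coasst_volunteer_survey_2012.csv")  # hypothetical file
features = survey[["years_in_program", "birds_identified", "surveys_completed"]].dropna()

# Standardize so no single variable dominates the distance calculation
scaled = StandardScaler().fit_transform(features)

# Try a few cluster counts and see whether any grouping looks meaningful
for k in range(2, 6):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scaled)
    print(k, pd.Series(labels).value_counts().to_dict())
```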

After a lot of digging through the survey data, I felt like I was back at square one. Shawn asked me, “Based on what you’re interested in, what information would you NEED to be able to sort people?” My interview questions focus on people’s definitions of science and resource management, and their descriptions of their roles in COASST, science, and resource management. I expect their responses have a lot to do with their worldview, their experience with science, and what they think about the role of science in society. Unfortunately, these questions were not included in the 2012 COASST volunteer survey.

As is so often the case, what I need and what I have are two different things. When I looked through what I do have, there were several survey questions at least somewhat related to my research interest. I’ve struggled with determining which questions are the most relevant. Or I should say, I’ve struggled with making sure I’m not creating arbitrary groupings of volunteers and expecting those to hold through the analysis phase of my project.

This process of selecting interviewees based on survey responses makes me excited to create my own surveys in the future! That way I could ask questions specifically designed to help me create groupings. Until then, I’m trying to make do with what I have!

I have been coding my qualitative interview data in one fell swoop, trying to get everything done for the graduation deadline. It feels almost like a class project that I’ve put off, as usual, longer than I should have. In a conversation with another grad student about timelines, and how I’ve been sitting on this data since, oh, November or so (at least a good chunk of it), we speculated about why we don’t tackle it in smaller chunks. One reason for me, I’m sure, is just a general fear of failure, or whatever drives my procrastinating and perfectionist tendencies (remember, the best dissertation is a DONE dissertation; we’re not here to save the world with this one project).

However, another reason occurs to me as well: I collected all the data myself, and I wonder if I was too close to it while collecting it. I certainly had to prioritize finishing the collection, considering the struggles I had getting subjects to participate, delays with the IRB, etc. But I wonder if it has actually been better to leave it all for a while and come back to it. I guess if I had really done the interview coding before the eye-tracking, I might have shaped the eye-tracking interviews a bit differently, but I think the main adjustments I made based on the interviews were sufficient without coding (i.e., I recognized how much the experts were simply seeing that the images were all the same, and I couldn’t come up with tasks difficult enough for them, really). The other reason to have coded the interviews first would have been to separate my interviewees into high- and low-performers, if the data proved to be that way, so that I could invite sub-groups for the eye-tracking. But I ended up, again due to recruitment issues, just getting whoever I could from my interview population to come back. And now, I’m not really sure there are any high- or low-performers among the novices anyway; they each seem to have their strengths and weaknesses at this task.

Other fun with coding: I have a mix of basically closed-ended questions that I am scoring with a rubric for correctness, and then open-ended “how do you know” semi-clinical interview questions. Since I eventually repeated some of these questions across the various versions of the scaffolded images, my subjects started to conflate their answers, and parsing those apart is truly a pleasure (NOT). I’m also up to some 120 codes, and keeping them all in mind as I go is just nuts. Of course, I have only done the first pass, and since I created codes as I went, I have to turn around and re-code the earlier transcripts for the codes I created later. I’m still stressing about whether I’m finding everything in every transcript, especially the more obscure codes. I have one that I’ve dubbed “Santa” because two of my subjects said they know the poles of Earth are cold because they learned that Santa lives at the North Pole, where it’s cold. So I’m now wondering if there was any other evidence of non-science reasoning that I missed. I don’t think this is a huge problem; I’m fairly confident my coding is thorough, but I’m also at that stage of crisis where I’m not sure any of this is good enough as I draw closer to my defense!
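
One small thing that eases the “did I miss it?” worry is a quick keyword scan of the transcripts for codes created late in the first pass. Here is a minimal sketch of that idea; the folder name, code names, and keyword lists are hypothetical, and a keyword hit only flags a passage to re-read rather than replacing the actual coding.

```python
# Hypothetical second-pass helper: flag transcripts containing keywords tied to
# codes that were created late in the first pass (e.g., the "Santa" code), so
# those passages can be re-read and hand-coded. Paths and keywords are made up.
from pathlib import Path

late_codes = {
    "Santa": ["santa", "north pole"],
    "personal_experience": ["when i visited", "i remember"],
}

for transcript in sorted(Path("transcripts").glob("*.txt")):
    text = transcript.read_text().lower()
    flagged = {
        code: [kw for kw in keywords if kw in text]
        for code, keywords in late_codes.items()
    }
    flagged = {code: hits for code, hits in flagged.items() if hits}
    if flagged:
        print(transcript.name, flagged)
```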

Other fun facts: I also find myself agonizing over what to call codes, when the description is more important. And it’s also a very humbling look at how badly I (feel like I) conducted the interviews. For one thing, I asked all the wrong questions, as it turns out – what I expected people would struggle with, they didn’t really, and I didn’t have good questions ready to probe for what they did struggle with. Sigh. I guess that’s for the next experiment.

The good stuff: I do have a lot of good data about people’s expectations of the images and the topics, especially when there are misunderstandings. This will be important as we design new products for outreach, both the images themselves and the supporting information that must go alongside them. I also sorta thought I knew a lot about this data going into the coding, but the number of new codes with each subject is surprising, and it’s gratifying that maybe I did get some information out of this task after all. Finally, I’m learning that this is an exercise in throwing stuff out, too. I was overly ambitious in my proposal about all the questions I could answer, and I collected a lot more data than I can use at the moment. So, as is a typical part of the research process, I have to choose what fits the story I need to tell to get the dissertation (or paper, or presentation) done for the moment, and leave the rest aside for now. That’s what all those post-dissertation papers are for, I guess!

What are your adventures with/fears about coding or data analysis? (besides putting it off to the last minute, which I don’t recommend).

Last month I wrote about Literacy in the 21st Century and the wonderful new project evaluation I’m working on, Project SEAL. First, I want to share a blog post that the Model Classroom team wrote about their time with the Project SEAL teachers during the February professional development: http://www.modelclassroom.org/blog/2013/03/projectsealoregonpd-intro.html. It has a wonderful synopsis of the two days as well as some teacher reflections.

Since the February professional development, I have turned my attention to the family literacy nights. I have never attended a family literacy night; they were not part of my K-12 experience, and I have never encountered them as a researcher/evaluator. The Project SEAL team told me that literacy nights can differ greatly and that they did not set standards for the schools to follow for these events. This presented some trouble for me as an evaluator: how can you standardize an evaluation tool for something that looks different each time?

After some conversations with the Project SEAL team, we decided on a short and sweet survey: something parents would be willing to fill out during the night, focused on literacy and ocean science resource use as well as the structure of the event. We hope that these literacy nights 1) lead to families checking out ocean-related books (purchased for the libraries through the grant), 2) give parents an opportunity to see the technology being incorporated into literacy (the grant also bought a classroom set of iPad minis for each school), and 3) give teachers and students time to present on learning experiences they’ve had with the iPads and the new reading material available in the library. Here are the questions on the Family Literacy Night survey.

1) What was your (or your child’s) favorite part of this Family Literacy Night?

2) What went well during this Family Literacy Night?

3) What suggestions for improvement do you have for future Family Literacy Nights?

4) What did you hope to take away from tonight’s Family Literacy Night?  (check all that apply)

More activities and games to do at home

Information on what is being done in my child’s classroom

Information on assessment in reading and writing

Information about how children learn to read and write

Information on how to work with the school and my child’s teacher

New resources available in the library

Ways to use technology with my child at home

How my child’s class has been using library resources

5) You or your child have checked out ocean science resources to read together at home.

6) Your child presented or talked about a class project at this Family Literacy Night.

7) You learned what you wanted to learn tonight.      Agree / Neutral / Disagree

8) Tonight I gained new information about ocean science resources available to my child through his/her school library.     Agree / Neutral / Disagree

Hopefully the data will be useful not only in demonstrating the effectiveness of this project but also in giving the schools some ideas for future family literacy nights.
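
Once surveys start coming back, the tallying itself should be simple. Here is a rough sketch of what that might look like; the CSV file name and column names are assumptions about how responses would be entered, not the actual instrument export.

```python
# Rough sketch of tallying Family Literacy Night surveys. The file name and
# column names are hypothetical; they assume one 0/1 column per check-all-that-apply
# option and one text column per Agree/Neutral/Disagree item.
import pandas as pd

responses = pd.read_csv("family_literacy_night_survey.csv")  # hypothetical export

# Question 4: which takeaways parents hoped for (check all that apply)
takeaway_cols = [c for c in responses.columns if c.startswith("q4_")]
print(responses[takeaway_cols].sum().sort_values(ascending=False))

# Questions 7 and 8: Agree / Neutral / Disagree items
for col in ["q7_learned_what_wanted", "q8_new_ocean_resources"]:
    print(col, responses[col].value_counts().to_dict())
```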

I have just about nailed down a defense date. That means I have about two months to wrap all this up (or warp it, as I originally typed) into a coherent, cohesive narrative worthy of a doctoral degree. It’s amazing to me to think it might actually be done one of these days.

Of course, in research, there’s always more you can analyze about your data, so in reality I have to make some choices about what goes in the dissertation and what has to remain for later analysis. For example, I “threw in” some plain world images in the eye-tracking as potential controls, just to see how people might look at a world map without any data on it. Not that there really is such a thing; technically any image has some sort of data on it, as it is always representing something, even this one:

[Image: a plain grey world map with no dataset overlaid]

Here, the continents are darker grey than the ocean, so it’s a representation of the Earth’s current land and ocean distinctions.

I also included two “blue marble” images, essentially images of Earth as if seen from space, without clouds and all in daylight simultaneously: one with the typical northern-hemisphere “north-up” orientation, the other “south-up,” as the world is often portrayed in Australia, for one. However, I probably don’t have time to analyze all of that right now, at least not if I also want to complete the dissertation on schedule. The best dissertation is a done dissertation, not one that is perfect or answers every single question! If it did, what would the rest of my career be for?

So a big part of the research process is making tradeoffs in how much data to collect: enough to anticipate the problems you might run into and the questions you might want to examine, but not so much that you lose sight of your original, specific research questions and get mired in analysis forever. Thinking about what does and doesn’t fit in the particular framework I’ve laid out for analysis is part of this, too. That means making smart choices about how to sufficiently answer your questions and address major potential problems with the data you have, while letting go and letting some questions remain unanswered. At least for the moment. That’s the major task in front of me right now, with both my interview data and my eye-tracking data. At least I’ve finished collecting data for the dissertation. I think.

Let the countdown to defense begin …