I finished the edits and all the various fee-paying and archiving that come along with completing a dissertation. My transcript finally reflects that I completed all the requirements … so now what? I have a research position waiting for me to start in July, but as I alluded to before, what exactly do I research?

In some ways, the possibilities are wide open. I can stick with visualizations, sure, and expand that into animations, or continue the in situ work in the museum. I may try to do the latter with the new camera system at HMSC as a remote data collector, as there isn't a spherical system nearby that I'm aware of in my new position.

I could also start to examine modeling, a subject I danced around a bit during the dissertation (I had to write a preliminary exam question on how it related to my dissertation topic). Modeling, simulation, and representation are big in the Next Generation Science Standards, so there's likely money there.

Another topic of interest dovetails with Laia's work on public trust and Katie Woollven's work on the nature of science: broader questions of what is meant by "science literacy" and just why science is pushed so hard by proponents of education. I want to know how, when, and, most importantly, why adults search for scientific information. By understanding why people seek information, we can better understand what problems exist in accessing the types of information they need and focus our efforts accordingly. A component of this research could also explore the identity of non-professionals as scientists or as capable consumers of academic science information.

Finally, I want to know how all this push toward outreach and especially toward asking professional scientists to be involved in or at least fund outreach around their work impacts their professional lives. What do scientists get out of this emphasis on outreach, if anything? I imagine there are a range of responses, from sheer aggravation and resentment to pure joy at getting to share their work. Hopefully there exists a middle ground where researchers recognize the value and even want to participate to some extent in outreach but are frustrated by feeling ill-equipped to do so. That’s where my bread and butter is – in helping them out through designing experiences, training them to help, or delivering the outreach myself, while building in research questions to advance the field at the same time.

Either way, it's exciting! I hope to be able to blog here from time to time in the future as my work and the lab allow, though I will be officially done at OSU before my next turn to post on my research work. Thanks for listening.

Last week, I talked about our eye-tracking in the science center at the Museums and the Web 2013 conference, as part of a track on Evaluating the Museum. This was the first time I’d attended this conference, and it turned out to be very different from others I’d attended. This, I think, meant that eye-tracking was a little ahead of where the audience of the conference was in some ways and behind in others!

Many of the attendees seemed to be from the art museum world, which has some issues different from and some similar to those of science centers – we even have generally separate professional organizations (the American Association of Museums and the Association of Science and Technology Centers, respectively). In fact, the opening plenary speaker, Larry Fitzgerald, made the point that museums should be thinking of ways they can distinguish themselves from formal schools. He suggested that a lot of the ways museums currently try to get visitors to "think" look very much like the ways people think in schools, rather than the ways people think "all the time." He mentioned "discovery centers" (which I took to mean interactive science centers) as places that are already trying to leverage the ways people naturally think (hmm, free-choice learning much?).

The Twitter reaction and the tone of other presentations made me think that this was actually a fairly revolutionary idea for a lot of folks there. My sense is that this probably stems from institutional cultures that discourage much of that kind of thinking, except at places like the Santa Cruz Museum of Art and History, where Nina Simon is revamping the place around participation by community members.

So, overall, eye-tracking and studying what our visitors do was also a fairly foreign concept; one tweet wondered whether a museum's mission even needed to be visitor-centric. Maybe museums that don't have to rely on ticket sales can rest on that, but the conference was trying to push the idea that museums are changing: away from places where people come to find the answer, or the truth, and toward places of participation. That also means some museums may be lagging on the idea of getting funding to study visitors at all, let alone spending large amounts on "capital" equipment, and since eye-trackers are expensive technologies designed for basically that one purpose, it seemed just a little ahead of where some of the conference participants were. I'll have to check back in a few years and see how things are changing. As we talked about in our lab meeting this morning, a lot of diversity work in STEM free-choice learning is happening not in academia but in (science) museums. Maybe that will change in a few years as well, as OSU continues to shape its Science and Mathematics Education faculty and graduate programs.

For my blog post today I have been thinking about many different things. Now that it is time to write, I am going to go with the topic that has mostly been on my mind – testing. I know testing is not truly a free-choice learning topic, since it is usually associated with standard school functions; however, I want to suggest that the more experiences you have outside of school, the better, in theory, you should do on tests. With that said, I am truly not a fan of standardized testing. Recently I read an article about a teacher in New York who retired after over 20 years of teaching and claimed that he no longer had a profession. This article struck me and made me think of what we do with our research in the free-choice learning arena. We try to document various experiences that people have and ponder what meaning those experiences have in their lives. Will this experience help them understand a particular concept better? Will it expand their thinking in a particular area – for example, environmental issues? Will the experience of being in a free-choice learning setting influence the participant to be more "open" to accepting new experiences, such as touching animals in a touch tank or petting zoo? I'm not sure, and as a group we were all looking at various data sets to reflect on these questions.

So how is this related to testing? Well, our free-choice learning environments are tied to formal environments in many ways. The participants typically have had some sort of schooling, which helps shape the background knowledge they bring with them. If what I am hearing from my teacher friends is true, along with the account from the recently retired educator, then the experiences students are receiving in formal schools are largely focused on standardized testing. UGH! To my mind, this is very limiting: it restricts active conversation between teacher and students and imposes a timeline of pre-planned topics that removes the free flow of ideas.

How can we as educators and researchers in the free-choice arena use this information when planning, and when trying to implement change within the overall educational system? Do we still use any form of testing within our field? How is this testing different from the standardized tests given in the formal setting? Food for thought and, hopefully, future conversations.

When you have a new idea in a field as steeped in tradition as science or education, how can you, as a newcomer, encourage discussion, at the very least, while still presenting yourself as a professional member of your new field? This was at the heart of some discussion that came up this weekend after Shawn and I presented his "Better Presentations" workshop. The HMSC graduate student organization, HsO, was hosting the annual exchange with the University of Oregon's Oregon Institute of Marine Biology grad students, who work at the UO satellite campus in Charleston, Oregon, a ways south on the coast from Newport.

The heart of Shawn's presentation is built around learning research that suggests better ways to build the visuals that accompany your professional presentation. For most of the audience, that meant slides or posters for scientific research talks at conferences, as part of proposal defenses, or just within one's own research group. Shawn suggests ways to break out of what has become a pretty standard default: slides crowded with bullet points, figures that are at best illegible and at worst incomprehensible, and in general too much content crammed onto single slides and into the overall presentation.

The students were eager to hear about the research foundations of his suggestions, but then raised a concern: how far could they go in pushing the envelope without jeopardizing their entry into the field? That is, if they used a Prezi instead of a PowerPoint, would they be dismissed as using a stunt and their research work overlooked, perhaps in front of influential members of their discipline? Or, if they don’t put every step of their methodology on their poster and a potential employer comes by when they aren’t there, how will that employer know how innovative their work is?

Personally, my reaction was to think: do you want to work with these people if that’s their stance? However, I’m in the enviable position of having seen my results work – I have a job offer that really values the sort of maverick thinking (at least to some traditional science educators) that our free-choice/informal approach offers. In retrospect, that’s how I view the lack of response I got from numerous other places I applied to – I wouldn’t have wanted to work with them anyway if they didn’t value what I could bring to the table. I might have thought quite differently if I were still searching for a position at this point.

For the grad student, especially, it struck me that it’s a tough row to hoe. On the one hand, you’re new to the field, eager, and probably brimming with new ideas. On the other, you have to carefully fit those ideas into the traditional structure in order to secure funding and professional advancement. However, how do you compromise without compromising too far and losing that part of you which, as a researcher, tells you to look at the research for guidance?

It occurred to me that I will have to deal with this as I go into my new position which relies on grant funding after the first year. I am thinking about what my research agenda will be, ideally, and how I may or may not have to bend that based on what funding is available. One of my main sources of funding will likely be through helping scientists do their broader impacts and outreach projects, and building my research into those. How able I am to pick and choose projects to fit my agenda as well as theirs remains to be seen, but this conversation brought me around to thinking about that reality.

As Shawn emphasized at the beginning of the talk, the best outreach (and honestly, probably the best project in any discipline, be it science, business, or government assistance) is designed with the goals and outcomes in mind first, with the tools and manner of achieving those goals picked only afterwards. We sometimes lament the amazing number of very traditional outreach programs that center around a classroom visit, for example, and wonder if we can ever convince the scientists we partner with that there are new, research-based ways of doing things (see Laura's post on the problems some of our potential partners have with our ways of doing research). I will be fortunate indeed if I find funding partners who believe the same, or who are at least willing to listen to what may be a new idea about outreach.

I have been coding my qualitative interview data all in one fell swoop, trying to get everything done for the graduation deadline. It feels almost like a class project that I've put off, as usual, longer than I should have. In a conversation with another grad student about timelines, and about how I've been sitting on this data (at least a good chunk of it) since, oh, November or so, we speculated about why we don't tackle it in smaller chunks. One reason for me, I'm sure, is just general fear of failure or whatever drives my general procrastinating and perfectionist tendencies (remember, the best dissertation is a DONE dissertation – we're not here to save the world with this one project).

However, another reason occurs to me as well: I collected all the data myself, and I wonder if I was too close to it while collecting it. I certainly had to prioritize finishing the collection, considering the struggles I had getting subjects to participate, delays with the IRB, etc. But I wonder whether it has actually been better to leave it all for a while and come back to it. I guess if I had really done the interview coding before the eye-tracking, I might have shaped the eye-tracking interviews a bit differently, but I think the main adjustments I made based on the interviews were sufficient without coding (i.e., I recognized how much the experts were just seeing that the images were all the same, and I couldn't really come up with difficult enough tasks for them). The other reason to have coded the interviews first would have been to separate my interviewees into high- and low-performing groups, if the data proved to be that way, so that I could invite sub-groups for the eye-tracking. But I ended up, again due to recruitment issues, just getting whoever I could from my interview population to come back. And now I'm not really sure there are any high or low performers among the novices anyway – they each seem to have their strengths and weaknesses at this task.

Other fun with coding: I have a mix of basically closed-ended questions that I am scoring with a rubric for correctness, and then open-ended "how do you know" semi-clinical interview questions. Since I eventually repeated some of these questions for the various versions of the scaffolded images, my subjects started to conflate their answers, and parsing those apart is truly a pleasure (NOT). I'm also up to some 120 codes, and keeping those all in mind as I go is just nuts. Of course, I have only done the first pass, and since I created codes as I went, I now have to turn around and re-code the earlier transcripts for the codes that didn't exist yet when I coded them. I'm still stressing over whether I'm finding everything in every transcript, especially the more obscure codes. I have one that I've dubbed "Santa" because two of my subjects referred to knowing the poles of Earth are cold because they learned that Santa lives at the North Pole where it's cold. So I'm now wondering if there was any other evidence of non-science reasoning that I missed. I don't think this is a huge problem; I am fairly confident my coding is thorough, but I'm also at that stage of crisis where I'm not sure any of this is good enough as I draw closer to my defense!
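For anyone wrestling with the same bookkeeping headache, here is a minimal toy sketch (in Python) of one way you might track which transcripts still need a second pass for codes created after they were first coded. Everything in it, including the transcript names, the helper function, and the code names other than "Santa," is made up for illustration; it is not my actual workflow or tools.

```python
# A toy sketch (not the author's actual workflow) of the bookkeeping problem
# described above: codes created partway through a first pass mean every
# transcript coded *before* that point needs a re-pass for those codes.
# All names and numbers here are hypothetical.

# Transcripts in the order they were coded on the first pass.
transcripts = ["novice_01", "novice_02", "expert_01", "novice_03", "expert_02"]

# For each code, the index of the transcript being coded when it was created,
# e.g. "Santa" didn't exist until the fourth transcript (index 3) was being coded.
code_created_at = {
    "prior_knowledge": 0,
    "map_reading": 1,
    "Santa": 3,           # non-science reasoning: Santa lives at the cold North Pole
    "scale_confusion": 4,
}

def repass_plan(transcripts, code_created_at):
    """Return {transcript: [codes it was never checked against]}."""
    plan = {}
    for idx, name in enumerate(transcripts):
        # Any code created after this transcript was coded needs a re-pass here.
        missing = [code for code, created in code_created_at.items() if created > idx]
        if missing:
            plan[name] = sorted(missing)
    return plan

if __name__ == "__main__":
    for name, codes in repass_plan(transcripts, code_created_at).items():
        print(f"{name}: re-code for {', '.join(codes)}")
```

Running it just prints, for each earlier transcript, the later-created codes to check it against on the second pass, which is essentially the to-do list I am working through by hand.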

Other fun facts: I also find myself agonizing over what to call codes, when the description is more important. And it’s also a very humbling look at how badly I (feel like I) conducted the interviews. For one thing, I asked all the wrong questions, as it turns out – what I expected people would struggle with, they didn’t really, and I didn’t have good questions ready to probe for what they did struggle with. Sigh. I guess that’s for the next experiment.

The good stuff: I do have a lot of good data about people's expectations of the images and the topics, especially when there are misunderstandings. This will be important as we design new products for outreach, both the images themselves and the supporting information that must go alongside them. I also sort of thought I knew a lot about this data going into the coding, but the number of new codes with each subject is surprising, and it's gratifying that maybe I did get some information out of this task after all. Finally, I'm learning that this is an exercise in throwing stuff out, too – I was overly ambitious in my proposal about all the questions I could answer, and I collected a lot more data than I can use at the moment. So, as is typical of the research process, I have to choose what fits the story I need to tell to get the dissertation (or paper, or presentation) done for the moment, and leave the rest aside for now. That's what all those post-dissertation papers are for, I guess!

What are your adventures with, or fears about, coding or data analysis? (Besides putting it off until the last minute, which I don't recommend.)