About Katie Stofer

Research Assistant Professor, STEM Education and Outreach, University of Florida
PhD, Oregon State University Free-Choice Learning Lab

Yes, we failed to change the default password on the cameras we installed. Someone managed to get ahold of the IP addresses and guess the login and password. We escaped with only minor headaches: all that happened was that they uploaded a few “overlay” images that appeared on some of the camera feeds and left a few text messages that were mostly warnings to us about cybersecurity.

The hacker did change a few of the camera passwords, so there were some from which we could not just delete the images. This has meant various levels of hassle to reset the cameras to their defaults. For the white brick cameras, 30 seconds of holding a control button while the power cycles was sufficient; I didn’t even have to reset the IP address. For the dome cameras, it’s a bit more complex, as the IP address has to be reset, and I wasn’t around for that part originally, so I’ll have to consult IT.

However, it makes us wonder about the wisdom of having even the camera views available on the web without a password, something we hadn’t realized was the case before. You’d need the IP address to get to a view, but our IP addresses are mostly sequential (depending on the day and which cameras are installed), so once you found one, you could visit each of the others if you liked. There is an option to turn this off, however, which I have now switched, so that you need not only the IP address but also the username and password even to view the feed.
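For anyone wanting to audit their own setup, here is a minimal sketch (in Python) of what I mean by checking whether each feed demands a login before showing anything. The subnet, address range, and feed path below are hypothetical placeholders, so substitute your own cameras’ details.

    # Rough sketch: check which camera feeds answer without asking for a login.
    # The subnet, address range, and feed path are hypothetical placeholders;
    # substitute the details of your own cameras.
    import requests

    SUBNET = "192.168.1."      # hypothetical subnet the cameras sit on
    FEED_PATH = "/video.mjpg"  # hypothetical feed path; varies by camera model

    for last_octet in range(100, 110):  # hypothetical sequential addresses
        url = "http://{}{}{}".format(SUBNET, last_octet, FEED_PATH)
        try:
            response = requests.get(url, timeout=3, stream=True)
        except requests.RequestException:
            continue  # no camera at that address today
        if response.status_code == 401:
            print(url, "asks for a username/password (good)")
        elif response.status_code == 200:
            print(url, "shows the feed to anyone with the address (bad)")
        response.close()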

Moral of this part of the story? Explore the default settings and consider what they truly mean. Be a Nervous Nellie and a bit of a cynic: assume the worst so you can plan for it.

UPDATE 5/16/13: I couldn’t get the 3301 dome cameras reset despite following the unplug, hold control button, re-plug power sequence. Our IT specialist thinks the hacker may have actually reset the default password via the firmware, since they should have automatically reset themselves to the same IP addresses using DHCP. So those two cameras have been pulled and replaced while the hacked ones are off to the IT hospital for some sleuthing and probably a firmware reset as well. I’ll let you know what the resolution is.

Both Laura and I defend next week, which is why the blog has been a little quiet of late. So, hopefully, it’s the end of our dissertations and the beginning (or really, continuation) of careers working to create fun and engaging science learning opportunities for all. We both came into the program with a lot of years of actually doing outreach, a little bit of experience in designing programs, and even less in evaluating them. Now we’re set to leave with a great set of tools to maximize these programs and hopefully share the ideas we’ve learned with the broader field as we go.

So that’s set us to thinking about where we go from here. Now I have to build a broader research project that maybe builds off of the dissertation, but the dissertation was so self-contained, and relatively concrete in a way, that the idea of being able to do multiple things again is a bit daunting. I’m almost not sure where to begin! I will have some structure, of course, provided by the grant funding I get, and the partnerships I join. However, it’s important to think about what I want to achieve before I worry about the tools with which to do it – as always, start with the outcomes and work backwards.

It’s fortunate, then, that the lab group has started to discuss our broader research interests with the hope of finding where they intersect in order to guide future discussions. We’ve been using Prezi, creating frames for each sort of focus, then intending to “code” these frames by grouping those with similar topics and ideas. For example, one of my interests at this point is how “everyday scientist” adults keep current with professional science research developments, whether to use that information in their own personal and societal decisions, simply to keep tabs on how tax dollars are put to work, or for any other purpose they so desire. So, I’m interested in the hows, whens, and whys of everyday scientists accessing professional science information. This means I overlap with others in the group working with museum exhibits, but also with people interested in public dialogue events and, in general, the affordances and constraints around learning in these ways.

As the leader of the group, Shawn has mentioned that this has been an exercise he’s used to think about his broader research goals as well, simply writing down his areas of focus, looking back at what he’s done over the past few years, and looking forward to where he wants to go. It also helps him to see what’s matched with his previous plans, and how circumstances or opportunities have changed those plans. I’m grateful to have this fortuitously-timed example of long-term goal setting and building a broader agenda, especially in such a small field where it’s likely that this is the largest group of collaborators in one place that I’ll have for a while. Hopefully, though, I’ll have my own graduate students before too long and maybe even other colleagues who focus on outside-of-school learning as well.

What sorts of tools do you use for figuring out long-term, broad, and somewhat abstract research goals?

Last week, I talked about our eye-tracking in the science center at the Museums and the Web 2013 conference, as part of a track on Evaluating the Museum. This was my first time at this conference, and it turned out to be very different from others I’ve attended. That, I think, meant that eye-tracking was a little ahead of where the conference audience was in some ways and behind in others!

Many of the attendees seemed to be from the art museum world, which has some issues different from and some similar to those of science centers; we each have our generally separate professional organizations (the American Association of Museums and the Association of Science and Technology Centers, respectively). In fact, the opening plenary speaker, Larry Fitzgerald, made the point that museums should be thinking of ways they can distinguish themselves from formal schools. He suggested that a lot of the ways museums are currently trying to get visitors to “think” look very much like the ways people think in schools, rather than the ways people think “all the time.” He mentioned “discovery centers” (which I took to mean interactive science centers) as places that are already trying to leverage the ways people naturally think (hmm, free-choice learning much?).

The Twitter reaction and the tone of other presentations made me think that this was actually a relatively revolutionary idea for a lot of folks there. My sense is that this probably stems from an institutional culture that discourages much of that, except at places like the Santa Cruz Museum of Art and History, where Nina Simon is revamping the place around the participation of community members.

So, overall, eye-tracking and studying what our visitors do was also a fairly foreign concept; one tweet wondered whether a museum’s mission needed to be visitor-centric. Maybe museums that don’t have to rely on ticket sales can rest on that, but the conference was trying to push the idea that museums are changing, away from being places where people come to find the answer, or the truth, and toward being places of participation. That means some museums may also be lagging on the idea of getting funding to study visitors at all, let alone spending large amounts on “capital” equipment; since eye-trackers are expensive technologies designed basically only for that purpose, the work seemed just a little ahead of where some of the conference participants were. I’ll have to check back in a few years and see how things are changing. As we talked about in our lab meeting this morning, a lot of diversity work in STEM free-choice learning is happening not in academia but in (science) museums. Maybe that will change in a few years as well, as OSU continues to shape its Science and Mathematics Education faculty and graduate programs.

When you have a new idea in a field as steeped in tradition as science or education, how can you, as a newcomer, encourage discussion, at the very least, while still presenting yourself as a professional member of your new field? This was at the heart of some discussion that came up this weekend after Shawn and I presented his “Better Presentations” workshop. The HMSC graduate student organization, HsO, was hosting the annual exchange with grad students from the University of Oregon’s Oregon Institute of Marine Biology, who work at the UO satellite campus in Charleston, Oregon, a ways south on the coast from Newport.

The heart of Shawn’s presentation is learning research that suggests better ways to build the visuals that accompany a professional presentation. For most of the audience, that meant slides or posters for scientific research talks at conferences, for proposal defenses, or just for one’s own research group. Shawn suggests ways to break out of what has become a pretty standard default: slides crowded with bullet points, figures that are at best illegible and at worst incomprehensible, and in general too much content crammed onto single slides and into the overall presentation.

The students were eager to hear about the research foundations of his suggestions, but then raised a concern: how far could they go in pushing the envelope without jeopardizing their entry into the field? That is, if they used a Prezi instead of a PowerPoint, would they be dismissed as using a stunt and their research work overlooked, perhaps in front of influential members of their discipline? Or, if they don’t put every step of their methodology on their poster and a potential employer comes by when they aren’t there, how will that employer know how innovative their work is?

Personally, my reaction was to think: do you want to work with these people if that’s their stance? However, I’m in the enviable position of having seen my results work – I have a job offer that really values the sort of maverick thinking (at least to some traditional science educators) that our free-choice/informal approach offers. In retrospect, that’s how I view the lack of response I got from numerous other places I applied to – I wouldn’t have wanted to work with them anyway if they didn’t value what I could bring to the table. I might have thought quite differently if I were still searching for a position at this point.

For the grad student, especially, it struck me that it’s a tough row to hoe. On the one hand, you’re new to the field, eager, and probably brimming with new ideas. On the other, you have to carefully fit those ideas into the traditional structure in order to secure funding and professional advancement. However, how do you compromise without compromising too far and losing that part of you which, as a researcher, tells you to look at the research for guidance?

It occurred to me that I will have to deal with this as I go into my new position, which relies on grant funding after the first year. I am thinking about what my research agenda will ideally be, and how I may or may not have to bend it based on what funding is available. One of my main sources of funding will likely be helping scientists with their broader impacts and outreach projects and building my research into those. How free I will be to pick and choose projects that fit my agenda as well as theirs remains to be seen, but this conversation brought me around to thinking about that reality.

As Shawn emphasized at the beginning of the talk, the best outreach (and honestly, probably the best project in any discipline, be it science, business, or government assistance) is designed with the goals and outcomes in mind first, with the tools and manner of achieving those goals picked only afterwards. We sometimes lament the amazing number of very traditional outreach programs that center around a classroom visit, for example, and wonder if we can ever convince the scientists we partner with that there are new, research-based ways of doing things (see Laura’s post on the problems some of our potential partners have with our ways of doing research). I will be fortunate indeed if I find funding partners who believe the same, or who are at least willing to listen to what may be a new idea about outreach.

I have been coding my qualitative interview data all in one fell swoop, trying to get everything done for the graduation deadline. It feels almost like a class project that I’ve put off, as usual, longer than I should have. In a conversation with another grad student about timelines, and about how I’ve been sitting on this data (at least a good chunk of it) since, oh, November or so, we speculated about why we don’t tackle it in smaller chunks. One reason for me, I’m sure, is just general fear of failure or whatever drives my general procrastinating and perfectionist tendencies (remember, the best dissertation is a DONE dissertation; we’re not here to save the world with this one project).

However, another reason occurs to me as well: I collected all the data myself, and I wonder if I was too close to it in the process of collecting it. I certainly had to prioritize finishing the collection, considering the struggles I had getting subjects to participate, delays with the IRB, etc. But I wonder if it has actually been better to leave it all for a while and come back to it. I guess if I had really done the interview coding before the eye-tracking, I might have shaped the eye-tracking interviews a bit differently, but I think the main adjustments I made based on the interviews were sufficient without coding (i.e., I recognized how much the experts were just seeing that the images were all the same, and I couldn’t come up with difficult enough tasks for them, really). The other reason to have coded the interviews first would have been to separate my interviewees into high- and low-performing groups, if the data proved to be that way, so that I could invite sub-groups for the eye-tracking. But I ended up, again due to recruitment issues, just getting whoever I could from my interview population to come back. And now, I’m not really sure there are any high or low performers among the novices anyway; they each seem to have their strengths and weaknesses at this task.

Other fun with coding: I have a mix of basically closed-ended questions that I am scoring with a rubric for correctness, and then open-ended “how do you know” semi-clinical interview questions. Since I eventually repeated some of these questions for the various versions of the scaffolded images, my subjects started to conflate their answers, and parsing these things apart is truly a pleasure (NOT). And I’m up to some 120 codes, so keeping those all in mind as I go is just nuts. Of course, I have just done the first pass, and since I created codes as I went, I have to turn around and re-code the earlier transcripts for the codes I created later; I’m still stressing about whether I’m finding everything in every transcript, especially the more obscure codes. I have one that I’ve dubbed “Santa” because two of my subjects said they know the poles of Earth are cold because they learned that Santa lives at the North Pole, where it’s cold. So I’m now wondering if there was any other evidence of non-science reasoning that I missed. I don’t think this is a huge problem; I am fairly confident my coding is thorough, but I’m also at that stage of crisis where I’m not sure any of this is good enough as I draw closer to my defense!

Other fun facts: I also find myself agonizing over what to call codes, when the description is more important. And it’s also a very humbling look at how badly I (feel like I) conducted the interviews. For one thing, I asked all the wrong questions, as it turns out – what I expected people would struggle with, they didn’t really, and I didn’t have good questions ready to probe for what they did struggle with. Sigh. I guess that’s for the next experiment.

The good stuff: I do have a lot of good data about people’s expectations of the images and the topics, especially when there are misunderstandings. This will be important as we design new products for outreach, both the images themselves and the supporting info that must go alongside them. I also sorta thought I knew a lot about this data going into the coding, but the number of new codes with each subject is surprising, and it’s gratifying that maybe I did get some information out of this task after all. Finally, I’m learning that this is an exercise in throwing stuff out, too: I was overly ambitious in my proposal about all the questions I could answer, and I collected a lot more data than I can use at the moment. So, as is typical in the research process, I have to choose what fits the story I need to tell to get the dissertation (or paper, or presentation) done for the moment, and leave the rest aside for now. That’s what all those post-dissertation papers are for, I guess!

What are your adventures with/fears about coding or data analysis? (besides putting it off to the last minute, which I don’t recommend).

While we don’t yet have the formal guest researcher program up and running, we did have a visit from our collaborator Jarrett Geenan this week. He’s working with Sigrid Norris on multimodal discourse analysis, and he was in the U.S. for an applied linguistics conference, so he “stopped by” the Pacific Northwest on his way back from Dallas to New Zealand. It turns out his undergraduate and graduate work so far in English and linguistics is remarkably similar to Shawn’s. Several of the grad students working with Shawn managed to have lunch with him last week and talk about our different research projects, and about life as a grad student in the States vs. Canada (where he’s from), England (Laura’s homeland), and New Zealand.

We also had a chance to chat about the video cameras. He’s still been having difficulty downloading anything useful, as things just come in fits and starts. We’re not sure of the best way to go about diagnosing the issues (barring a trip for one of us to be there in person), but maybe we can get the Milestone folks on a screenshare or something. In the meantime, it led us to a discussion of what might be a larger issue: collecting data all the time and overtaxing the system unnecessarily. It came up with the school groups, too: is it really that important to have the cameras on constantly to get a proper, useful longitudinal record? We’re starting to think no, of course, and the problems Jarrett is having make it more likely that we will just turn the cameras on when the visitor center is open, using a scheduling function.
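That scheduling function could boil down to a very simple rule about when recording is on at all. Here’s a toy sketch of the logic (the hours and days are hypothetical placeholders, and if Milestone’s own scheduling settings handle this for us, so much the better):

    # Toy sketch: record only while the visitor center is open.
    # The hours and days here are hypothetical placeholders.
    from datetime import datetime

    OPEN_HOUR, CLOSE_HOUR = 10, 17      # hypothetical opening hours
    OPEN_DAYS = {0, 1, 2, 3, 4, 5, 6}   # hypothetical open days (Monday = 0)

    def cameras_should_record(now=None):
        """Return True if the cameras should be recording right now."""
        now = now or datetime.now()
        return now.weekday() in OPEN_DAYS and OPEN_HOUR <= now.hour < CLOSE_HOUR

    if __name__ == "__main__":
        print("Recording window open?", cameras_should_record())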

The other advantage is that this would give us something like 16-18 hours a day to actually process the video data, if we can arrange things so that the automated analysis needed to allow the customization of exhibits happens in real time. That would leave everything else, such as group association, speech analysis, and the other higher-order stuff, for the overnight processing. We’ll have to work with our programmers to see about that.
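To make that split concrete, here’s a minimal sketch of the kind of pipeline we have in mind. None of this is the actual Milestone setup or our programmers’ design; every analysis function is a placeholder stub for work that still has to be built.

    # Toy sketch: do only the time-critical analysis as each frame arrives,
    # and queue everything heavier for the overnight window.
    # Every analysis function below is a placeholder stub.
    from queue import Queue

    overnight_queue = Queue()

    def detect_visitors(frame):
        return []  # placeholder: the fast, automated detection the exhibits need

    def customize_exhibits(visitor_positions):
        pass  # placeholder: real-time exhibit customization

    def associate_groups(frame):
        pass  # placeholder: who is visiting with whom

    def analyze_speech(frame):
        pass  # placeholder: what visitors are saying

    def handle_frame(frame):
        """Run only what the exhibits need right away; save the frame for later."""
        customize_exhibits(detect_visitors(frame))
        overnight_queue.put(frame)

    def overnight_pass():
        """Group association, speech analysis, and other higher-order work."""
        while not overnight_queue.empty():
            frame = overnight_queue.get()
            associate_groups(frame)
            analyze_speech(frame)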

In other news, it’s looking highly likely that I’ll be doing my own research on the system when I graduate later this spring, so hopefully I’ll be able to provide that insider perspective, having worked on it (extensively!) in person at Hatfield and then gone away to finish up the research at my (new) home institution. That, and Jarrett’s visit in person, may be the kick-start we need to really get this into shape for new short-term visiting scholars.