About Katie Stofer

Research Assistant Professor, STEM Education and Outreach, University of Florida
PhD, Oregon State University Free-Choice Learning Lab

Are you an educator, paid or volunteer, looking for an opportunity to improve your practice through understanding more about learning theory and evaluating exhibits and programs? You may be interested in OSU’s new professional certificate program, starting at the end of this month.

Contacts
Bridget Egan
Bridget.Egan-at-oregonstate.edu

Free-Choice Learning and Informal Education continuing education certificate program begins April 1st

March 1, 2013

Registration is now open for the Free-Choice Learning Professional Certificate, an
online program offered by Oregon State University.

The program helps professional and volunteer educators in museums, zoos, aquaria
and educational outreach programs discover more about free-choice learning:
the study of what, where and how we choose to learn over the course of our
lifetimes. Participants will study learning theory and practice applying theory and
research to build their professional practice within their community.
Participants will also actively engage in practice evaluations of exhibits, programs
and curricula.

Courses are taught by experienced Oregon State faculty and researchers: Lynn
Dierking, Sea Grant professor and interim associate dean for research in the OSU
College of Education; John H. Falk, Sea Grant professor and interim director of the
Center for Research in Lifelong STEM Learning; Shawn Rowe, marine education and
learning specialist at Oregon Sea Grant Extension; and Jennifer Bachman, instructor
and Free-Choice Learning program coordinator.

Learn more about the Free-Choice Learning Professional Certificate on our website.

OSU Professional and Noncredit Education provides education and training for
businesses, organizations, associations and professionals anywhere throughout the
state and beyond. The majority of professional and noncredit students are focused
on continuing education: licensure recertification, professional development and
personal enrichment. OSU Professional and Noncredit Education is part of the
university’s Division of University Outreach and Engagement.

This innovative program immerses museum, zoo, aquarium and science outreach professionals and volunteers in free-choice learning theories. Participants will work with some of the field’s leading researchers and learn how to apply theories of informal learning environments in real-world educational settings. Participants can earn the Free-Choice Learning Professional Certificate by completing each required course in the program, or they can take individual courses without enrolling in the certificate program.

SPRING 2013

Designing Learning Environments: Physical dimensions of free-choice learning
Learning is influenced by the interaction of variables within three contexts — personal, socio-cultural and physical. This course focuses on how macro-scale environmental factors, like space, crowding and novelty, and micro-scale environmental factors, like design elements, real objects and different media, support free-choice learning.
Instructors: Shawn Rowe, Ph.D. and Jennifer Bachman, Ph.D.
Launches April 1, 2013

SUMMER 2013
Developing Effective Evaluations
Developing Effective Evaluations is an introductory course that provides a hands-on approach to effectively assessing and evaluating learning and behavior within free-choice learning contexts such as museums, national parks, zoos, aquariums and broadcast media. The design and implementation of an evaluation is used as a lens for understanding the hows and whys of assessment and evaluation. This course is designed to help professionals design their own evaluation and assessment research as well as become informed consumers of others’ research.
Instructors: Marianna Adams, Ph.D. and Jennifer Bachman, Ph.D.
Launches June 24, 2013

FALL 2013

Examining The Learner’s Own Ideas: Personal dimensions of free-choice learning
Investigates the fundamental roles that identity, motivation, interest, prior knowledge and experience, and choice and control play in supporting learning and how learning leaders can build on these dimensions of learning in order to successfully engage lifelong learners.
Instructors: John Falk, Ph.D. and Jennifer Bachman, Ph.D.
Launches September 2013

WINTER 2014
Understanding Cultural Influence: Sociocultural dimensions of free-choice learning
Investigates connections between theories of free-choice learning and the fundamental concepts of sociology, social psychology and anthropology: social stratification, social structure and interaction, social institutions, and cultural background.
Instructors: Lynn Dierking, Ph.D. and Jennifer Bachman, Ph.D.
This course and more electives launch January 2014

I have just about nailed down a defense date. That means I have about two months to wrap all this up (or warp it, as I originally typed) into a coherent, cohesive narrative worthy of a doctoral degree. It’s amazing to me to think it might actually be done one of these days.

Of course, in research, there’s always more you can analyze about your data, so in reality, I have to make some choices about what goes in the dissertation and what has to remain for later analysis. For example, I “threw in” some plain world images as potential controls in the eye-tracking study, just to see how people might look at a world map without any data on it. Not that there really is such a thing; technically any image has some sort of data on it, as it is always representing something, even this one:

[Image: a plain world map with no data overlaid]

Here, the continents are darker grey than the ocean, so it’s a representation of the Earth’s current land and ocean distinctions.

I also included two “blue marble” images that are essentially images of Earth as if seen from space, without clouds and all in daylight simultaneously, one with the typical northern hemisphere “north-up” orientation, the other “south-up,” as the world is often portrayed in Australia, for example. However, I probably don’t have time to analyze all of that right now, at least not if I also want to complete the dissertation on schedule. The best dissertation is a done dissertation, not one that is perfect or answers every single question! If it did, what would the rest of my career be for?

So a big part of the research process is making tradeoffs about how much data to collect: enough to anticipate any problems you might run into and to examine what you want to about your data, but not so much that you lose sight of your original, specific research questions and get mired in analysis forever. Part of this, too, is thinking about what does and doesn’t fit in the particular framework I’ve laid out for analysis. That means making smart choices about how to sufficiently answer your questions with the data you have and address major potential problems, while letting go and letting some questions remain unanswered. At least for the moment. That’s a major task in front of me right now, with both my interview data and my eye-tracking data. At least I’ve finished collecting data for the dissertation. I think.

Let the countdown to defense begin …

If you’re a fan of “Project Runway,” you’re no doubt familiar with Tim Gunn’s signature phrase, “Make it work.” He employs it particularly around the point in each week’s process where the designers have chosen their fabrics and made at least their first efforts at turning their design into reality. It’s at about this time that the designers have to forge ahead or take their last chance to start over and re-conceptualize.

This week, it feels like that’s where we are with the FCL Lab. We’re about one and a half years into our five years of funding, and about a year behind on technology development. That means we’ve got the ideas and the materials, but haven’t gotten as far along as we’d like in actually putting it all together.

For us, it’s a bigger problem, too; the development (in this case, the video booth as well as the exhibit itself) is holding up the research. As Shawn put it to me, we’re spending too much time and effort trying to design the perfect task instead of “making it work” with what we have. That is, we’re going to re-conceptualize and do the research we can do with what we have in place, while still going forward with the technology development, of course.

So, for the video booth, that means that we’re not going to wait to be able to analyze what people reflect on during the experience, but take the chance to use what we have, namely a bunch of materials, and analyze the interactions that *are* taking place. We’re not going to wait to make the tsunami task perfect to encourage what we want to see in the video booth. Instead, we’re going to invite several different folks with different research lenses to take a look at the video we get at the tank itself and let us know what types of learning they’re seeing. From there, we can refine what data we want to collect.

It’s an important lesson in grant proposal writing, too: Once you’ve been approved, you don’t have to stick word-for-word to your plan. It can be modified, in ways big and small. In fact, it’s probably better that way.

A while ago, I promised to share some of my experiences collecting data on visitors’ exhibit use as part of this blog. Now that I’ve been back at it for the past few weeks, I thought it might be time to share what I’ve found. As it is winter here in the northern hemisphere, our weekend visitation to the Hatfield Visitor Center is generally pretty low. This means I have to time my data collection carefully if I don’t want to spend an entire day waiting for subjects and maybe only collect data on two people. That’s what happened on a Sunday last month; the weather on the coast was lovely, and visitation was minimal. I have recently been collecting data in our Rhythms of the Coastal Waters exhibit, which presents additional data collection challenges: it is basically the last thing people might see before they leave the center, it’s dim because it houses the projector-based Magic Planet, and there are no animals, unlike just about every other corner of the Visitor Center. So, I knocked off early and went to the beach. Then I rescheduled another day I had planned to collect data, because it was a sunny weekend day at the coast.

On the other hand, on a recent Saturday we hosted our annual Fossil Fest. While visitation was down from previous years, only about 650 compared to 900, this was plenty for me, and I was able to collect data on 13 people between 11:30 and 3:30, despite an octopus feeding and a lecture by our special guest fossil expert. Considering that data collection (recruitment, consent, the experiment, and debrief) probably runs 15 minutes per person, I thought this was a big win. In addition, I got only one refusal, from a group that said they were on their way out and didn’t have time. It’s amazing how much better things go if you a) lead with “I’m a student doing research,” b) mention “it will only take about 5-10 minutes,” and c) don’t record any video of them. I suspect it also helps that it’s not summer, as this crowd is more local and thus perhaps more invested in improving the center, whereas summer tourists might be visiting more for the experience, to say they’ve been there, as John Falk’s museum visitor “identity” or motivation research would suggest. That would seem to me like a motivation that would not make you all that eager to participate. Hm, sounds like a good research project to me!

Another reason I suspect things went well was that I am generally approaching only all-adult groups, and I only need one participant from each group, so someone can watch the kids if they get bored. I did have one grandma get interrupted a couple of times by her grandkids, but she was a trooper and shooed them away while she finished. When I was recording video and doing interviews about the Magic Planet, the younger kids in the group often got bored, which made recruiting families and getting good data somewhat difficult, though I didn’t have anyone quit early once they agreed to participate. Also, unlike when we were prototyping our salmon forecasting exhibit, I wasn’t asking people to sit down at a computer and take a survey, which seemed to feel more like a test to some people. Or it could have been the exciting new technology I was using, the eye-tracker, that was appealing to some.

Interestingly, I also had a lot of folks observe their partners as the experiment happened, rather than wander off and meet up later, which happened more with the salmon exhibit prototyping, perhaps because there was not much to see if one person was using the exhibit. With the eye-tracking and the Magic Planet, it was still possible to view the images on the globe because it is such a large exhibit. Will we ever solve the mystery of what makes the perfect day for data collection? Probably not, but it does present a good opportunity for reflection on what did and didn’t seem to work to get the best sample of your visitorship. The cameras we’re installing are of course intended to shed some light on how representative these samples are.

What other influences have you seen that affect whether you have a successful or slow day collecting exhibit use data?

A reader just asked about our post from nearly a year ago that suggested we’d start a “jargon board” to define terms that we discuss here on the blog. “Where is it?” the reader wanted to know. Well, like many big ideas, it got dropped amid the everyday fires in front of our faces that needed putting out. But astute readers hold us accountable, and for that, we thank you.

So, let’s start that board as a series of posts under the category “Jargon,” beginning with accountability. Often, we hear about “being accountable to stakeholders.” Setting aside stakeholders for the moment, what does it mean to “be held accountable”? It can come in various forms, but most often it seems to mean providing proof of some sort that you did what you said you would do. A few weeks ago, for example, a reader asked for the location of the board that we said we would start, and it turns out we couldn’t provide it (until now). At other times, it may be paying a bill (think of the looming U.S. debt ceiling crisis, in which we are being held accountable for paying bills), or it may be simply providing something (a “deliverable”) on schedule, as when I have to submit my defended and corrected thesis by a particular date in order to graduate this spring, or when you have to turn in a paper to a professor by a certain time in order to get full credit.

In the research world, we are often asked to provide progress reports on a yearly basis to our funders. Those people or groups to whom we are beholden are one form of stakeholders. They could be the ones holding the purse strings, or the ones we’ve committed to delivering an exhibit or evaluation report to as a contractor, making our client the stakeholder. This blog, actually, is the outreach we told the National Science Foundation we’d do for other stakeholders (students, and outreach and research professionals), and it also serves as proof of that outreach. In this case, those stakeholders don’t have any financial interest, but they do want to know what we find out and how we find it out, so we are held accountable via this blog for both purposes.

All too often accountability is only seen in terms of the consequences of failing to provide proof.

But I feel like that’s really just scratching the surface of who we’re accountable to, and it gets a lot murkier just how we prove ourselves to those other stakeholders. In fact, even identifying stakeholders thoroughly and completely is a form of proof, one that stakeholders often don’t hold us to unless we make a grievous error. As a research assistant, I have obligations to complete the tasks I’m assigned, making me accountable to the project, which is in turn accountable to the funder, which is in turn accountable to the taxpayers, of whom I am one. As part of OSU, we have obligations to perform professionally, and as part of the HMSC Visitor Center, we have obligations to our audience. The network becomes well-entangled very quickly. Or maybe it’s more like a cross between a Venn diagram and Russian nesting dolls? In any case, it’s pretty hard to get a handle on. How do you account for your stakeholders, in order to hold yourself, or be held, accountable? And what other jargon would you like to see discussed here?

The wave tank area was the latest to get its cameras rejiggered and microphones installed for testing, now that the permanent wave tanks are installed. Laura and I had a heck of a time logging in to the cameras to see their online feeds and hear the mics, however. So we did some troubleshooting (we were using a different laptop for viewing over the web this time) and came up with these browser-related tips for viewing your AXIS camera live feeds, for when you type the camera’s IP address straight into the browser’s address bar, not when you’re viewing through the Milestone software:

When you reach the camera page (after inputting username and password), go to “Setup” in the top menu bar, then “Live View Config” on the left-hand menu:

First, regardless of operating system, set the Stream Profile drop-down to H.264 (this doesn’t affect what you have set for recording through Milestone, by the way; see earlier posts about server load), and then set Default viewer to “AMC” for Windows Internet Explorer and “Server Push” for Other Browsers.

Then, to set up your computer:

Windows PCs:
Chrome: You’ll need to install Apple’s QuickTime once for the browser, and then authorize QuickTime for each camera (use the same username and password as when just logging into the camera)
Internet Explorer: you’ll have to install the AXIS codec once you go to the camera page (which may require various ActiveX permissions and other security changes to Windows defaults)
Firefox: Same as for Chrome, since it uses QuickTime, too
Safari: we don’t recommend using Safari on Windows

Macs:
Chrome: You’ll need QuickTime installed for the browser
Firefox: Same as for Chrome; needs QuickTime installed
Safari: Should be good to go
Internet Explorer: Not recommended on a Mac

Basically, we’ve gone to using Chrome whenever we can, since it seems to work the best across both Windows and Macs, but if you have a preference for another browser, these options should get both your video and your audio enabled. And hopefully save you a lot of the frustration of thinking you installed the hardware wrong …
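If a browser still refuses to show video, one quick way to rule out the camera (and the plugin setup) is to fetch the camera’s MJPEG stream directly, for example in VLC or with curl using digest authentication. Here’s a minimal Python sketch for building that URL; the IP address, resolution, and frame rate are placeholder values, and `/axis-cgi/mjpg/video.cgi` is AXIS’s standard VAPIX MJPEG endpoint, though your camera model’s documentation is the final word:

```python
from urllib.parse import urlencode

def mjpeg_url(camera_ip, resolution="640x480", fps=5):
    # Build the direct MJPEG stream URL for an AXIS camera.
    # /axis-cgi/mjpg/video.cgi is the standard VAPIX endpoint;
    # resolution and fps are optional stream parameters.
    params = urlencode({"resolution": resolution, "fps": fps})
    return f"http://{camera_ip}/axis-cgi/mjpg/video.cgi?{params}"

# 192.168.1.90 is a placeholder IP; substitute your camera's address.
# Open the printed URL in VLC, or test it with:
#   curl --digest -u username:password -o /dev/null "<url>"
print(mjpeg_url("192.168.1.90"))
```

If the stream comes through this way but not in the browser, the problem is the browser or plugin configuration, not the camera or microphone installation.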