A nice article on some of our current efforts came out today in Oregon Sea Grant’s publication, Confluence. You can read the story online at http://seagrant.oregonstate.edu/confluence/1-3/free-choice-learning.

One of the hardest things to describe to Nathan Gilles, who wrote the article (and to the folks who reviewed the draft), is the idea that in order for the lab to be useful to the widest variety of learning sciences researchers, the cyber-technologies on which the museum lab is based have to be useful to researchers coming from a wide range of theoretical traditions. In the original interview, I used the term “theory agnostic” in trying to talk about the data collection tools and the behind-the-scenes database. The idea is that the tools stand alone, independent of any given learning theory or framework.

Of course, for anyone who has spent time thinking about it, this is a highly problematic idea. Across the social sciences we recognize that our decisions about what data to collect, how to represent it, and even how we go about collecting it are intimately interwoven with our theoretical claims and commitments. In the same way that our language and symbol systems shape our thinking by streamlining our perceptions of the world (see John Lucy’s work at the University of Chicago for the most cogent explanations of these relationships), our theories about learning, about development, about human interaction and identity shape our research questions, our tools for data collection and the kinds of things we even count as data.

Recognizing this, we struggled early on to develop a way to automate data collection that would serve the needs of multiple researchers coming from multiple frameworks and with interests that might or might not align with our own. For example, we needed to develop a data collection and storage framework that would allow a researcher like John Falk to explore visitor motivation and identity as features of individuals while at the same time allowing a researcher like Sigrid Norris to document visitor motivation and identity as emergent properties of mediated discourse: two very different notions of identity and of best ways to collect data about it being served by one lab and database.

The framework we settled on for conceiving of what kind of data we need to collect for all these researchers from different backgrounds is focused on human action (spoken and non-spoken) and shaped by a mediated action approach to understanding human action. Mediated action as an approach basically foregrounds agents acting in the world through the mediation of cognitive and communicative tools. Furthermore, it recognizes that such mediated action always occurs in concrete contexts. While it is true that mediated action approaches are most often associated with sociocultural theories of learning and Cultural Historical Activity Theory in particular, a mediated action approach itself does not make strong theoretical claims about learning. A mediated action framework means we are constantly striving to collect data on individual agents using physical, communicative, and cognitive tools in concrete contexts often with other agents. In storing and parsing data, we strive to maintain the unity of agent, tools, and context. To what extent this strategy turns out to be theory agnostic or learning theory neutral remains to be seen.
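As a purely illustrative sketch (this is not the lab’s actual database schema; all names here are hypothetical), a record format that preserves the unity of agent, tools, and context might look like this:

```python
from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class MediatedAction:
    """One observed action, stored so agent, tools, and context stay together."""
    agent_id: str                 # who acted
    action: str                   # what they did or said
    tools: List[str]              # mediating tools: speech, gesture, exhibit, etc.
    context: Dict[str, str] = field(default_factory=dict)  # where, when, with whom

# Example record: a visitor explaining a touch-tank exhibit to a child
record = MediatedAction(
    agent_id="visitor-042",
    action="explains tide-pool anemone",
    tools=["spoken language", "pointing gesture", "touch tank"],
    context={"exhibit": "touch tank", "companions": "child"},
)
```

The point of the sketch is that no field is tied to a particular learning theory: a researcher interested in individual identity can query by `agent_id`, while a mediated-discourse researcher can query across `tools` and `context` without restructuring the data.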

Do you ever look for sand dollars when you walk along the beach?  Or Japanese glass floats?  What about dead birds?  It may sound strange, but hundreds of people along the West Coast walk up and down the beach looking for dead birds.

Let me explain.   Volunteers in the citizen science project COASST (the Coastal Observation and Seabird Survey Team) do monthly beach surveys to monitor seabird mortality.  This is the citizen science group I will be working with for my thesis.  Participants commit to surveying a one-mile stretch of beach every month, and complete a one-day training to learn the protocol for identifying beached birds.  After each survey, volunteers upload the data and photographs to the program website for independent verification.

Why monitor dead birds?  The COASST program was originally designed in 1998 to collect baseline data about seabird mortality in case of an oil spill.  If no one knows what’s “normal” for seabird populations, it can be difficult to establish accountability should an oil spill occur.  Over the past thirteen years, COASST data have been used in a variety of scientific studies, including studies of fisheries interactions and harmful algal blooms, genetic studies of Western Grebes (a candidate for threatened-species status), and potential early-warning systems for avian flu.

Two weeks ago, a couple of COASST volunteers let me join their survey to see what it’s like.  On the drive out to the beach, one volunteer asked me, “How did you hear about COASST?” It turns out that we both first learned of the program in a book called Strand: An Odyssey of Pacific Ocean Debris.  After reading about the COASST program, she looked up when the next training would be held, called her “nerd friend,” and they have been happily identifying and photographing dead birds ever since.

I have to say, this was the most fun I’ve ever had counting dead birds.  We had great weather, beautiful scenery, interesting conversation… what else could you want from a day at the beach?  I am really looking forward to working with the COASST program and volunteers for my thesis.

Last weekend a number of us headed off to the Oregon coast for the FCL annual retreat. This year it was at William H. Tugman state park near Winchester Bay, OR. As true Oregonians, we stayed in yurts and ran our activities outdoors. Although a little chilly (hey, it IS the Oregon coast!), the weather was beautiful and good times were had by all.

The FCL retreat is a student-led professional development opportunity featuring a mix of academic and social activities. It’s also a chance for us to get to know each other a little better, and enjoy some hang-out time for community-building across the FCL-related programs at OSU.  Over 20 people attended this year, including Dr. Rowe, Dr. John Falk, and Dr. Lynn Dierking, as well as partners, dogs, and babies, which made for a family as well as an academic atmosphere! The annual retreat was started last year at the Oregon Hatchery Research Center in Alsea, OR, and we are hoping it will become a tradition for years to come.

Activities were centered on a variety of topics, and included:

  • Team building
  • Grant writing
  • Sensory drawing
  • Principles of interpretation
  • Working with culturally and linguistically diverse populations
  • Irish dancing
  • Night hiking
  • Yoga
  • Health

Plus, a couple of extra fun campfires and lots of eating! A big thank you to everyone who helped organize and/or participated in the retreat. Some of the highlights included creating interpretive sculptures with modeling clay, watching everyone try to dance in unison during Irish dancing whilst falling over their own feet, and learning some crazy new things we never knew about each other in Dr. Dierking’s icebreaker game. We also discovered Laia is amazing at cooking chili over a fire, and Dr. Rowe makes a mean burger!

Check out our photos here. You will also find them on our Facebook page.

For those of you just joining us, I’m developing a game called Deme for my master’s project. It’s a tactical game that models an ecosystem, and it’s meant primarily for adults. I’m studying how people understand the game’s mechanics in relation to the real world, in an effort to better understand games as learning and meaning-making tools.

I stumbled across Roll20, quite by accident, while reading the PA Report. What I like about Roll20 is that your table session can apparently be shared as a link (I haven’t started digging yet, as I only found out about it a few hours ago). Also, each token can be assigned a hit counter. Damage tracking is something of a hassle in Deme’s current incarnation.

I’ll have more to report after I play around with this for a while. Moving the game from one incarnation and environment to another has forced me to think of it as a system, rather than a product. I want Deme to be portable, and a robust system can be used with just about any tabletop, real or virtual. For an example of a game system, see Wizards of the Coast’s d20 System. The d20 System happens to be a handy model for quantizing events and behaviors—handy enough to inform the data collection framework for our observation systems in the Visitor Center.
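To give a sense of what “quantizing events and behaviors” in a d20 style means (a rough sketch with hypothetical names, not the actual Deme rules or the Visitor Center framework), the core mechanic reduces a messy event to one die roll plus a modifier compared against a difficulty number:

```python
import random

def skill_check(modifier, difficulty, roll=None):
    """d20-style resolution: roll 1d20, add a modifier, and succeed if the
    total meets or beats the difficulty number."""
    if roll is None:
        roll = random.randint(1, 20)  # the quantized random event
    return roll + modifier >= difficulty

# Hypothetical Deme-flavored example: a predator with a +4 hunting bonus
# attempts to catch prey against a difficulty of 15.
caught = skill_check(4, 15)
```

The appeal for data collection is the same as for play: every behavior, however complicated in reality, becomes a discrete, comparable record (actor, modifier, roll, outcome).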

Of course, Deme cannot be run single-player as a tabletop game. That’s a double-edged sword. A tabletop game (even a virtual one) is an immediate social experience. A single-player game is a social experience too, but it’s an asynchronous interaction between the developer(s) and the player. I rather like the tabletop approach because each species has a literal voice. The unearthly torrent of resulting qualitative data may be tough to sort out, but I think that’s a good problem to have so long as I know what I’m looking for.

At this phase, the tabletop version is still officially—as much as I can make something official—just a pilot product. I don’t know if it will become something more, but I feel like it deserves a shot.

… is what we’ll be doing starting this fall as a group of advisees of Dr. Rowe. As a couple of us near defense time (we hope), it seemed a good time to start a regular discussion of the theories and frameworks most pertinent to what we all do. There are a lot of them; much as we share an interest in science education, we have a lot of different ideas about how to do it for the array of audiences and venues we’re concerned with. So expect more along those lines coming up in the blog.

For now, here’s a video of Dr. Rowe introducing his own framework, which of course informs the entire lab agenda:

I’ve wrapped up my work with the NEES REU program, and as my final assignment I wrote a report on the Folklife Festival evaluation. I didn’t have time to do an in-depth analysis, but I did enough to report that the activity was overwhelmingly fun, and that people felt it was worth their time (despite the incredible heat). Based on anecdotal evidence from previous activities with the mini-flume, we weren’t exactly surprised by these results, but confirmation is always nice.

What was surprising showed up in the demographic information. We had the expected breakdown of men and women, race/ethnicity, and even age. But when I tallied the highest education level, half of the participants reported having at least a master’s degree. Now I have questions about how and why we got this interesting demographic breakdown. Is the activity more appealing to this demographic? Or was the Festival itself the draw, and we just happened to catch this demographic?

Or was there something in my recruitment method that would have resulted in this odd sampling?

Folklife only counted visitors, so I don’t have access to the demographics of the larger population. For now I have no way of answering these questions, but I will keep them in mind as I do a more in-depth analysis of the Folklife data.