How much progress have I made on my thesis in the last month? Since last I posted about my thesis, I have completed the majority of my interviews. Out of the 30 I need, I have all but four completed, and three of the four remaining scheduled. Out of about 20 eye-tracking sessions, I have completed all but about seven, with probably three of the remaining scheduled. I also presented some preliminary findings around the eye-tracking at the Geological Society of America conference in a digital poster session. Whew!

It’s a little strange to have set a desired number of interviews at the beginning and feel like I have to fulfill that and only that number, rather than soliciting from a wide population and taking as many as I could get past a minimum. Now, if I were to get a flood of applicants for the “last” novice interview spot, I might want to risk overscheduling to compensate for no-shows (which, as you know, have plagued me). On the other hand, I risk having to cancel if I get an “extra” subject scheduled, which I suppose is not a big deal, but for some reason I would feel weird canceling on a volunteer – would it put them off from volunteering for research in the future?

Next up is processing all the recordings, backing them up, and then getting them transcribed. I’ll need to create a rubric to score the informational answers as something along the lines of 100% correct, partially correct, or not at all correct. Then it will be coding: finding patterns in the data, categorizing those patterns, and asking someone to serve as a fellow coder to verify my codebook and coding once I’ve made a pass through all of the interviews. Then I’ll have to decide whether the same coding will apply equally to the questions I asked during the eye-tracking portion, since I didn’t probe understanding as deeply there as I did in the clinical interviews, though I still asked participants to justify their answers with “how do you know” questions.
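To make that second-coder step a bit more concrete for myself, here’s a rough sketch in Python of how I might check agreement between my coding pass and a fellow coder’s, using simple percent agreement and Cohen’s kappa. The category labels and scores below are made up for illustration; they’re not from my actual rubric or codebook.

```python
from collections import Counter

# Hypothetical rubric categories for the informational answers (illustrative only)
CATEGORIES = ["correct", "partially_correct", "incorrect"]

def percent_agreement(coder_a, coder_b):
    """Fraction of answers the two coders scored identically."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Agreement corrected for chance, given two equal-length lists of labels."""
    n = len(coder_a)
    observed = percent_agreement(coder_a, coder_b)
    counts_a = Counter(coder_a)
    counts_b = Counter(coder_b)
    # Probability both coders would pick the same category by chance
    expected = sum((counts_a[c] / n) * (counts_b[c] / n) for c in CATEGORIES)
    return (observed - expected) / (1 - expected)

# Made-up scores for six interview answers
mine = ["correct", "partially_correct", "incorrect", "correct", "correct", "partially_correct"]
theirs = ["correct", "partially_correct", "correct", "correct", "correct", "incorrect"]

print(percent_agreement(mine, theirs))  # ~0.67
print(cohens_kappa(mine, theirs))       # ~0.43
```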

We’ll see how far I get this month.

It seems that a convenience sample really is the only way to go for my project at this stage. I have long entertained the notion that some kind of randomization would work to my benefit in some abstract, cosmic way. The problem is, I’m developing a product for an established audience. As much as I’d like to reach out and get new audiences interested, that will have to come later.

That sounds harsh, which is probably why I hadn’t actually considered it until recently. In reality, it could work toward my larger goal of bringing in new audience members by streamlining the development process.

I’ve discovered that non-gamers tend to get hung up on things that aren’t actually unique to Deme, but are rather common game elements with which they’re not familiar. Imagine trying to design a dashboard GPS system, then discovering that a fair number of your testers aren’t familiar with internal combustion engines and doubt they will ever catch on. I need people who can already drive.

Games—electronic, tabletop or otherwise—come with a vast array of cultural norms and assumptions. Remember the first time you played a videogame wherein the “Jump” button—the button that was just simply always “Jump” on your console of choice—did something other than jump?* It was like somebody sewed your arms where your legs were supposed to be, wasn’t it? It was somehow offensive, because the game designers had violated a set of cultural norms by mapping the buttons “wrong.” There’s often a subtle ergonomic reason that button is usually the “Jump” button, but it has just as much to do with user expectations.

In non-Deme news, we’re all excited to welcome our new Senior Aquarist, Colleen Newberg. She comes to us from Baltimore, but used to work next door at the Oregon Coast Aquarium. I learned last week that she is a Virginian, leaving Sid as the lone Yankee on our husbandry team. We’ve got some interesting things in the works, and Colleen has been remarkably cool-headed amidst a torrent of exhibit ideas, new and changing protocols, and plumbing eldritch and uncanny.

 

*I’ve personally observed that button-mapping has become less standardized as controllers have become more complex. I could be wrong, though—my gameplay habits do not constitute a large representative sample. Trigger buttons, of course, would be an exception.

A nice article on some of our current efforts came out today in Oregon Sea Grant’s publication, Confluence. You can read the story on-line at http://seagrant.oregonstate.edu/confluence/1-3/free-choice-learning.

One of the hardest things to describe to Nathan Gilles, who wrote the article (and to the folks who reviewed the draft), is the idea that in order for the lab to be useful to the widest variety of learning sciences researchers, the cyber-technologies on which the museum lab is based have to be useful to researchers coming from a wide range of theoretical traditions. In the original interview, I used the term “theory agnostic” in trying to talk about the data collection tools and the behind-the-scenes database. The idea is that the tools stand alone, independent of any given learning theory or framework.

Of course, for anyone who has spent time thinking about it, this is a highly problematic idea. Across the social sciences we recognize that our decisions about what data to collect, how to represent it, and even how we go about collecting it are intimately interwoven with our theoretical claims and commitments. In the same way that our language and symbol systems shape our thinking by streamlining our perceptions of the world (see John Lucy’s work at the University of Chicago for the most cogent explanations of these relationships), our theories about learning, about development, about human interaction and identity shape our research questions, our tools for data collection and the kinds of things we even count as data.

Recognizing this, we struggled early on to develop a way to automate data collection that would serve the needs of multiple researchers coming from multiple frameworks and with interests that might or might not align with our own. For example, we needed to develop a data collection and storage framework that would allow a researcher like John Falk to explore visitor motivation and identity as features of individuals while at the same time allowing a researcher like Sigrid Norris to document visitor motivation and identity as emergent properties of mediated discourse: two very different notions of identity, and of the best ways to collect data about it, being served by one lab and database.

The framework we settled on for conceiving of what kind of data we need to collect for all these researchers from different backgrounds focuses on human action (spoken and non-spoken) and is shaped by a mediated action approach. Mediated action as an approach basically foregrounds agents acting in the world through the mediation of cognitive and communicative tools. Furthermore, it recognizes that such mediated action always occurs in concrete contexts. While it is true that mediated action approaches are most often associated with sociocultural theories of learning, and Cultural Historical Activity Theory in particular, a mediated action approach itself does not make strong theoretical claims about learning. A mediated action framework means we are constantly striving to collect data on individual agents using physical, communicative, and cognitive tools in concrete contexts, often with other agents. In storing and parsing data, we strive to maintain the unity of agent, tools, and context. To what extent this strategy turns out to be theory agnostic or learning theory neutral remains to be seen.
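As a purely illustrative sketch of what I mean by maintaining that unity (this is not our actual database schema, and every field name here is invented), each stored observation can keep agent, tools, and context together rather than pre-slicing the data for any one framework:

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List

@dataclass
class MediatedAction:
    """One unit of observed action; agent, tools, and context stay together.

    Field names are invented for illustration, not our actual schema.
    """
    agent_id: str                 # anonymized visitor or group identifier
    action: str                   # what the agent did or said
    tools: List[str]              # physical, communicative, and cognitive tools in play
    context: str                  # exhibit area, social configuration, time of day, etc.
    timestamp: datetime
    media_refs: List[str] = field(default_factory=list)  # pointers to video/audio clips

# A researcher interested in individuals can query by agent_id; one interested
# in mediated discourse can query the same records by tools or context.
example = MediatedAction(
    agent_id="visitor-0417",
    action="asks companion how the wave tank works",
    tools=["spoken English", "wave tank hand crank", "exhibit label"],
    context="wave lab, weekday afternoon, family group of four",
    timestamp=datetime(2012, 6, 14, 14, 32),
    media_refs=["cam07_20120614_143200.mp4"],
)
```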

I think what finally turned the tide for me in recruitment was emails to specific colleges at OSU. I guess I was confused because I thought I wasn’t allowed to email students, but it seems that really I just wasn’t allowed to use the “All-Student-Email” list. Sending emails to particular department administrators to forward to their own lists apparently is perfectly kosher, if not exactly unbiased recruitment. It did generate a flurry of responses, 50 or so in a few days, with maybe 20% of those from guys (going by names only). Email to fraternities, however, seemed to be a dud (I’m not even sure any of them got forwarded), unless it just took a few days for the guys to sign up and I am confusing them with the ones I thought came from the department emails.

The best scheduling method so far has been calling those folks who provided a telephone number; I got one on the phone who recalled seeing the Doodle poll I sent with available interview times, but he also said he wasn’t sure what it was about. So, despite the end of the sign-up survey noting that a Doodle poll would be sent, that information seemed to get overlooked, again.

Another rather wasted effort at recruiting was sitting with a sign in the Dutch Bros. coffee shop, even though I was offering gift cards to their establishment for participation. One guy, an engineer, inquired why I wasn’t signing up engineers, but otherwise, no bites. Ditto for hanging out in the dining hall; one guy eyed the sign but said he wasn’t a Dutch Bros. guy. Cash, it seems, is king, as long as you can convince your funding source you are not laundering money (hint: get receipts).

Now the question is whether all of them will show up. So far, I’ve had one no-show after the phone calls for scheduling. The rest of the week I have about 6 more interviews, which will get me pretty close to finished if all of them show up. I’m sending email reminders the day before, so I’m crossing my fingers.

 

Despite our fancy technology, there are some pieces of data we have to gather the old-fashioned way: by asking visitors. One piece we’d like to know is why visitors chose to visit on this particular occasion. We’re building off of John Falk’s museum visitor motivation and identity work, which began with a survey that asks visitors to rate a series of statements on Likert (1-5) scales as to how applicable each is for them that day, and which reveals a rather small set of motives driving the majority of visits. We have also used this framework in a study of three of our local informal science education venues, finding that an abbreviated version works equally well to determine which (if any) of these motivations drives visitors. The latest version, tried at the Indianapolis Museum of Art, uses photos along with the abbreviated set of statements for visitors to identify their visit motivations.

We’re implementing a version on an iPad kiosk in the VC for a couple of reasons: first, we genuinely want to know why folks are visiting, and we want to be able to correlate identity motivations with the automated behavior, timing, and tracking data we collect from the cameras. Second, we hope people will stop long enough for us to get a good reference photo for the facial recognition system. Sneaky, perhaps, but it’s not the only place we’re trying to position cameras for good reference shots. And if all goes well with our signage, visitors will be more aware than ever that we’re doing research, and that it is ultimately aimed at improving their experience. Hopefully that awareness will allay most of the remaining fears about the embedded research tools, which we hope will be minimal to start with.
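As a back-of-the-envelope sketch of the data side (the statements, categories, and field names below are invented stand-ins, not our actual instrument or code), the kiosk’s Likert responses could be collapsed to a dominant motivation and then matched, by time and reference photo, to the camera system’s record of the same visit:

```python
from statistics import mean

# Invented abbreviated statements grouped by Falk-style motivation category.
# Our actual instrument uses different statements and categories.
STATEMENTS = {
    "explorer": ["I came to satisfy my curiosity", "I like discovering new things"],
    "facilitator": ["I came so someone with me could learn", "Today is about my kids/guests"],
    "recharger": ["I came to relax and reflect"],
}

def dominant_motivation(likert_responses):
    """Return the category with the highest mean Likert (1-5) rating.

    `likert_responses` maps statement text -> rating entered at the kiosk.
    """
    means = {
        category: mean(likert_responses[s] for s in statements)
        for category, statements in STATEMENTS.items()
    }
    return max(means, key=means.get)

# Made-up kiosk entry for one visit
responses = {
    "I came to satisfy my curiosity": 5,
    "I like discovering new things": 4,
    "I came so someone with me could learn": 2,
    "Today is about my kids/guests": 1,
    "I came to relax and reflect": 3,
}

visit_record = {
    "kiosk_timestamp": "2012-07-03T11:05:00",
    "motivation": dominant_motivation(responses),  # -> "explorer"
    # later joined, by timestamp and reference-photo match, with the automated
    # behavior, timing, and tracking data from the cameras
}
print(visit_record["motivation"])
```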

How do we get signs in front of visitors so they will actually read them? Think about how many signs at the front door of your favorite establishment you walk past without reading. How many street signs, billboards, and on-vehicle ads pass through our vision as barely a blur? While exhibit designers spend many an hour toiling away to create the perfect signs to offer visitors some background and possible ways to interact with objects, many visitors gloss right over them, preferring to just start interacting or looking in their own way. That may be a perfectly fine alternative in most cases, but in the case of our video research and the associated informed consent that our subjects need to offer, signs at the front door are going to be our best bet to inform visitors without unduly interrupting their experience or making museum entry an additional, unreasonable burden for visitors or staff. Plus, the video recording is not optional at this point for folks who visit: you can visit and be recorded, or you can choose not to visit.

Thankfully, we have the benefit of the Exploratorium and other museums that have done video research in certain exhibits and have tested how well signs at the entrances inform visitors that they are being recorded for research. Two studies by Exploratorium staff showed that signs at the entrances to specifically cordoned-off areas, stating that videotaping for research was in progress, were effective at informing 99% of visitors to those exhibit areas that a) videotaping was happening and b) it was for research. One interesting point: their testing of the signs and the language on them revealed that the camera icon needed to look rather old-school and highly professional, so that the recording would be clearly associated with official research purposes rather than with the average visitor making home movies while visiting a museum.


[Image source: store.sony.com via Free-Choice on Pinterest]

Never mind that the cameras we’re actually using look more like surveillance cameras.

 

So our strategy, crafted with our Institutional Review Board, is several-fold. Signs at the front entrance (and the back entrance, for staff, volunteers, and other HMSC visitors who might be touring the entire research facility for other reasons and popping in to the VC) will feature the image of the large research camera and a few, hopefully succinct and clear, words about the reasons we’re doing research and where to get more information. We also have smaller signs on some of the cameras themselves, with a short blurb noting that each is there for research purposes. Next, we’re making handouts that will explain in more detail what our research is about and how the videos help us with that work. We’ll also put that information on our web site, and add the address of the video research information page to our rack cards and other promotional material we send around town and around Oregon. Of course, our staff and volunteers are also being included in the process so they are well-equipped to answer visitor questions.

Then there’s the thorny issue of students. University students who are over 18 and visiting as part of a required class will have to consent individually, due to federal FERPA regulations. We’re working with the IRB to make this as seamless a process as possible. We’ll also be contacting local school superintendents to let them know about the research, so they can inform the parents of any class that will be attending on a field trip. These younger students on class field trips will be assumed to have parental consent by virtue of having signed school permission slips to attend Hatfield.

Hopefully this will all work. The Exploratorium’s work showed that even most people who didn’t realize they were being recorded were not much bothered by the recording, and even fewer would have avoided the area if they’d actually known beforehand. As always, though, it will be a work in progress as we get visitor and volunteer feedback and move forward with the research.

Gutwill, J. (2003). “Gaining visitor consent for research II: Improving the posted-sign method.” Curator 46(2): 228-235.

Gutwill, J. (2002). “Gaining visitor consent for research: Testing the posted-sign method.” Curator 45(3): 232-238.