Normally, once the majority of undergrads have finished their third-term finals and graduation is a teary memory, a calm settles over campus that those of us here year-round have come to expect. Don’t get me wrong, the undergrads are the reason for this institution, but there are an awful lot of them (and more each year).

However, this year, I’m in a bit of a pickle. My study is specifically trying to target the general adult public, that is, those with a high-school degree but maybe not a lot of undergraduate experience. At Hatfield, we generally have a somewhat more educated population than I need. Also, my experiment takes an hour, so the casual visitor is unlikely to break from their group for that long. And my local campus population, at least my local congregation of said population, has just skedaddled for the summer.

So I’m busy trying to find anywhere those remaining might be: the very few residence halls that remain open (which also requires navigating some tricky flier-posting rules), the occasionally-open dining halls, lecture halls that might still have classes (and nearby bulletin boards), the library, the MU, and some local eateries. I’m also heading to the community college where many students are cross-registered and where they might be knocking out some core courses during the summer on the cheap(er). Hm, maybe I should post a flier at the local store where my incentive gift cards are from? In truth, this is really still just a convenience sample, as I am not plastering fliers all over Corvallis, let alone anywhere else in the state or country. At least, not at this point …

Any ideas welcome! Where do you go to find your research subjects?

I’ve started trying to recruit subjects, including first-year undergraduates and oceanography faculty, for my dissertation work. And it’s a slooooow process, partly due to poor timing, partly due to everyone being busy.

It’s poor timing because it’s finals week here at OSU. Not only are students (rightly) consumed by those end-of-term tests, it’s also finals right before summer break. So on top of everything, people are scattering to the four winds, frantically moving out of apartments, and scrambling to start summer jobs or summer classes. So I’m being patient and trying to think creatively about recruiting a fairly broad sample of folks. So far I’m posting fliers in classroom buildings and dining and residence halls, but my next step may be standing in front of the library or student union with all the folks trying to get initiatives on the ballot for November.

The faculty are another story, of course. I can easily get contact information and even fairly detailed information about their experience with visualizations. However, they are also quickly scattering for summer research cruises; two I called just today are leaving early next week for trips of 10 days to 3 weeks. Luckily, I got them to agree to participate when they get back. I’m still getting several “too busy” brush-offs, though. So my tactic is to enlist other professors they know, which makes me less of a ‘generic’ grad student and thus somewhat harder to say no to.

All in all, it’s the same rejection as in exhibit evaluation, just with different excuses.

Stay tuned!

Our actual eyetracker is a bit backordered, so we’ve got a rental for the moment. It’s astoundingly unassuming-looking: just (as pictured on the company’s web site) a small black bar at the bottom of a 22” monitor, plus a laptop to run the programs. When I took it out of the box and fired up the operating system, the icons were just sitting there on the desktop, with a little warning that we shouldn’t mess with any settings or install a firewall or anti-virus software, for risk of interfering with its primary function. They have branded the screen with a little decal from their company, but otherwise it’s just a laptop with an attached monitor.

Actually getting started is a bit complicated. I’m usually the one to pooh-pooh the need for “readme” documents, but I would have liked one here to tell me which program is which. That’s the thing: the software is powerful, but it has a bit of a steep learning curve. The “quick start” guide has several steps before you even think about calibrating a subject. We got stuck on the requirement for a wired Ethernet connection, since we tried to set up in a tech closet and OSU otherwise has pretty widespread wireless coverage. Harrison had to run a 50’ cable from Mark’s office down the hallway to the closet.

Looks like the next step is some pretty intense work understanding how to set up an experiment in a different software program. This is where a “test” experiment, just to start learning how to use the system, would be good. That’s the sort of icon I need in the middle of the desktop. It reminds me of my first job as a research assistant, where I was registering brain images to a standard. The researchers had written a program to rotate the images into alignment and then match certain features to the standard, stretching or compacting the images as necessary, but there was no manual or quick start. My supervisor had to show me all the steps: what button did what, in which order, and so on. It was a fairly routine process, but it was all kept in someone’s head until I wrote it down. The PDFs here are a great start, but there still seems to be a step missing. Stay tuned!
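
For what it’s worth, the kind of registration step I’m describing can be sketched in a few lines. This is purely my own illustration, not the lab’s actual program (which I never saw the inside of): a brute-force search over rotation angles, scoring each candidate against the standard template by sum of squared differences. The feature matching and stretching steps would follow the same pattern.

```python
# Hypothetical sketch of a simple rotation-registration step (not the original code).
# Rotate the image over a range of angles and keep the one that best matches
# the standard template, scored by sum of squared differences.
import numpy as np
from scipy import ndimage

def register_rotation(image, template, angles=np.arange(-15.0, 15.5, 0.5)):
    """Return the angle and rotated copy of `image` that best match `template`."""
    best_angle, best_score, best_image = 0.0, np.inf, image
    for angle in angles:
        candidate = ndimage.rotate(image, angle, reshape=False, mode="nearest")
        score = np.sum((candidate - template) ** 2)  # lower = better alignment
        if score < best_score:
            best_angle, best_score, best_image = angle, score, candidate
    return best_angle, best_image
```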

Prototyping describes the process of creating a first-version exhibit, then testing it out with visitors, and redesigning. Often, we iterate this several times, depending on monetary and time budgets. It’s usually a fruitful way to find out not only what buttons confuse people, but also what they enjoy playing with and what great ideas totally bomb with users.

The problem with prototyping, as with many data collection processes, is that you have to ask the right questions to get useful answers. We are currently re-developing an interactive about how scientists use ocean data to make predictions about salmon populations for future harvests. The first-round surveys revealed some areas of content confusion and some areas of usability confusion. Usability confusion is usually easy to rework, but content confusion is harder to resolve, especially if your survey questions were themselves confusing to visitors.

This was unfortunately the case with the survey I made up, despite a few rounds of reworking it with colleagues. The multiple-choice questions were fairly straightforward, but the open-ended questions tripped people up, making the results harder to interpret and act on. The moral of the story? Prototype (a.k.a. pilot) your survey, too!

Friday we continued our perfect-technology quest, this time focusing on audio. While we actually want the cameras to capture overlapping video, so that we can track visitors from one spot to another and see their faces no matter which way they turn, the audio is a different matter. Due to the acoustics in the Center, if we’re not careful, a mic at the front desk will pick up voices 25 feet away at the wave tank, not only muddling the audio we want to hear from the front desk but also perhaps turning on extra cameras and recording unrelated video.

In order to localize the audio to particular people and to understand speech clearly, we’ll use so-called near-field recording (up close to the speaker rather than capturing a whole room). We’ll also need to feed multiple mics into certain cameras in order to have audio coverage with minimal wiring in the way of exhibits. Beyond that, though, was the question of what kind of pickup pattern we need: whether the mic records audio straight in front of it, in front and behind, or all around, for example.
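
To put rough numbers on the near-field idea, here’s a back-of-the-envelope sketch. It’s my own illustration, not anything the audio techs gave us, and it assumes an idealized free field (sound pressure level falling about 6 dB per doubling of distance from the talker); a reverberant room like the Center drops off less steeply, which is exactly why stray pickup is such a worry.

```python
# Idealized free-field estimate: SPL(d) = SPL(d_ref) - 20 * log10(d / d_ref).
# Room reflections in a real space reduce this drop-off, so treat these as
# best-case numbers, not measurements from the Center.
import math

def spl_at_distance(spl_ref_db, ref_distance_ft, distance_ft):
    """Estimate sound pressure level at distance_ft, given a reference level."""
    return spl_ref_db - 20 * math.log10(distance_ft / ref_distance_ft)

# Conversational speech is often quoted at roughly 60 dB SPL at about 3 ft.
print(round(spl_at_distance(60, 3, 1), 1))   # mic ~1 ft from the talker: ~69.5 dB
print(round(spl_at_distance(60, 3, 25), 1))  # wave tank ~25 ft away: ~41.6 dB
```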

With help from audio technicians from the main campus, who were out to work the retirement of one NOAA research vessel and the welcoming of another, we discussed the ins and outs of particular shapes of recording areas. Probably our best bet in most cases will be a cardioid, or heart-shaped, mic, which picks up mostly what’s in front of the mic (though not just in a narrow straight line), plus some of what’s off to the sides, while rejecting most of what’s directly behind it. The exact sizes of the patterns can often be tuned, which in our case will again be crucial as we begin to determine how visitors use particular exhibits, where they stand when they talk to one another, and especially how they might move up and down as they interact with people of different ages and heights.
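
For reference, the textbook versions of these pickup patterns can be written as simple functions of the angle between the mic’s on-axis direction and the sound source. The sketch below is my own illustration of those idealized patterns (real mics only approximate them), comparing omnidirectional, cardioid, and figure-8 sensitivity at a few angles.

```python
# Idealized first-order polar patterns: relative sensitivity vs. angle (degrees)
# between the mic's on-axis direction and the sound source.
#   omni:     1.0                       (equal pickup all around)
#   cardioid: 0.5 * (1 + cos(theta))    (front-favoring, null directly behind)
#   figure-8: |cos(theta)|              (front and back, nulls at the sides)
import math

def pattern_gain(pattern, theta_deg):
    theta = math.radians(theta_deg)
    if pattern == "omni":
        return 1.0
    if pattern == "cardioid":
        return 0.5 * (1 + math.cos(theta))
    if pattern == "figure8":
        return abs(math.cos(theta))
    raise ValueError(f"unknown pattern: {pattern}")

for angle in (0, 90, 135, 180):
    print(angle, {p: round(pattern_gain(p, angle), 2)
                  for p in ("omni", "cardioid", "figure8")})
```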

As usual, one of our biggest challenges is trying to retrofit this recording equipment into an already-built space: one with weird angles, less-than-optimal acoustics, somewhat unpredictable speaker locations, and often loud but inconsistent ambient noise, such as the 65-decibel running water in the touch pools. But hey, that’s why we’re trying it: to see whether it’s even possible and, beyond possible, helpful to our research.

Harrison used an interesting choice of phrase in his last post: “time-tested.” I was just thinking, as I watched the video they produced (including Bill’s dissection), that I don’t know what we’ve done to rigorously evaluate our live programming at Hatfield. But it is just this sort of “time-tested” program that our research initiatives are truly trying to sort out and put to the test. Time has proven its popularity; data is needed to prove its worth as a learning tool. A very quick survey of the research literature doesn’t turn up much, though some science theater programming was the subject of older studies. Live tours are another related program that could be ripe for investigation.

We all know, as humans who recognize emotions in others, how much visitors enjoy these sorts of programs and science shows of all types. However, we don’t always apply standards to our observations, such as measuring specific variables to answer specific questions. We have a general sense of “positive affect” in our visitors, but we don’t have any data, in the form of quotes from or interviews with visitors, to back up our impressions. Yet.

A good example of another need for this came up in a recent dissertation defense here at OSU. Nancy Staus’ research looked at learning from a live program; she interviewed visitors after they watched a program at a science center. She found, however, that the presenter had a lot of influence on learning simply through the way the program was delivered: visitors recalled more topics, and more facts about each topic, when the presentation was interactive rather than scripted. She wasn’t initially interested in differences of this sort, but because she had collected this kind of data on the presentations, she was able to locate a probable cause for a discrepancy she noticed. So while this wasn’t the focus of her research (she was actually interested in the role of emotion in mediating learning), it pointed to the need for data not only to back up claims, but also to explain surprising results and to open areas for further study.

That’s what we’re working for: that rigorously examining these and all sorts of other learning opportunities becomes an integral part of the “time-honored tradition.”