Normally, once the majority of undergrads have finished their third-term finals and graduation is a teary memory, a calm settles over campus that those of us here year-round have come to expect. Don’t get me wrong, the undergrads are the reason for this institution, but there are an awful lot of them (and more each year).

However, this year, I’m in a bit of a pickle. My study specifically targets the general adult public, that is, people with a high-school degree but maybe not a lot of undergraduate experience. At Hatfield, we generally draw a slightly higher-educated population than I need. Also, my experiment takes an hour, so the casual visitor is unlikely to break from their group for that long. And my local campus population, or at least my local congregation of said population, has just skedaddled for the summer.

So I’m busy trying to find anywhere those remaining might be: the very few residence halls that remain open (which also requires navigating some tricky flier-posting rules), the occasionally-open dining halls, lecture halls that might still have classes (and bulletin boards nearby), the library, the MU, and some local eateries. I’m also heading to the community college where many students are cross-registered and might be knocking out some core courses on the cheap(er) over the summer. Hm, maybe I should post a flier at the local store my incentive gift cards are from? In truth, this is really still just a convenience sample, as I am not plastering fliers all over Corvallis, let alone anywhere else in the state or country. At least, not at this point …

Any ideas welcome! Where do you go to find your research subjects?

I’ve started trying to recruit subjects, including first-year undergraduates and oceanography faculty, for my dissertation work. And it’s a slooooow process, partly due to poor timing, partly due to everyone being busy.

It’s poor timing because it’s finals week here at OSU. Not only are students (rightly) consumed by those end-of-term tests, but it’s also the finals right before summer break. So on top of everything, people are scattering to the four winds, frantically moving out of apartments, and scrambling to start summer jobs or summer classes. So I’m being patient and trying to think creatively about recruiting a fairly broad, random sample of folks. So far I’m posting flyers in classroom buildings, dining halls, and residence halls, but my next step may be standing in front of the library or student union with all the folks trying to get initiatives on the ballot for November.

The faculty are another story, of course. I can easily get contact information and even fairly detailed information about their experience with visualizations. However, they are also quickly scattering for summer research cruises – two I called just today are leaving early next week for trips of 10 days to 3 weeks. Luckily, I got them to agree to participate when they get back. I’m still getting several brush-offs of “too busy.” My tactic for fighting back is to find other professors they know who can make me less of a ‘generic’ grad student and thus somewhat harder to say no to.

All in all, it’s the same rejection as in exhibit evaluation, just with different excuses.

Stay tuned!

We heard recently that our developer contractors have decided to abandon their efforts to make the first facial recognition system they investigated work. It was a tough call; they had put a lot of effort into it, thinking many times that if they could just tweak this and alter that, they would push performance above 60%. Alas, they finally decided it was not going to happen, at least not without a ridiculous amount of further effort for the eventual reward. So they are taking a different tack, starting over, almost, though they carry lots of lessons learned from the first go-round.

I think this dilemma of when it makes sense to try to fix the leaking ship vs. abandon ship and find another is a great parallel with exhibit development. Sometimes you have a great idea that you try with visitors, and it flops. You get some good data, though, and see a way you can try it again. You make your changes. It flops again, though maybe not quite as spectacularly. Just enough better to give you hope. And so on … until you have to decide to cut bait and either redesign something for that task entirely or, if you’re working with a larger exhibition, find another piece to satisfy whatever learning or other goals you had in mind for the failed piece.

In either situation, it’s pretty heartbreaking to let go of all that investment. When I first started working in prototyping, this happened to our team designing the Making Models exhibition at the Museum of Science, Boston. As an intern, I hadn’t invested anything in the failed prototype, but I could see the struggle in the rest of the team, and it made such an impression that I recall it all these years later. Ultimately, the final exhibit looks rather different from what I remember, but its success is also a testament to the power of letting go. Hopefully, we’ll eventually experience that success with our facial recognition setups!

Our actual eyetracker is a bit backordered, so we’ve got a rental for the moment. It’s astoundingly unassuming looking: just (as pictured on their web site) a small black bar at the bottom of a 22” monitor, plus the laptop to run the programs. When I took it out of the box and fired it up, the icons were just sitting on the desktop, with a little warning that we shouldn’t mess with any settings or install a firewall or anti-virus software, for risk of messing up the primary function. They have branded the screen with a little decal from their company, but otherwise, it’s just a laptop with an attached monitor.

Actually getting started is a bit complicated. I’m usually the one to pooh-pooh the need for “readme” documents, but I would have liked one here to tell me which program is which. That’s the thing – the software is powerful, but it has a bit of a steep learning curve. The “quick start” guide has several steps before you even think about calibrating a subject. We got stuck on the requirement for a wired Ethernet connection, since we tried to set up in a tech closet and OSU otherwise has pretty widespread wireless coverage. Harrison had to run a 50’ cable from Mark’s office down the hallway to the closet.

Looks like the next step is some pretty intense work understanding how to set up an experiment in a different software program. This is where a “test” experiment, just for learning how to use the system, would be good. That’s the sort of icon I need in the middle of the desktop. It reminds me of my first job as a research assistant, where I was registering brain images to a standard. The researchers had written a program to rotate the images into alignment and then match certain features to the standard, stretching or compacting the images as necessary, but there was no manual or quick start. My supervisor had to show me all the steps: what button did what, in which order, and so on. It was a fairly routine process, but it was all kept in someone’s head until I wrote it down. The PDFs here are a great start, but there still seems to be a step missing. Stay tuned!
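An aside for the curious: the core of that kind of registration routine is fitting a transform that rotates, stretches, and shifts one image’s landmark coordinates onto the standard’s. The sketch below is my own 2D illustration in Python/NumPy, not the lab’s actual program (which handled full brain volumes and had its own interface); the function names and toy landmarks are made up for the example.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Fit a 2D affine transform (rotation, stretch/compact, shift) that
    maps landmark points in a subject image onto the standard's landmarks.

    src_pts, dst_pts: (N, 2) arrays of matched (x, y) coordinates, N >= 3.
    Returns a 2x3 matrix A such that dst ~ A @ [x, y, 1].
    """
    n = src_pts.shape[0]
    # Design matrix with a constant column for the translation term.
    X = np.hstack([src_pts, np.ones((n, 1))])
    # Least-squares solve of X @ A.T ~ dst_pts for the 3x2 matrix A.T.
    A_T, *_ = np.linalg.lstsq(X, dst_pts, rcond=None)
    return A_T.T

def apply_affine(A, pts):
    """Send points through the fitted transform."""
    X = np.hstack([pts, np.ones((pts.shape[0], 1))])
    return X @ A.T

# Toy example: three matched landmarks in a subject image and the standard.
subject = np.array([[10.0, 12.0], [40.0, 15.0], [25.0, 45.0]])
standard = np.array([[12.0, 10.0], [42.0, 14.0], [28.0, 44.0]])
A = fit_affine(subject, standard)
print(apply_affine(A, subject))  # lands (numerically) on `standard`
```

A real pipeline would go on to resample the image through that transform, but it makes the point: the “routine” my supervisor carried around in their head is only a handful of steps once someone writes them down.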

Prototyping describes the process of creating a first-version exhibit, then testing it out with visitors, and redesigning. Often, we iterate this several times, depending on monetary and time budgets. It’s usually a fruitful way to find out not only what buttons confuse people, but also what they enjoy playing with and what great ideas totally bomb with users.

The problem with prototyping, as with many data collection processes, is that you have to ask the right questions to get useful answers. We are currently re-developing an interactive about how scientists use ocean data to make predictions about salmon populations for future harvests. The first-round surveys revealed some areas of content confusion and some areas of usability confusion. Usability confusion is usually easy to rework, but content confusion is harder to resolve, especially if your survey questions were themselves confusing to visitors.

This was unfortunately the case with the survey I made up, despite a few rounds of reworking it with colleagues. The multiple-choice questions were fairly straightforward, but the open-ended questions tripped people up, making the results harder to interpret and act on. The moral of the story? Prototype (a.k.a. pilot) your survey, too!

Harrison used an interesting choice of phrase in his last post: “time-tested.” I was just thinking, as I watched the video they produced (including Bill’s dissection), that I don’t know what we’ve done to rigorously evaluate our live programming at Hatfield. But it is just this sort of “time-tested” program that our research initiatives are truly trying to sort out and put to the test. Time has proven its popularity; data is necessary to prove its worth as a learning tool. A very quick survey of the research literature doesn’t turn up much, though some science theater programming was the subject of older studies. Live tours are another related program that could be ripe for investigation.

We all know, as humans who recognize emotions in others, how much visitors enjoy these sorts of programs and science shows of all types. However, we don’t always apply standards to our observations, such as measuring specific variables to answer specific questions. We have a general sense of “positive affect” in our visitors, but we don’t have data, such as visitor quotes or interviews, to back up our impressions. Yet.

A good example of another need for this came up in a recent dissertation defense here at OSU. Nancy Staus’ research looked at learning from live programs; she interviewed visitors after they watched a program at a science center. She found, however, that the presenter had a lot of influence on learning simply by the way they presented the program: visitors recalled more topics, and more facts about each topic, when the presentation was more interactive than scripted. She wasn’t initially interested in differences of this sort, but because she had collected data on the presentations themselves, she was able to locate a probable cause for a discrepancy she noticed. So while this wasn’t the focus of her research (she was actually interested in the role of emotion in mediating learning), it pointed to the need for data not only to back up claims, but also to lead to explanations for surprising results and to open areas for further study.

That’s what we’re working for: that rigorously examining these and all sorts of other learning opportunities becomes an integral part of the “time-honored tradition.”