Yesterday Shawn and I met with Jess Park at the Portland Art Museum (PAM) about an exciting new evaluation project using our Looxcie cameras. We had a great conversation about how to capture visitor conversations and interactions around PAM’s Museum Stories and Conversations About Art video-based programs. The project will be one of the first official evaluation partnerships we have developed under the flag of the FCL lab!

PAM has developed these video-based experiences to deepen visitors’ engagement with objects, with each other, and with the museum. Museum Stories features short video presentations of museum staff talking about specific objects in the collection that hold personal meaning for them. All of the videos are available on touch-screen computers in one gallery of the museum, which also houses the areas where the stories are recorded as well as some of the featured objects; the videos are also available online. Conversations About Art is a series of short videos featuring conversations among experts about particular objects in the museum’s collection. These are available on hand-held devices provided by the museum, as downloads to visitors’ personal devices, and on the museum website. PAM is now looking to expand the program and wants to document some of the predicted and unexpected impacts and outcomes of these projects for visitors. The evaluation will recruit visitors to wear the Looxcie cameras as they visit the relevant exhibits, including the gallery that houses Museum Stories. We will likely also interview some of the experts and artists involved in creating the videos.

We spent time going over the Looxcie technology and how best to recruit visitors in the Art Museum space. We also created some test clips to help the PAM folks working on the evaluation better understand the potential of the video data collection process. I will post a follow-up next week with more details about how we’re using the Looxcies.

Shawn and I came back from PAM feeling like the A-Team – we love it when an evaluation plan comes together.

On this most summery of holidays, while her home state experiences powerful summer storms and heat waves, intern Diana adjusts to her summer home and job:

As a native Marylander, I have been thrown into an environment of cold Northwest water and weather. I was definitely not used to wearing pants in the summer or to having my hooded sweatshirt be a necessary part of my wardrobe.

The first challenge I faced was understanding how the West Coast works in terms of upwelling and the cold temperature of the water here. Once I understood that, I could understand why the biodiversity that lives and flourishes here is able to do so. I am still learning, and I probably will be for at least the rest of this summer, if not longer, because the Oregon coast is a complex world.

The next step to fitting in here at Hatfield for the summer was to learn about the Visitor Center itself. I had to learn about the animals that live here and the activities and free-choice learning experiences on display, as well as what my project for the summer would be. That was a frustrating task in itself. I do know a good bit about marine biology and ecology, but this place is intense, mainly because I have seen only a few science centers and aquariums that use the water around them as the water for their marine animals. Hatfield relies completely on the bay and its saltwater wedge. If something happens to the water in the bay, then all heck breaks loose in the science center, because that’s the water we use. I know it’s filtered many times over in many different ways, but sometimes things such as bacteria or invertebrates still make it through, and that’s what affects the marine environment inside.

Then there are the surprises I have gotten while working this job for almost two weeks… the visitors. No matter how much changes in the center, from the animals to the water quality to their behavior, the visitors still surprise me the most. Each family and person is different, from the moment they walk in the door and are asked for a donation rather than an entrance fee. Some give a little; some give and wish they could give more. There are people from out of town who just want to see the octopus, and people from landlocked states who have never seen an estuary before. You also get visitors who know nothing about the Oregon coast or marine ecology. Then, before you know it, a kid comes in who knows more about sea stars than you ever will, no matter how much you have studied. Each visitor has their own story, and that is what makes my job so exciting: not only is science ever changing, but so are the people who want to learn.

Prototyping describes the process of creating a first-version exhibit, then testing it out with visitors, and redesigning. Often, we iterate this several times, depending on monetary and time budgets. It’s usually a fruitful way to find out not only what buttons confuse people, but also what they enjoy playing with and what great ideas totally bomb with users.

The problem with prototyping, as with many data collection processes, is that you have to ask the right questions to get useful answers. We are currently redeveloping an interactive about how scientists use ocean data to make predictions about salmon populations for future harvests. The first round of surveys revealed some areas of content confusion and some areas of usability confusion. Usability confusion is usually easy to rework, but content confusion is harder to resolve, especially if your survey questions were themselves confusing to visitors.

That was unfortunately the case with the survey I wrote, despite a few rounds of reworking it with colleagues. The multiple-choice questions were fairly straightforward, but the open-ended questions tripped people up, making the results harder to interpret and act on. The moral of the story? Prototype (a.k.a. pilot) your survey, too!

Harrison used an interesting phrase in his last post: “time-tested.” As I watched the video they produced, including Bill’s dissection, I was thinking that I don’t know what we’ve done to rigorously evaluate our live programming at Hatfield. But it is exactly this sort of “time-tested” program that our research initiatives are trying to sort out and put to the test. Time has proven its popularity; data is needed to prove its worth as a learning tool. A quick survey of the research literature doesn’t turn up much, though some science theater programming was the subject of older studies. Live tours are another related program that could be ripe for investigation.

We all know, as humans who recognize emotions in others, how much visitors enjoy these sorts of programs and science shows of all types. However, we don’t always apply standards to our observations, such as measuring specific variables to answer specific questions. We have a general sense of “positive affect” in our visitors, but we don’t have any data, such as visitor quotes or interviews, to back up our impressions. Yet.

Another good example of the need for this came up in a recent dissertation defense here at OSU. Nancy Staus’ research looked at learning from a live program; she interviewed visitors after they watched a program at a science center. She found, however, that the presenter had a lot of influence on learning simply by the way they presented the program: visitors recalled more topics, and more facts about each topic, when the presentation was more interactive than scripted. She wasn’t initially interested in differences of this sort, but because she had collected data on the presentations, she was able to locate a probable cause for a discrepancy she noticed. So while this wasn’t the focus of her research (she was actually interested in the role of emotion in mediating learning), it points to the need for data not only to back up claims, but also to lead to explanations for surprising results and to open areas for further study.

That’s what we’re working toward: making rigorous examination of these and all sorts of other learning opportunities an integral part of the “time-honored tradition.”

Rejection. It’s an inevitable part of recruiting human subjects to fill out your survey or try out your exhibit prototype. It’s hard not to take it personally, but remember that visitors have often paid to attend your venue that day and may or may not be willing to sacrifice some of their leisure time to improve your exhibit.

[Full disclosure: this blog post is 745 words long and will take you approximately 5-10 minutes to read. You might get tired as you read it, or feel your eyes strain from reading on the computer screen, but we won’t inject you with any medications. You might learn something, but we can’t pay you.]

First, you have to decide beforehand which visitors you’re going to ask – is it every third visitor? What if they’re in a group? Which direction will they approach from? Then you have to get their attention. You’re standing there in your uniform, and they may not make eye contact, figuring you’re just there to answer questions. Sometimes rejection is as simple as a visitor not meeting your eye or not stopping when you greet them.
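
If you want to be systematic about that “every third visitor” rule, it helps to write the decision rule down before you hit the floor. Here is a minimal sketch (in Python, with made-up names and numbers; it illustrates one possible systematic selection rule, not our actual protocol) that picks every Nth eligible group and tallies refusals so you can report a response rate later:

```python
import random

class SystematicRecruiter:
    """Pick every Nth eligible group that crosses an imaginary line,
    and tally approaches and agreements so a response rate can be reported."""

    def __init__(self, interval=3):
        self.interval = interval   # e.g., every third eligible group
        self.seen = 0              # eligible groups counted so far
        self.approached = 0
        self.agreed = 0

    def should_approach(self):
        """Call once per eligible group; True means this is the one to ask."""
        self.seen += 1
        return self.seen % self.interval == 0

    def record(self, said_yes):
        self.approached += 1
        if said_yes:
            self.agreed += 1

    def response_rate(self):
        return self.agreed / self.approached if self.approached else 0.0


if __name__ == "__main__":
    recruiter = SystematicRecruiter(interval=3)
    for _ in range(30):                       # 30 simulated groups walking past
        if recruiter.should_approach():
            # Stand-in for the real interaction; here roughly 60% say yes.
            recruiter.record(said_yes=random.random() < 0.6)
    print(f"Asked {recruiter.approached} groups; "
          f"response rate {recruiter.response_rate():.0%}")
```

Even a tally this simple makes it easier to say afterward how many people you asked and how many said no, rather than only describing the people who said yes.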

You don’t want to interrupt them while they’re looking at an exhibit, but they may turn and go in a different direction before you get a chance to invite them to help you. How far do you chase them once you’ve identified them as part of your target group? What if they’re headed to the restrooms, or leaving the museum entirely? When I was asking people to complete surveys about our global data display exhibit, they were basically on their way out the door of the Visitor Center, and I was standing in their way.

If you do get their attention, then you have to explain the study without scaring them off by making it sound like a test with right or wrong answers, even when it does have right and wrong answers. You also have to make sure that you don’t take too much of their time.

Then there are the visitors who leave in the middle of the experiment, having decided they didn’t know what they were getting into, or drawn away by another member of their group.

Oh, you’re still there? This isn’t too long? It’s not lunchtime, planetarium show time or time to leave for the day? I’ll continue.

If you need an IRB-approved consent form or other informed consent document, that can be another hurdle. If you’re not careful about what you emphasize, visitors may fixate on the “Risks” section that you must tell them about. In exhibit evaluation and research, the risk is often only fatigue, or discomfort when someone feels they don’t know the right answer (despite assurances that no one is judging them). But of course, you have to be thorough and make sure they do understand the risks and benefits, who will see the information they give, and how it will be used. Luckily, we often don’t need to collect personal information, even signatures, if we’re not using audio or video recording.

Then there is the problem of children. We want to assess the visit with the kinds of groups we actually see: mostly families or mixed adult-child groups. However, anyone under 18 needs consent from a parent. Unfortunately, a grandparent, aunt, uncle, sister, or brother doesn’t count, so you have to throw out those groups as well. Even if a parent is present, you have to be able to explain the research to the youngest visitor you have permission to study (usually about 8 years old) and, even trickier, walk them through the assent process without scaring them off. As our IRB office puts it, consent is a process, a conversation, not just a form.

So who knows if we’re truly getting a representative sample of our visitors? That’s definitely a question for sampling theory. Luckily for us at Hatfield, we’re working with our campus IRB office to try to create less restrictive consent situations, such as waiving the signed consent form when the signature would be the only identifying information we ask visitors to provide. Maybe we’ll even be able to craft a protocol where over-18 family members can provide consent for their younger relatives when a parent didn’t travel with them that day. As this progresses, you’ll be able to follow it here on the blog.

Wow, you’ve read this far? Thank you so much, and enjoy the rest of your visit.

Beverly Serrell, a pioneer in tracking museum visitors (or stalking them, as some of us like to say), has just released a nice report on the Center for the Advancement of Informal Science Education (CAISE) website. In “Paying More Attention to Paying Attention,” Serrell describes the growing use of the metrics she calls tracking and timing (T&T) in the museum field since the publication of her book on the topic in 1998. As the field has more widely adopted these T&T strategies, Serrell has continued her meta-analysis of such studies and has developed a system for describing some of the main implications of the summed findings for exhibition design.

I’ll leave you to read the details, but it really drove home to me the potential excitement and importance of the cyberlab’s tracking setup. Especially for smaller museums with minimal staff, implementing an automatic tracking scheme, even on a temporary basis, could save a lot of person-hours in collecting this simple yet vital data about exhibition and exhibit element use. It could allow more data collection of this type in the prototyping stages especially, which might yield important data on the optimum density of exhibit pieces before a full exhibition is installed. On the other hand, if we can’t get it to work, or our automated design proves ridiculously unwieldy (stay tuned for some upcoming posts on our plans for 100 cameras in our relatively small 15,000-square-foot space), it will only affirm the need for the good, literal legwork that Serrell notes is also a great introduction to research for aspiring practitioners. In any case, eye tracking, as an additional layer of information to help explain engagement and interest in particular exhibit pieces, might eventually lead to a measure that lends more insight into Serrell’s Thorough Use.
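
To make concrete the kind of numbers a tracking setup, manual or automated, has to produce, here is a minimal sketch (in Python, with invented data; this is not our cyberlab code, and it only approximates the standard definitions as I understand them) of two common T&T summaries from Serrell’s work: the Sweep Rate Index, the exhibition’s square footage divided by the average total time visitors spend in it, and the percentage of “diligent visitors,” those who stop at more than half of the exhibit elements:

```python
def sweep_rate_index(area_sq_ft, times_minutes):
    """Square feet 'swept' per minute, averaged over tracked visitors.

    Lower values mean visitors are moving through the space more slowly."""
    avg_time = sum(times_minutes) / len(times_minutes)
    return area_sq_ft / avg_time

def percent_diligent(stop_counts, total_elements):
    """Share of visitors who stopped at more than half of the exhibit elements."""
    diligent = sum(1 for stops in stop_counts if stops > total_elements / 2)
    return 100.0 * diligent / len(stop_counts)

# Hypothetical records for ten tracked visitors in a 1,500 sq ft exhibition
# with 20 exhibit elements: total minutes in the space, and elements stopped at.
times = [6.5, 12.0, 8.2, 15.5, 4.0, 9.8, 22.3, 7.1, 11.4, 5.6]
stops = [4, 11, 7, 14, 2, 9, 17, 5, 10, 3]

print(f"Sweep Rate Index: {sweep_rate_index(1500, times):.0f} sq ft per minute")
print(f"Diligent visitors: {percent_diligent(stops, 20):.0f}%")
```

An automated camera system would mostly change how the times and stop counts get collected; the summaries themselves stay this simple, which is part of why even a temporary installation could pay off.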

(Thanks to the Museum Education Monitor and Jen Wyld for the tip about this report.)