Rejection. It’s an inevitable part of recruiting human subjects to fill out your survey or try out your exhibit prototype. It’s also hard not to take it personally, but visitors have often paid to attend your venue that day and may not be willing to sacrifice their leisure time to improve your exhibit.


[Full disclosure: this blog post is 745 words long and will take you approximately 5-10 minutes to read. You might get tired as you read it, or feel your eyes strain from reading on the computer screen, but we won’t inject you with any medications. You might learn something, but we can’t pay you.]


First, you have to decide beforehand which visitors you’re going to ask – is it every third visitor? What if they’re in a group? Which direction will they approach from? Then you have to get their attention. You’re standing there in your uniform, and they may not make eye contact, figuring you’re just there to answer questions. Sometimes rejection is as simple as a visitor not meeting your eye or not stopping when you greet them.
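For what it’s worth, the counting rule itself is simple enough to automate. Below is a minimal sketch in Python of a continuous “every nth group” counter; the names are my own invention for illustration, not part of any formal sampling protocol, and a real protocol would also pre-specify how groups and approach directions are handled.

```python
# Minimal sketch of a systematic "every nth" recruitment counter.
# All names are illustrative; a real protocol would pre-specify how
# groups, children, and approach directions are handled.

class RecruitmentCounter:
    """Counts visitor groups crossing an imaginary line and flags
    every nth one as the next group to invite. A whole group counts
    as one unit, and the count continues even when a recruit declines,
    keeping the sample systematic."""

    def __init__(self, interval: int = 3):
        self.interval = interval
        self.count = 0

    def next_group(self) -> bool:
        """Call once per group that crosses the line; returns True
        when this group should be invited."""
        self.count += 1
        if self.count >= self.interval:
            self.count = 0  # restart the count after each invitation
            return True
        return False


if __name__ == "__main__":
    counter = RecruitmentCounter(interval=3)
    for group_id in range(1, 10):
        if counter.next_group():
            print(f"Invite group {group_id}")
```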

You don’t want to interrupt them while they’re looking at an exhibit, but they may turn and go a different direction before you get a chance to invite them to help you. How far do you chase them once you’ve identified them as your target group? What if they’re going to the restrooms, or leaving the museum from there? When I was asking people to complete surveys about our global data display exhibit, they were basically on their way out the door of the Visitor Center, and I was standing in their way.


If you do get their attention, you then have to explain the study without scaring them off by making it sound like a test with right or wrong answers, even when it does have right and wrong answers. You also have to make sure you don’t take too much of their time.


Then there are the visitors who leave in the middle of the experiment, deciding it wasn’t what they thought they were getting into, or drawn away by another group member.


Oh, you’re still there? This isn’t too long? It’s not lunchtime, planetarium show time or time to leave for the day? I’ll continue.


If you have an IRB-approved informed consent document (or a similar form), it can be another hurdle. If you’re not careful about what you emphasize, visitors may fixate on the “Risks” section you are required to go over. In exhibit evaluation and research, the risks often amount to nothing more than fatigue, or discomfort when someone feels they don’t know the right answer (despite assurances that no one is judging them). But of course, you have to be thorough and make sure they really do understand the risks and benefits, who will see the information they give, and how it will be used. Luckily, we often don’t need to collect personal information, or even signatures, if we’re not using audio or video recording.


Then there is the problem of children. We want to assess visits by the kinds of groups we actually see, which are mostly families or mixed adult-child groups. However, anyone under 18 needs consent from a parent or legal guardian. Unfortunately, a grandparent, aunt, uncle, sister, or brother doesn’t count, so you have to throw out those groups as well. Even when a parent is present, you have to be able to explain the research to the youngest visitor you have permission to study (usually about 8 years old) and, even trickier, walk that child through the assent process without scaring them off. As our IRB office puts it, consent is a process, a conversation, not just a form.


So who knows whether we’re truly getting a representative sample of our visitors? That’s a question for sampling theory. Luckily for us at Hatfield, we’re working with our campus IRB office to try to create less restrictive consent situations, such as waiving the signed consent form when a signature would be the only identifying information we ask visitors to provide. Maybe we’ll even be able to craft a situation in which over-18 family members can provide consent for their younger relatives when a parent didn’t travel with them that day. As this work progresses, you’ll be able to follow it on our blog.


Wow, you’ve read this far? Thank you so much, and enjoy the rest of your visit.


How do we analyze and study something familiar and taken for granted? How do we take account of the myriad modes of communication and media that are part of practically everything we do, including learning? One of the biggest challenges we face in studying learning (especially in a museum) is documenting meaningful aspects of what people say and do while also taking into account the multiple, nested contexts that help make sense of what we have documented. As a growing number of researchers and theorists worldwide have begun to document, understanding how these multiple modes of communication and representation work to DO everyday (and not-so-everyday) activities requires a multimodal approach, one that often sees any given social interaction as a nexus (a meeting point) of multiple symbol systems and contexts, some of which are made more active and salient (foregrounded) at any given moment by participants or by researchers.

This requires researchers to have ways of capturing and making sense of how people use language, gesture, body position, posture and objects as part of communicating with one another – and for learning researchers it means understanding how all of these ways of communicating contribute to or get in the way of thinking and learning. One of the most compelling ways of approaching these problems is through what has come to be called multimodal discourse analysis (MMDA).

MMDA gives us tools and techniques for looking at human interactions that take into account how these multiple modes of communication are employed and deployed in everyday activities. It also supports our tackling of how context drives the meaning of talk and actions, and how talk and actions can in turn invoke and change contexts. It does this by acknowledging that the meanings of what people say and do are not prima facie evident, but require the researcher to identify and understand the salient contexts within which a particular gesture, phrase, or facial expression makes sense. We are all fairly fluent at deploying and decoding these communicative cues, and researchers often get quite good at reading them from the outside. But how does one teach an exhibit to read them accurately? Which ones need to be recognized and recorded in the database that drives an exhibit or feeds into a researcher’s queries?
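To make that database question concrete, here is one hypothetical shape such a record might take. This is a sketch only, assuming a simple event-log design; the modes and field names are my own illustration, not drawn from the MMDA literature or from any actual exhibit system.

```python
# Hypothetical record structure for logging coded multimodal cues.
# The modes and fields are illustrative; a real coding scheme would
# be grounded in the researcher's analytic framework.

from dataclasses import dataclass
from enum import Enum


class Mode(Enum):
    SPEECH = "speech"
    GESTURE = "gesture"
    POSTURE = "posture"
    GAZE = "gaze"
    OBJECT_HANDLING = "object_handling"


@dataclass
class MultimodalEvent:
    visitor_id: str   # anonymized identifier, no personal information
    exhibit_id: str   # which exhibit element the event occurred at
    timestamp: float  # seconds from the start of the recording
    mode: Mode        # which communicative mode was foregrounded
    code: str         # analyst's label, e.g. "points_at_tank"
    context: str      # the nested context the coder judged salient


# An exhibit-driving database or a researcher's query could then filter
# these events, e.g. all GESTURE events at a particular tank.
events = [
    MultimodalEvent("v042", "octopus_tank", 12.5, Mode.GESTURE,
                    "points_at_tank", "parent-child explanation"),
]
gestures = [e for e in events if e.mode is Mode.GESTURE]
```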

Over the next several months, we’ll be working out answers to these questions and others that will undoubtedly arise as we get going on data collection and analysis.  We are fortunate to have some outstanding help in this regard.  Dr. Sigrid Norris, Director of the Multimodal Research Centre at the Auckland University of Technology and Editor of the journal Multimodal Communication, is serving as an advisor for the project.  We’re also planning to attend the 6th International Conference on Multimodality this August in London to share what we are up to and learn from leaders in MMDA from around the world.


Beverly Serrell, a pioneer in tracking museum visitors (or stalking them, as some of us like to say), has just released a nice report on the Center for the Advancement of Informal Science Education (CAISE) web site. In “Paying More Attention to Paying Attention,” Serrell describes the growing use of metrics she calls tracking and timing (T&T) in the museum field since the publication of her book on the topic in 1998. As the field has more widely adopted these T&T strategies, Serrell has continued her work doing meta-analysis of these studies and has developed a system to describe some of the main implications of the summed findings for exhibition design.

I’ll leave you to read the details, but it really drove home to me the potential excitement and importance of the cyberlab’s tracking setup. Especially for smaller museums with minimal staff, implementing an automatic tracking scheme, even on a temporary basis, could save many person-hours in collecting this simple yet vital data about exhibition and exhibit-element use. It could allow more data collection of this type in the prototyping stages especially, which might yield important data on the optimum density of exhibit pieces before a full exhibition is installed. On the other hand, if we can’t get it to work, or our automated design proves ridiculously unwieldy (stay tuned for some upcoming posts on our plans for 100 cameras in our relatively small 15,000-square-foot space), it will only affirm the need for the good, literal legwork that Serrell notes is also a great introduction to research for aspiring practitioners. In any case, eye tracking as an additional layer of information to help explain engagement and interest in particular exhibit pieces might eventually lead to a measure that lends more insight into Serrell’s Thorough Use.
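As a rough illustration of what an automated tracking pipeline would need to produce, here is a sketch of two of Serrell’s summary measures computed from toy data. The numbers and function names are hypothetical; the formulas follow her published definitions as I understand them: the sweep rate index (SRI) divides exhibition square footage by the average total time in minutes, and the percentage of diligent visitors (%DV) is the share who stop at more than half of the exhibit elements.

```python
# Sketch of tracking-and-timing (T&T) summary measures from toy data.
# Formulas follow Serrell's definitions as I understand them; all data
# here are made up for illustration.

def sweep_rate_index(square_feet: float, times_min: list) -> float:
    """SRI = square footage / average total time (minutes).
    A higher SRI means visitors are sweeping through faster."""
    avg_time = sum(times_min) / len(times_min)
    return square_feet / avg_time


def percent_diligent(stops_per_visitor: list, n_elements: int) -> float:
    """%DV = percent of visitors stopping at more than half the elements."""
    diligent = sum(1 for s in stops_per_visitor if s > n_elements / 2)
    return 100 * diligent / len(stops_per_visitor)


# Toy example: 10 tracked visitors in a 2,000 sq ft exhibition of 20 elements.
times = [12.0, 8.5, 20.0, 15.5, 6.0, 30.0, 11.0, 9.5, 14.0, 18.0]
stops = [12, 5, 18, 11, 3, 19, 9, 7, 13, 15]

print(f"SRI: {sweep_rate_index(2000, times):.0f} sq ft per minute")
print(f"%DV: {percent_diligent(stops, 20):.0f}%")
```

Either a human tracker’s clipboard or an automated camera system could feed the same two lists, which is part of what makes the automation so appealing.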

(Thanks to the Museum Education Monitor and Jen Wyld for the tip about this report.)


One of the key techniques in museum and free-choice learning evaluation and research is visitor observation by staff: watching visitors in as “natural” a state as possible to figure out how they really behave. We call this observation unobtrusive. The reality is that we are rarely so discreet. How do you convince visitors that the staff member in a uniform, scribbling on a clipboard close enough to eavesdrop, is not actually stalking them? You don’t, that is, until you turn to technological solutions, as our new lab will be doing.

We’ve spent many hours dreaming up how this new system will work and trying to keep our observations of speech, actions, and more esoteric things like interests and goals hidden from the visitor’s eye. Whether we succeed will be the subject of many new rounds of evaluation with our three new exhibits and more. Laura’s Looxcie-based research will be one of these.

Over the years we’ve gathered lots of data about what people do when they either truly don’t know they’re being watched or don’t care. We’ve also gathered a sense of how visitors react to the idea of participating in our studies, from flat-out asking us what we’re trying to figure out, to giving us the hairy eyeball and perhaps skipping the exhibit we’re working on. Many of these reactions become frustrations for staff, since we must throw out that subject or restart our randomization count. So as we go through the design process, I’m going to share some of my observations of myself gathering visitor data through observations and surveys. These two collection tools are ones we hope to automate, to the benefit of both the visitors who feel uncomfortable under obvious scrutiny and the researchers who suffer the pangs of rejection.

I’ve been looking into technologies that help observe a free-choice learning experience from the learner’s perspective. My research interests center on interactions between learners and informal educators, so I wanted a technology that would record interactions from the learner’s perspective while intruding on those interactions as little as possible.

Originally I was interested in using handheld technologies (such as smartphones) for this task. The idea was to have the learner wear a handheld on a lanyard that would automatically tag and record their interactions with informal educators via QR codes or augmented-reality markers. However, this proved more complicated than originally thought (and produced somewhat dodgy video recordings!), so we looked for a simpler approach.

I am currently exploring how Bluetooth headsets can help with this process. The Looxcie is essentially a Bluetooth headset equipped with a camera; it can be paired with a handheld device for recording or can work independently. Harrison is expertly modeling the device in the photos. I am starting to pilot this technology in the Visitor Center and have spent some time with the volunteer interpreters at HMSC demonstrating how it might be used for my research. Maureen and Becca helped me produce a test video at the octopus tank (link below).