The Free-Choice Learning Laboratory at HMSC

Informal science education research


“Spring Cleaning” in the Cyberlab

Spring Quarter is now upon us, and with it comes plenty of “spring cleaning” to get done in the Cyberlab before the surge of visitors to Newport over the summer months.  For a free-choice learning geek like me, this period of data collection will be exciting as I work on the research for my graduate program.

The monitoring and maintenance of the audio and video recording devices continues!  Working with this technology is a great opportunity to troubleshoot and to think through effective placement around exhibits.  I am getting more practice with camera installation and with making sure that data are actually being recorded and archived on our servers.  We are also thinking about how to rapidly deploy cameras for guest researchers based on their project needs.  If other museums, aquariums, or science centers are considering a similar method of collecting audio and video data, I know we can offer insight as we continue to try things and readjust.  At this point I don’t take these collection methods for granted!  In the published visitor research I have been reading, researchers had to consider how to minimize the effect of an observer or a large camera recording nearby, and how that presence influenced behavior.  Now cameras are smaller and can be mounted so that they blend in with the surroundings, which helps us see more natural behaviors as people explore the exhibits.  This matters to me because I will be using the audio and video equipment to look for patterns of behavior around the multi-touch interactive tabletop exhibit.
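
As a purely illustrative example of what “making sure data is being archived” can look like in practice, here is a minimal sketch of a server-side health check. The archive path, file format, and alert threshold are all assumptions for the sketch, not our actual setup:

```python
from datetime import datetime, timedelta
from pathlib import Path

# Hypothetical layout: one directory of clips per camera on the archive server.
ARCHIVE_ROOT = Path("/data/cyberlab/video")  # assumed path, not our real one
MAX_SILENCE = timedelta(minutes=30)          # assumed alert threshold

def stale_cameras(archive_root: Path = ARCHIVE_ROOT) -> list[str]:
    """Return the cameras whose newest archived clip is older than MAX_SILENCE."""
    stale = []
    now = datetime.now()
    for camera_dir in sorted(p for p in archive_root.iterdir() if p.is_dir()):
        newest = max((f.stat().st_mtime for f in camera_dir.glob("*.mp4")),
                     default=None)
        if newest is None or now - datetime.fromtimestamp(newest) > MAX_SILENCE:
            stale.append(camera_dir.name)
    return stale

if __name__ == "__main__":
    for name in stale_cameras():
        print(f"WARNING: no recent footage archived from camera {name!r}")
```

A check like this, run on a schedule, is the kind of thing that turns “I hope the cameras are recording” into something we can verify every day.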

Based on comments from our volunteers, the touchtable has received a lot of attention from visitors.  At this time we have a couple of different programs installed on the table.  One, from Open Exhibits, covers the electromagnetic spectrum: users drag an image of an object through the different sections of the spectrum (infrared, visible, ultraviolet, and x-ray) while the program provides information about each category.  Another program, called Valcamonica, has puzzles and content about prehistoric petroglyphs found in Northern Italy.  I am curious about the conversations people have around the table: are they verbalizing the content they see, or how to use the technology?  If there are different ages within the group, does someone take the role of “expert” on how to use it?  Do they model and show others how to navigate through the software?  Are visitors also spending time at the other exhibits near the table?  There are live animal exhibits within 15 feet of the table; are they getting attention?  I am thinking about all of these questions as I design the research project I will conduct this summer (a first sketch of how I might code them is below).  Which means…time to get back to work!
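
For what it’s worth, here is one hedged way those questions could be operationalized as a coding record for each group observed at the table. The field names are hypothetical and will surely change as the design firms up:

```python
from dataclasses import dataclass

@dataclass
class TouchtableObservation:
    """One coded visitor-group interaction at the multi-touch table.

    Hypothetical coding sheet mirroring the questions above.
    """
    group_size: int
    ages_present: list[str]        # e.g. ["child", "adult"]
    program: str                   # "EM spectrum" or "Valcamonica"
    stay_time_s: float             # seconds spent at the table
    talked_about_content: bool     # verbalizing the science content?
    talked_about_interface: bool   # verbalizing how to use the technology?
    expert_role_taken: bool        # did someone model/teach the others?
    visited_nearby_animals: bool   # live-animal exhibits within ~15 feet
    notes: str = ""

# Example record for one (invented) group:
obs = TouchtableObservation(
    group_size=3, ages_present=["child", "adult"], program="EM spectrum",
    stay_time_s=140.0, talked_about_content=True, talked_about_interface=True,
    expert_role_taken=True, visited_nearby_animals=False,
)
```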

Making Meaning Making Personal

Meaning making is an idea that seems to resonate with lots of people studying learning or creating contexts for learning.  We want visitors or students to make meaning of their experiences.  As a construct, meaning making seems to capture the active elements of learning, the uniqueness of each learner’s prior experience and knowledge, and the open-ended nature of free-choice learning experiences in general.

But what do we really mean by meaning making?  And how should we operationalize it for research?  For Vygotsky, meaning had two components: meaning proper and personal sense.  The meaning-proper component focuses attention on the shared, distributed, what Bakhtin would call repeatable, “public” denotations of a word, gesture, action, or event.  This is largely the aspect of meaning making that researchers have in mind when they think about education.  This approach to meaning encourages researchers to ask whether students and learners are making the “right” meaning.  Are the meanings they are making recognizable and shareable with us, with more expert others, and with each other?  Are they getting the content, ideas, and concepts right?  But this shared, public aspect is only part of the whole of the meaning a person makes.

For Vygotsky and generations of Activity Theorists, the counterpart to this shared, public, testable, and authoritative meaning is personal sense, which is in some ways more primary.  The construct of personal sense attempts to capture the deeply personal, biographical, embodied, situated connotations of words, gestures, actions, and events.  This is the realm of what those things mean for us as part of our personal narratives about ourselves, our experiences, our sense of place, or even our sense of who we are.  It is about how they resonate (or not) with our values, beliefs, judgments, and knowledge.  As learning researchers, we often discount or ignore this hugely important aspect of meaning making, and yet when people visit a museum or learn something new, personal sense may be at the forefront of the experience.  The realm of personal sense is where emotional experiences get burned into memory, and where motivations and identities are negotiated, tried on, and appropriated or rejected.  It is also the realm where we most need learners as co-researchers.  We can measure and document the meaning-proper aspect of their meaning making relatively easily, but we rely on them to report on the personal sense they are making.  As researchers, we should not only document the development of accurate and shareable meaning but also develop serious ways to embrace reflection.  Experiences that support meaning making as personal sense making are effective in supporting the overall learning process precisely because they are essentially reflective.

Exactly what kinds of dialogues with learners best support that reporting is an open question for me right now.  I’d welcome ideas here!

Promoting Participants as Co-researchers

Last week I wrote about Bakhtin’s idea that to put together a real, full research account, the researcher’s point of view has to be put in dialogue with the point of view of the research participant.  Neither point of view is complete in and of itself.  The question I raised was how we can make sure to include the voices of research subjects in our work so that they become co-researchers with us and help create those fuller research accounts of experience.  One of the primary tools for engaging in shared research in the professional development of educators is video.  When we video our practice as educators and (re)view it with others, we create the possibility of real dialogue among multiple points of view.  My own experience using video to reflect on experience with classroom teachers, museum educators, floor staff, and volunteer interpreters has convinced me that neither my outsider observations nor their reflective writing has been sufficient to create real dialogic relationships in which we become co-researchers.  In some cases, overarching cultural and social narratives about teachers and learners inevitably drown out the details of their experiences as they experienced them.  In other cases, the details of those experiences defy categorization and reflection.

As one example, in a project to develop a professional learning community among veteran K-10 teachers, observations showed very little evidence of student-led inquiry, but teacher narratives about their teaching reported detailed, regular use of student-centered science inquiry techniques as part of their normal routines.  Having teachers observe each other using a researcher-generated rubric did little to change their assertions about their teaching, even though those assertions were directly contradicted by the observational evidence.  Similarly, in multiple projects with museum educators, the educators reported a basic belief that visitors do not read labels.  Putting these educators in the position of researchers observing visitors generated copious examples of visitors reading labels, yet their narratives about visitors consistently failed to include that reading.  The data and observations simply don’t stick; they are overwhelmed by other kinds of details or by larger-scale institutional narratives about visitor behavior.

In both instances, we eventually turned to video as a way of creating what we hoped would be shared texts for analysis and reflection.  Yet the existence of video as a shared text is not, by itself, enough grounds for researchers and participants to become co-researchers.  Watching video and talking about it, even using a rubric to analyze it, definitely helps educators become more reflective about their experiences and put them in larger contexts than the overarching narratives we tend to fall back on.  But there still seems to be a missing step.

For Bakhtin, the missing step seems to be engaging in co-authorship to create some kind of new text or new representation of that experience.  When we watch video and reflect on it together, educators and researchers both come away with a stronger shared sense of what’s happening, but without creating some kind of new shared text or representation, we miss the opportunity to truly develop as co-researchers.  Are there activities beyond video, things we can do on the museum floor, that would help visitors (re)create, write about, or otherwise represent their experiences with us as co-authors?

Agile research in action!

Members of the Cyberlab were busy this week.  We set up the multi-touch table and touch wall in the Visitors Center and hosted Kate Haley Goldman as a guest researcher.  In preparation for her visit, we modified camera and table placement, tinkered with microphones, and tested the data collection pieces by reviewing the video playback.  It was a great opportunity to evaluate our lab setup for other incoming researchers and their data collection needs, and to try things live with the Ideum technology!

Kate traveled from Washington, D.C. to collect data on the Open Exhibits interactive content displayed on our table.  As the Principal of Audience Viewpoints, Kate conducts research on audiences and learning in museums and informal learning centers.  She is investigating the use of multi-touch technology in these settings, and we are thankful for her insight as we implement this exhibit format at Hatfield Marine Science Center.

Watching the video playback of visitor interactions with Kate was fascinating.  We discussed flow patterns around the room based on table placement.  We looked at stay time at the table depending on program content.  As the day progressed, more questions came up.  How long were visitors staying at the other exhibits, which have live animals, versus the table placed nearby?  While moving about the room, would visitors return to the table multiple times?  What were the demographics of the users?  Were they bringing their social group with them?  What were the users talking about: the technology itself, or the content on the table?  Was the technology intuitive to use?  (Some of these questions, like stay time and return visits, are straightforward to pull out of coded video; see the sketch below.)
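
Here is a minimal sketch of that bookkeeping, assuming the video has been hand-coded into timestamped enter/exit events; the event format and values are invented for illustration:

```python
from collections import defaultdict

# Hypothetical event log coded from video: (visitor_id, exhibit, action, time_s)
events = [
    ("v1", "touchtable", "enter",   0.0), ("v1", "touchtable", "exit",  95.0),
    ("v1", "octopus",    "enter", 110.0), ("v1", "octopus",    "exit", 300.0),
    ("v1", "touchtable", "enter", 310.0), ("v1", "touchtable", "exit", 355.0),
]

def stay_times(events):
    """Total seconds and number of separate visits per (visitor, exhibit)."""
    totals = defaultdict(lambda: {"seconds": 0.0, "visits": 0})
    open_visits = {}
    for visitor, exhibit, action, t in sorted(events, key=lambda e: e[3]):
        key = (visitor, exhibit)
        if action == "enter":
            open_visits[key] = t
        elif action == "exit" and key in open_visits:
            totals[key]["seconds"] += t - open_visits.pop(key)
            totals[key]["visits"] += 1
    return dict(totals)

print(stay_times(events))
# ('v1', 'touchtable') -> 140 s over 2 visits; ('v1', 'octopus') -> 190 s over 1
```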

I felt the thrill of the research process this weekend.  It was a wonderful opportunity to “observe the observer” and witness Kate in action.  I enjoyed watching visitors use the table and thinking about the interactions between humans and technology.  How effective is this format for presenting science concepts, and are users learning something?  I will reflect on this experience as I design my research project around science learning and the use of multi-touch technology in an informal learning environment such as Hatfield Marine Science Center.

Twitter as a #CulturalTool

One week ago I was not a Twitter user. After hearing about it for years and seeing other people use it, I wasn’t convinced it was a tool for me. I personally have problems communicating in 140 characters or less (mainly because I don’t usually put a limit on myself) and I think Twitter has changed language use. We see words not being capitalized, the use of numbers where letters should be, an insane amount of shorthand, and #somanyhashtags I can’t #decipher what someone’s actually #tryingtocommunicate.

And then I heard this story on NPR, which claims that Twitter can boost literacy. And I got to thinking, am I just uncomfortable with Twitter because I haven’t fully immersed myself in the experience? Is there something to it that I’m missing? So on Monday, I created an account (@mamileham) to see how this cultural tool is used and what it means for us as researchers of free-choice learning.

Twitter is a cultural tool that’s here to stay.  It allows people to connect and communicate in ways they never could before. As this video says, “you wouldn’t send an email to a friend to tell them you’re having coffee. Your friend doesn’t need to know that.” But what if someone is truly interested in the little things? With people mentioning each other (@) and tagging (#) where they are and what they’re doing, we can follow and understand what they are experiencing, and possibly how they’re evaluating and making sense of the world.  With Twitter, the video says, “[people can] see life between blog posts and emails.” What if we could see the meaning making, in almost real time, between entering and exiting a museum, based on an individual’s tweets?
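
To make that concrete, here is a hedged sketch of what pulling a consenting visitor’s tweets for a visit window might look like, using the tweepy library against Twitter’s REST API of the day. The handle, timestamps, and credentials are placeholders, and Twitter’s access rules have changed repeatedly, so treat this as an illustration rather than a recipe:

```python
from datetime import datetime
import tweepy  # assumes tweepy with Twitter REST API v1.1 credentials

# Placeholder credentials from a registered Twitter application.
auth = tweepy.OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
api = tweepy.API(auth)

def tweets_during_visit(screen_name, entered, exited):
    """Return the text of tweets posted between museum entry and exit."""
    statuses = api.user_timeline(screen_name=screen_name, count=200)
    return [s.text for s in statuses if entered <= s.created_at <= exited]

# Hypothetical consenting visitor, one morning visit (times in UTC):
visit_tweets = tweets_during_visit(
    "mamileham", datetime(2013, 5, 6, 17, 0), datetime(2013, 5, 6, 19, 30)
)
```

Of course, doing this for real would raise exactly the kinds of consent questions we keep circling back to on this blog.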

I’m not completely sold on Twitter boosting literacy, but I do understand how we are using social media to share information, find information, and think about who we are (i.e., identity formation), and I do see that tweeting is a new language. You have to learn, and then know how to use, the @ and #, but maybe it’s worth learning. However, think about how all those #hashtags sound when used in real life.

Good days and bad days for visitor data collection

A while ago, I promised to share some of my experiences collecting data on visitors’ exhibit use as part of this blog. Now that I’ve been back at it for the past few weeks, I thought it might be time to actually share what I’ve found. As it is winter here in the northern hemisphere, our weekend visitation to the Hatfield Visitor Center is generally pretty low. This means I have to time my data collection carefully if I don’t want to spend an entire day waiting for subjects and maybe only collect data on two people. That’s what happened on a Sunday last month; the weather on the coast was lovely, and visitation was minimal. I have recently been collecting data in our Rhythms of the Coastal Waters exhibit, which poses additional data collection challenges: it is basically the last thing people might see before they leave the center, it’s dim because it houses the projector-based Magic Planet, and there are no animals, unlike just about every other corner of the Visitor Center. So I knocked off early and went to the beach, and I went ahead and rescheduled another planned data collection day because it was a sunny weekend day at the coast.

On the other hand, on a recent Saturday we hosted our annual Fossil Fest. While visitation was down from previous years (only about 650 compared to 900), this was plenty for me, and I was able to collect data on 13 people between 11:30 and 3:30, despite an octopus feeding and a lecture by our special guest fossil expert. Considering that data collection, including recruitment, consent, the experiment, and debrief, probably runs 15 minutes per person, I thought this was a big win. In addition, I only got one refusal, from a group that said they were on their way out and didn’t have time. It’s amazing how much better things go if you a) lead with “I’m a student doing research,” b) mention “it will only take about 5-10 minutes,” and c) don’t record any video of them. I suspect it also helps that it’s not summer, as this crowd is more local and thus perhaps more invested in improving the center, whereas summer tourists might be visiting more for the experience, to say they’ve been there, as John Falk’s research on museum visitor “identity” or motivation would suggest. That seems to me like a motivation that would not make you all that eager to participate. Hm, sounds like a good research project to me!

Another reason I suspect things went well is that I am generally approaching only all-adult groups, and I only need one participant from each group, so someone can watch the kids if they get bored. I did have one grandma get interrupted a couple of times by her grandkids, but she was a trooper and shooed them away while she finished. When I was recording video and doing interviews about the Magic Planet, the younger kids in a group often got bored, which made recruiting families and getting good data somewhat difficult, though I didn’t have anyone quit early once they agreed to participate. Also, unlike when we were prototyping our salmon forecasting exhibit, I wasn’t asking people to sit down at a computer and take a survey, which seemed to feel more like a test to some people. Or it could have been the exciting new technology I was using, the eye-tracker, that was appealing to some.

Interestingly, I also had a lot of folks observe their partners as the experiment happened, rather than wandering off and meeting up later, which happened more with the salmon exhibit prototyping, perhaps because there was not much to see while one person was using that exhibit. With the eye-tracking and the Magic Planet, it was still possible to view the images on the globe because it is such a large exhibit. Will we ever solve the mystery of what makes the perfect day for data collection? Probably not, but it does present a good opportunity to reflect on what did and didn’t seem to work to get the best sample of your visitorship. The cameras we’re installing are, of course, intended to shed some light on how representative these samples are.

What other influences have you seen that affect whether you have a successful or slow day collecting exhibit use data?


“Reverse” ground-truthing

We’ve recently been prototyping a new exhibit with standard on-the-ground methods, and now we’re going to use the cameras to do a sort of reverse ground-truthing. Over our busy Whale Watch Week between Christmas and New Year’s, Laura set up a camera on the exhibit to collect data on people using the exhibit at times when we didn’t have an observer in place. So in this case, instead of ground-truthing the cameras, we’re sort of doing the opposite, and checking what we found with the in-person observer.

However, the camera will be on at the same time the researcher is there, too. It almost sounds like we’ll be spying on our researcher and “checking up,” but it will be an interesting check on our earlier observations made without the camera in place, as well as a chance to observe a) people using the new exhibit without a researcher present, b) people using it *with* a researcher observing them (and maybe noticing the observer, or possibly not), and c) whether people behave differently, as well as how much we can capture from a different camera angle than the on-the-ground observer will have.

Some expectations:

The camera has the advantage of replay, which the in-person observer won’t have, so we can get an idea of how much the observer might miss, especially detail-wise.

The camera audio might be better than what a researcher standing a ways away can hear, but as our earlier blog posts have mentioned, the audio testing is very much a work in progress.

The camera angle, especially since it’s a single, fixed camera at this point, will be worse than that of the flexible researcher-in-place, as it will be at a higher angle, and visitors may block what they’re doing a good portion of the time.


As we go forward and check the automated collection of our system with in-place observers, rather than the other way around, these are the sorts of things we’ll be checking for, advantages and disadvantages.
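
When we get to that comparison, the analysis itself can be simple. If the camera coder and the in-person observer each assign a behavior code to the same set of events, Cohen’s kappa gives chance-corrected agreement. A minimal sketch, with invented codes:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' categorical codes of the same events."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c]
                   for c in set(coder_a) | set(coder_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical behavior codes for the same eight visitor events:
camera   = ["read", "touch", "talk", "touch", "read", "talk", "touch", "read"]
observer = ["read", "touch", "talk", "read",  "read", "talk", "touch", "talk"]
print(round(cohens_kappa(camera, observer), 2))  # 0.63: decent, not perfect
```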

What else do you all expect the camera might capture better or worse than an in-person researcher?


Food for thought: Ethics of sharing video

Question: should we make some of the HMSC Visitor Center footage available to anyone who wants to see it? I was thinking the other day about what footage we could share with the field at large, as sharing is part of our mandate in the grant. Would it be helpful, for instance, to be able to see what goes on in our center, and maybe play around with viewing our visitors, if you were considering:

a) being a visiting scholar and seeing what we can offer

b) installing such cameras in your center

c) just seeing what goes on in a science center?

Obviously this brings up ethical questions, but for example, the Milestone Systems folks who made the iPad app for their surveillance system do put the footage from their cameras inside and outside their office building out there for anyone with the app to access. Do they have signs telling people walking up to, or in and around, their building that that’s the case? I would guess not.

I don’t mean that we should share audio, just video, but our visitors will already presumably know they are being recorded. What other considerations come up if we share the live footage? Others won’t be able to record or download footage through the app.

What would your visitors think?

Right now, we can set up profiles for an unlimited number of people who contact us to access the footage with a username and password, but I’m talking about putting it out there for anyone to find. What are the advantages, other than being able to circumvent contacting us for the login info? Other possible disadvantages: bandwidth problems, as we’ve already been experiencing.

So, chew over this food for thought on this Christmas Eve, and let us know what you think.

What’s your recruitment style?

Last week, Dr. Rowe and I visited the Portland Art Museum to assist with a recruitment push for participants in their Conversations About Art evaluation, and I noticed that the education staff involved had very different styles of recruiting visitors to participate in the project.  Styles ranged from the apologetic (e.g., “do you mind if I interrupt you to help us?”), to the incentive-focused (e.g., “get free tickets!”), to the experiential (e.g., “participating will be fun and informative!”).

This got me thinking a lot this week about the significance of people skills and a researcher’s recruitment style in educational studies. How does the style in which you get participants involved influence a) how many participants you actually recruit, and b) the quality of the participation (i.e., do they just go through the motions to get the freebie incentive)? Thinking back to prior studies by FCL alumni here at OSU, I realized that nearly all the researchers I knew had a different approach to recruitment, be it in person, on the phone, or via email, and that it is in fact a learned skill we don’t often talk much about.

I’ve been grateful for my success at recruiting both docents and visitors for my research on docent-visitor interactions, which is mostly the result of taking the “help a graduate student complete their research” approach, one I borrowed from prior Marine Resource Management colleagues of mine, Abby Nickels and Alicia Christensen, during their masters research on marine education activities. Such an approach won’t be much help once I finally get out of grad school, so the question to consider is: what factors make for successful participant recruitment? The common denominator seems to be people skills, and by people skills I mean the ability to engage a potential recruit on a level that removes the skepticism around being commandeered off the street.  You have to be not only trustworthy, but also approachable. I’ve definitely noticed in my own work that on off days, when I’m tired and have trouble maintaining a smiley face for long periods at the HMSC entrance, recruitment seems harder. All those younger years spent in customer service jobs, learning how to deal with the public, seem so much more worthwhile now!

So, fellow researchers and evaluators, my question for you is: what are your strategies for recruiting participants? Do you agree that people skills are an important underlying factor? Do you over- or underestimate your own personal influence on participant recruitment?


The Juggling Act of the IRB process

I want to talk today about something many of us here have alluded to in other posts: the approval (and beyond) process of conducting ethical human subjects research. What grew out of some truly unethical, primarily medical, research on humans many years ago has now evolved into something that can take up a great deal of your research time, especially on a large, long-duration grant such as ours. Many people (including me, until recently) think of this process as primarily something to be done up front: get approval, then more or less forget about it, aside from actually obtaining consent as you go, unless you significantly change your research questions or methods. Wrong! It’s a much more constant, living thing.

We at the Visitor Center have several things that make us a weird case for our Institutional Review Board office at the university. First, even though what we do is generally educational research, as part of the Science and Mathematics Education program, our research sites (the Visitor Center and other community-based locations) are not typically “approved educational research settings” like classrooms. Classrooms have been used so frequently over the years that they have a more streamlined approval process, unless you’re introducing a radically different type of experiment. Second, we host several types of visitor populations: the general public, OSU student groups, and K-12 school and camp groups, each with different levels of privacy expectations and different requirements for attending (public: none; OSU student groups: attendance may be part of a grade), and thus different levels and forms of consent required for research. Plus, we’re trying to video record our entire population, and getting signatures from 150,000+ visitors per year just isn’t feasible. At the same time, some of the research we’re doing will involve our more typical in-depth video recording, beyond the anonymized overall timing and tracking and visitor recognition from exhibit to exhibit.

What this means is a whole stack of IRB protocols that someone has to manage. At current count, I am managing four: one for my thesis, one for eyetracking in the Visitor Center for looking at posters and such, one for a side project involving concept mapping, and one for the general overarching video recording for the VC. The first three have been approved, and the last is in the middle of several rounds of negotiation on signage, etc., as I’ve mentioned before. Next up, we need to write a protocol for the wave tank video reflections, and one for ground-truthing the video-recording-to-automatic-timing-tracking-and-face-recognition data collection. In the meantime, the concept mapping protocol has been open for a year and needs to be closed. My thesis protocol has been approved nearly as long, went through several deviations in which I did things out of order or without getting updated approval from the IRB, and itself soon needs to be renewed. Plus, we already have revisions to the video recording protocol planned for once the original approval happens. Thank goodness the eyetracking protocol is already in place and in a sweet spot time-wise (not needing renewal very soon), as we have to collect some data on eyetracking and our Magic Planet for an upcoming conference, though I did have to check it thoroughly to make sure what we want to do in this case falls under what’s been approved.
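
Keeping track of all of this has me tempted to automate the nagging. Here is a toy sketch of a protocol tracker; the titles, dates, and renewal rules are invented for illustration, and real continuing-review requirements vary by IRB:

```python
from datetime import date, timedelta

# Illustrative entries only; dates and statuses are made up.
protocols = [
    {"title": "thesis",             "approved": date(2012, 3, 1),  "open": True},
    {"title": "eyetracking",        "approved": date(2012, 9, 15), "open": True},
    {"title": "concept mapping",    "approved": date(2012, 2, 1),  "open": True},
    {"title": "VC video recording", "approved": None,              "open": True},
]

RENEWAL_PERIOD = timedelta(days=365)  # assumed annual continuing review
WARNING_WINDOW = timedelta(days=60)   # start nagging two months ahead

def needs_attention(protocols, today=None):
    """Flag protocols still in review or coming up for annual renewal."""
    today = today or date.today()
    flags = []
    for p in protocols:
        if not p["open"]:
            continue
        if p["approved"] is None:
            flags.append((p["title"], "still in review"))
        elif today >= p["approved"] + RENEWAL_PERIOD - WARNING_WINDOW:
            flags.append((p["title"], "renewal due soon"))
    return flags

for title, status in needs_attention(protocols, today=date(2013, 1, 15)):
    print(f"{title}: {status}")
```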

On the positive side, though, we have a fabulous IRB office that is willing to work with us as we break new ground in visitor research. Together with them and the OSU legal team, we are crafting a strategy that we hope will be useful to other informal learning institutions as they proceed with their own research. Without their cooperation, very little of our grand plan could be realized. Funders are starting to realize this, too: before they make a final award for a grant, they require proof that you’ve at least discussed the basics of your project with your IRB office and that they’re on board.
