Summer is flying by and the hard work in the Cyberlab continues.  If you have been keeping up with previous posts, we have had researchers in residence as part of our Cyber Scholar program, movement on our facial recognition camera installations, and conference presentations taking place around the country and internationally.  Sometimes I forget just how amazing the implementation of unobtrusive audio and video collection methods is to the field of visitor research and exhibit evaluation until I talk to a researcher or educator working at another informal learning center.  The methods and tools we are applying have huge implications for streamlining these types of projects.  It is exciting to be part of an innovative project in the effort to understand free-choice learning, and after a year in the lab I have gained several new skills, particularly through learning by doing.

As with any research, or any project in general, there are highs and lows in trying to get things done and working.  Ideally, everything works the first time (or when plugged in), there are no delays, and moving forward is the only direction.  Of course, in reality there are tool constraints, pieces to reconsider and reconfigure, and several starts and stops in the effort to figure things out.  There is no Cyberlab “manual” – we are creating it as we go – and this has been a great lesson for me personally when it comes to my approach to both personal and professional experiences, particularly with future opportunities in research.

Speaking of research, this past week I started collecting the data that will go toward my Master’s thesis.  As I am looking at family interactions and evidence of learning behaviors around the Ideum touch table, I am getting the chance to use the tools of the Cyberlab, but also to gain experience recruiting and interviewing visitors.  My data collection will last through the month of August, with sampling sessions during morning and afternoon hours on every day of the week.  This will allow for a broad spectrum of visitors, though I am purposively sampling “multi-generational” family groups: at least one adult and one child using the exhibit.  After at least one minute of table use, I interview the group about their experience using the touch table, and I will review the footage to further analyze what types of learning behaviors may be occurring.
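The sampling plan described above (morning and afternoon sessions on every day of the week, for the whole month) can be sketched as a simple schedule generator. This is purely illustrative, not the actual study instrument; the block names and session labels are my own.

```python
from datetime import date, timedelta

def august_sessions(year):
    """Yield (date, block) pairs: one "morning" and one "afternoon"
    candidate sampling session for every day in August, so each day
    of the week is covered in both time blocks."""
    day = date(year, 8, 1)
    while day.month == 8:
        for block in ("morning", "afternoon"):
            yield day, block
        day += timedelta(days=1)

# 31 days x 2 blocks = 62 candidate sampling sessions
sessions = list(august_sessions(2014))
```

Enumerating every day-by-block combination up front makes it easy to check that no weekday/time-of-day stratum is missed, even if individual sessions are later dropped.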

During my observations, I have been reflecting on my time as an undergraduate conducting research in marine biology.  At that point, I was looking at the distribution and feeding habits of orange sea cucumbers in Puget Sound.  Now the “wildlife” I am studying is the human species, and as I sit and observe from a distance, I think about how wildlife biologists wait in the brush for the animal they are studying to approach, interact, and depart the area.  Over the course of my sampling sessions I am waiting for a family group to approach, interact, and depart the gallery.  There are so many questions I have been thinking about with regard to family behavior in a public science center.  How do they move through the space, what exhibits attract particular age groups, how long do they decide to stay in any particular area, and what do they discuss while they are there?  I am excited to begin analyzing the data.  No doubt it will lead to more questions…

Having more time to do research, of course! With the pressures and schedules of classes over, students everywhere are turning to a dedicated stretch of research work: on their own theses and dissertations, in paid research jobs, or in internships. That means, with Laura and me graduating, there should be a new student taking over the Cyberlab duties soon. However, the other thing summer means is the final push to nail down funding for the fall, and thus our replacement has not yet actually been identified.

In the meantime, though, Laura and I have managed to do a pretty thorough soup-to-nuts inventory of the lab’s progress over the last couple of years for the next researchers to hopefully pick up and run with:

Technology: Cameras are pretty much in and running smoothly. Laura and I have worked most of the glitches out, and I think we have installation down to a relatively smooth routine: place a camera, align it, install it physically, then set it up on the servers and make it available for everyone’s use. I’ve written a manual that I think spells out the process start to finish. We’ve also got expanded network capability coming in the form of our own switch, which should help with traffic.

Microphones, however, are a different story. We are still torn between installing mics in our lovely stone exhibitry around the touch tanks or just going with what the cameras pick up with their built-in mics. The tradeoff: embedding mics means damaging the rock enclosure, while the built-in mics give audio garbled by the running water of the exhibit. We may be able to hang mics from the ceiling, but that testing will be left to those who follow. It’s less of a crucial point right now, however, as we don’t have any way to automate audio processing.

Software development for facial recognition is progressing as our Media Macros contractors are heading to training on the new system they are building into our overall video analysis package. Hopefully we’ll have that in testing this next school year.

Eye-tracking is nearly ironed out, too. We have a couple more issues to figure out around tracking on the Magic Planet in particular, but otherwise even the stand-alone tracking is ready to go, and I have trained a couple of folks on how to run studies. Between that and the manuals I compiled, hopefully that work can continue without much lag, and certainly without as much learning time as it took me to work out all the kinks.

Exhibit-wise, the wave tanks are all installed and getting put through their paces with the influx of end-of-year school groups. Maybe even starting to leak a little bit as the wear-and-tear kicks in. We are re-conceptualizing the climate change exhibit and haven’t started planning the remodeling of the remote-sensing exhibit room and Magic Planet. Those two should be up for real progress this year, too.

Beyond that, pending IRB approval due any day for the main video system, we should be very close to collecting research data. We planned a list of things that we need to look at for each of the questions in the grant, and there are pieces that the new researcher can get started on right away to start groundtruthing the use of video observations to study exhibits as well as answering questions about the build-and-test nature of the tsunami wave tank. We have also outlined a brief plan for managing the data as I mentioned a couple posts ago.

That makes this my last post as research assistant for the lab. Stay tuned; you’re guaranteed to hear from the new team soon. You might even hear from me as I go forth and test using the cameras from the other side of the country!

 

I want to talk today about what many of us here have alluded to in other posts: the approval process (and beyond) of conducting ethical human research. What grew out of grossly unethical, primarily medical, research on humans many years ago has evolved into something that can take up a great deal of your research time, especially on a large, long-duration grant such as ours. Many people (including me, until recently) thought of this process as primarily something done up front: get approval, then mostly forget about it except for the actual gaining of consent as you go, unless you significantly change your research questions or methods. Wrong! It’s a much more constant, living thing.

We at the Visitor Center have several things that make us a weird case for our Institutional Review Board office at the university. First, even though what we do is generally educational research, as part of the Science and Mathematics Education program, our research sites (the Visitor Center and other community-based locations) are not typically “approved educational research settings” such as classrooms. Classrooms have been used so frequently over the years that they have a more streamlined approval process, unless you’re introducing a radically different type of experiment. Second, we host several types of visitor populations: the general public, OSU student groups, and K-12 school and camp groups. Each has different privacy expectations and different requirements for attending (public: none; OSU student groups: attendance may be part of a grade), and thus requires different levels and forms of consent for research. Plus, we’re trying to video record our entire population, and getting signatures from 150,000+ visitors per year just isn’t feasible. Finally, some of the research we’re doing will involve video recording that is more in-depth than the anonymized overall timing, tracking, and visitor recognition from exhibit to exhibit.

What this means is a whole stack of IRB protocols that someone has to manage. At current count, I am managing four: one for my thesis, one for eyetracking in the Visitor Center for looking at posters and such, one for a side project involving concept mapping, and one for the general overarching video recording for the VC. The first three have been approved, and the last is in the middle of several rounds of negotiation on signage, etc., as I’ve mentioned before. Next up we need to write a protocol for the wave tank video reflections, and one for groundtruthing the video-recording-to-automatic-timing-tracking-and-face-recognition data collection. In the meantime, the concept mapping protocol has been open for a year and needs to be closed. My thesis protocol has been approved nearly as long, went through several deviations in which I did things out of order or without getting updated approval from the IRB, and now itself soon needs to be renewed. Plus, we already have revisions to the video recording protocol staged for once the original approval happens. Thank goodness the eyetracking protocol is already in place and in a sweet spot time-wise (not needing renewal very soon), as we have to collect some data around eyetracking and our Magic Planet for an upcoming conference, though I did have to check it thoroughly to make sure what we want to do in this case falls under what’s been approved.

On the positive side, though, we have a fabulous IRB office that is willing to work with us as we break new ground in visitor research. Among the IRB office, our team, and the OSU legal team, we are crafting a strategy that we hope will be useful to other informal learning institutions as they proceed with their own research; without that cooperation, very little of our grand plan could be realized. Funders are starting to recognize this, too: before making a final award, some now require proof that you’ve at least discussed the basics of your project with your IRB office and that they’re on board.

Despite our fancy technology, there are some pieces of data we have to gather the old-fashioned way: by asking visitors. One thing we’d like to know is why visitors chose to visit on this particular occasion. We’re building off of John Falk’s museum visitor motivation and identity work, which began with a survey asking visitors to rate a series of statements on Likert (1-5) scales as to how applicable each is for them that day, and which reveals a rather small set of motivations driving the majority of visits. We have also used this framework in a study of three of our local informal science education venues, finding that an abbreviated version works equally well to determine which (if any) of these motivations drives visitors. The latest version, tried at the Indianapolis Museum of Art, uses photos along with the abbreviated set of statements for visitors to identify their visit motivations.
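The scoring logic behind this kind of survey is straightforward: average each respondent's Likert ratings within each motivation category, then take the highest-scoring category as the dominant motivation. The sketch below is purely illustrative, not Falk's actual instrument; the statement wordings and the statement-to-category mapping are invented for the example (the category names are Falk's well-known ones).

```python
# Hypothetical statement -> motivation-category mapping (illustrative only).
STATEMENT_CATEGORIES = {
    "I came so the kids could learn something": "facilitator",
    "I wanted to see the exhibits I've heard about": "experience_seeker",
    "This place is a refuge from my busy day": "recharger",
    "I am deeply interested in marine science": "professional_hobbyist",
    "Exploring new things is part of who I am": "explorer",
}

def dominant_motivation(ratings):
    """ratings: dict mapping statement text -> 1-5 Likert score.
    Averages scores per category and returns the top category."""
    totals, counts = {}, {}
    for statement, score in ratings.items():
        cat = STATEMENT_CATEGORIES[statement]
        totals[cat] = totals.get(cat, 0) + score
        counts[cat] = counts.get(cat, 0) + 1
    means = {cat: totals[cat] / counts[cat] for cat in totals}
    return max(means, key=means.get)
```

With an abbreviated instrument like this, each category may be represented by only one or two statements, which is why the per-category averaging (rather than a raw sum) matters if categories have unequal statement counts.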

We’re implementing a version on an iPad kiosk in the VC for a couple of reasons: first, we genuinely want to know why folks are visiting, and want to be able to correlate identity motivations with the automated behavior, timing, and tracking data we collect from the cameras. Second, we hope people will stop long enough for us to get a good reference photo for the facial recognition system. Sneaky, perhaps, but it’s not the only place we’re trying to position cameras for good reference shots. And if all goes well with our signage, visitors will be more aware than ever that we’re doing research, and that it is ultimately aimed at improving their experience. Hopefully that awareness will allay whatever fears remain about the embedded research tools, fears we hope will be minimal to start with.

How do we get signs in front of visitors so they will actually read them? Think about how many signs at the front door of your favorite establishment you walk past without reading. How many street signs, billboards, and on-vehicle ads pass through our vision as barely a blur? While exhibit designers spend many an hour toiling away to create the perfect signs offering visitors some background and possible ways to interact with objects, many visitors gloss right over them, preferring to just start interacting or looking in their own way. That may be fine in most cases, but for our video research and the informed consent our subjects need to give, signs at the front door are going to be our best bet to inform visitors without unduly interrupting their experience, or making museum entry an additional, unreasonable burden for visitors or staff. Plus, the video recording is not optional at this point: you can visit and be recorded, or you can’t visit.

Thankfully, we have the benefit of the Exploratorium and other museums that have done video research in certain exhibits, tested signs at the entrances, and measured the percentage of visitors who subsequently knew they were being recorded for research. Two studies by Exploratorium staff showed that signs at the entrances to specifically cordoned-off areas, stating that videotaping for research was in progress, were effective at informing 99% of visitors to those exhibit areas that a) videotaping was happening and b) it was for research. One interesting point: their testing of the signs and the language on them revealed that the camera icon needed to look rather old-school and highly professional, to clearly mark official research purposes and distinguish the recording from the average visitor making home movies while visiting a museum.


Source: store.sony.com via Free-Choice on Pinterest

Never mind that the cameras we’re actually using look more like surveillance cameras.

 

So our strategy, crafted with our Institutional Review Board, is several-fold. Signs at the front entrance (and the back entrance, for staff, volunteers, and other HMSC visitors who might be touring the entire research facility for other reasons and popping into the VC) will feature the large research-camera icon and a few, hopefully succinct and clear, words about why we’re doing research and where to get more information. We also have smaller signs on some of the cameras themselves with a short blurb noting that the camera is there for research purposes. Next, we’re making handouts that will explain in more detail what our research is about and how the videos help us with that work. We’ll also put that information on our web site, and add the web address for the video research information to our rack cards and other promotional material we send around town and around Oregon. Of course, our staff and volunteers are also being included in the process so they are well equipped to answer visitor questions.

Then there’s the thorny issue of students. University students over 18 who are visiting as part of a required class will have to consent individually due to federal FERPA regulations; we’re working with the IRB to make this as seamless a process as possible. We’ll also be contacting local school superintendents to let them know about the research so they can inform the parents of any class attending on a field trip. Students on class field trips will be assumed to have parental consent by virtue of the signed school permission slips required to attend Hatfield.

Hopefully this will all work. The Exploratorium’s work showed that even most people who didn’t realize they were being recorded were not much bothered by the recording, and even fewer would have avoided the area had they known beforehand. As always, though, it will be a work in progress as we get visitor and volunteer feedback and move forward with the research.

Gutwill, J. (2003). “Gaining visitor consent for research II: Improving the posted-sign method.” Curator 46(2): 228-235.

Gutwill, J. (2002). “Gaining visitor consent for research: Testing the posted-sign method.” Curator 45(3): 232-238.

We’re ready for round 2 of camera placement, having met with lab advisor Sigrid Norris on Monday. We’ll go back to focusing on the wave- and touch-tank areas and getting full coverage of interactions. Basically, our first test left us spread too thin to really capture what’s going on, and our programmer said face detection and recognition is not yet robust enough to track visitors through the whole center anyway. Though now, of course, we’re running out of Ethernet ports in the front half of the Visitor Center for those extra cameras.

One thing we had been noticing with the cameras was a lot of footage of “backs and butts” as people walk away from one camera or face a different exhibit. Sigrid’s take is that this is actually valuable data, capturing the multimodal communication of posture and of foot and body position. This is especially true for peripheral participants, such as group members who are watching more than driving the activity, or other visitors learning how to use exhibits by watching those who got there first.

We did figure out the network issue that was causing the video stoppage and skipping. The cameras had all been set up on one server, on the assumption that the system’s two servers would share the load between them; in fact, the cameras needed to be distributed across both servers for load sharing to work. This requires some one-time administrative configuration on the back end, but the client (what researchers using the system see) still displays all camera feeds regardless of which server is driving which camera at any given time. So now it’s all hunky-dory.
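The fix described above amounts to partitioning the cameras across the recording servers instead of piling them all on one. A minimal round-robin sketch of that idea, with made-up camera and server names (our actual system's configuration is done through its own administrative tools, not code like this):

```python
def assign_cameras(cameras, servers):
    """Round-robin cameras across servers so recording load is shared.
    Returns a dict mapping each server to its list of cameras."""
    assignment = {server: [] for server in servers}
    for i, cam in enumerate(cameras):
        assignment[servers[i % len(servers)]].append(cam)
    return assignment

# Hypothetical names: six cameras split across the two servers.
cams = [f"cam-{n:02d}" for n in range(1, 7)]
layout = assign_cameras(cams, ["server-a", "server-b"])
# server-a gets cam-01, cam-03, cam-05; server-b gets cam-02, cam-04, cam-06
```

The key property, mirroring what the client software gives us, is that the partition is an implementation detail: a viewer that unions all the per-server feed lists sees every camera no matter which server drives it.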

The wave tanks are also getting some redesigns after all the work and testing over the summer. The shore tank’s wave maker (the “actuator”) will no longer be made of aluminum (too soft), and will get hydraulic braking to slow the handle as it reaches the end points. The wave energy tank buoys are being finished; then that tank will be sealed and used to show electricity generation in houses and buildings set on top. We’ll also get new tables for all three tanks, which will lack middle legs and probably give us a bit more space in the final footprint. And we’ll have the flooring replaced with wet-lab flooring to prevent slip hazards and improve drainage.