Question: should we make some of the HMSC VC footage available for anyone who wants to see it? I was thinking the other day about what footage we could share with the field at large, as sharing is part of our mandate in the grant. Would it be helpful, for instance, to be able to see what goes on in our center, and maybe play around with viewing our visitors, if you were considering any of the following:

a) being a visiting scholar and seeing what we can offer

b) installing such cameras in your center

c) just seeing what goes on in a science center?

Obviously this brings up ethical questions, but consider an example: the Milestone Systems folks who made the iPad app for their surveillance system put the footage from the cameras inside and outside their own office building out there for anyone with the app to access. Do they have signs telling people walking up to, or in and around, their building that that’s the case? I would guess not.

I don’t mean that we should share audio, just video, and our visitors will presumably already know they are being recorded. What other considerations come up if we share the live footage? Others won’t be able to record or download footage through the app.

What would your visitors think?

Right now, we can set up profiles for an unlimited number of people who contact us to access the footage with a username and password, but I’m talking about putting it out there for anyone to find. What are the advantages, other than saving people the step of contacting us for login info? One possible disadvantage: bandwidth problems, which we’ve already been experiencing.

So, chew over this food for thought this Christmas Eve, and let us know what you think.

Or at least across the globe, for now. One of the major goals of this project is building a platform that is mobile, both around the science center and beyond. So as I travel this holiday season, I’ll be testing some of these tools on the road as we prepare for visiting scholars. We want the scholars to be able to come work with us for about a month and set the system up as they like for capturing the interactions that provide the data they’re interested in. Then we want them to be able to log in to the system from their home institutions, continuing to collect and analyze data from home. The first step in testing that lies with those of us who live in Corvallis and commute to the center in Newport only a couple of times a week.

To that end, we’re starting with a couple more PC laptops: one for the eye-tracker analysis software, and one more devoted to the higher processing needs of the surveillance system. Video analysis from afar is mostly a matter of getting the servers set up on our end, as the client software is free to install on an unlimited number of machines. But, as I described in earlier posts (here and here), we’ve been re-arranging cameras, installing more servers (we’re now up to one master and two slaves, with the master dedicated to serving the clients and each slave handling about half the cameras), and trying to test the data-grabbing abilities from afar. Our partner in New Zealand had us extend how long the system keeps recording after the motion sensors decide there’s nothing going on, to try to fix frame-drop problems during export. We’re also installing a honking lot more ethernet capability in the next week or so to hopefully handle our bandwidth better. I’ll be testing the video export on the road myself this week.
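To make the layout concrete, here’s a minimal sketch in Python of the master/slave split and the extended post-motion recording window. All the names and numbers are illustrative assumptions; the real system is configured through the surveillance software’s admin tools, not in code.

```python
# Hypothetical sketch of our recording-server layout (illustrative only;
# the actual configuration lives in the surveillance software's admin tools).

CAMERAS = [f"cam-{i:02d}" for i in range(1, 11)]  # assume 10 cameras

SERVERS = {
    "master": [],            # master is dedicated to serving client connections
    "slave-1": CAMERAS[:5],  # each slave records about half the cameras
    "slave-2": CAMERAS[5:],
}

# Motion-triggered recording keeps going for a while after the sensors
# decide nothing is happening. We lengthened that post-motion window
# (example values below) to try to fix frame drops during export.
PRE_MOTION_SECONDS = 5
POST_MOTION_SECONDS = 30

def recording_window(motion_start: float, motion_end: float) -> tuple[float, float]:
    """Return the (start, end) timestamps actually written to disk."""
    return (motion_start - PRE_MOTION_SECONDS, motion_end + POST_MOTION_SECONDS)
```

Keeping the master free of camera duty is the point of the split: it can serve remote clients without competing with the recording load for disk and CPU.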

Then there’s the eye-tracker. It’s a different case, as it has proprietary data analysis software with a per-user license. We have two licenses, so that I can analyze my thesis data separately from any data collection now taking place at the center, such as what I’m testing for an upcoming conference presentation on eye-tracking in museums. It’s not that the eye-tracker itself is heavy, but with the laptop and all the associated cords it gets cumbersome to carry back and forth all the time, and I’d rather not have the responsibility of moving that $30K equipment any more than I have to (I don’t think it’s covered under my renter’s insurance for the nights it would be stored at home between campuses). So I’ve been setting up the software on the other new analysis laptop. Now I’m running into license issues, though otherwise I think the actual data transfer from one system to another is ok (except that my files are pretty big – 2GB of data – just enough that the transfer has been manual, rather than web-based, so far).
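Since those transfers are manual for now, one habit worth sketching is checksumming the file before and after the copy, so a 2GB file that silently truncated doesn’t sneak into the analysis. This is a generic example, not part of the eye-tracker vendor’s software, and the file paths are made up:

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a large file in 1 MB chunks so we never hold 2 GB in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Usage after a manual copy (hypothetical example paths):
#   assert sha256_of(Path("thesis_data/session01.dat")) == \
#          sha256_of(Path("E:/transfer/session01.dat")), "copy corrupted"
```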

And with that, I’m off to start that “eye-tracking … across the universe” (with apologies to the writers of the original Star Trek parody).

I want to talk today about what many of us here have alluded to in other posts: the approval (and beyond) process of conducting ethical human research. What grew out of some truly unethical, primarily medical, research on humans many years ago has now evolved into something that can take up a great deal of your research time, especially on a large, long-duration grant such as ours. Many people (including me, until recently) thought of this process as primarily something to be done up-front: get approval, then more or less forget about it except for actually obtaining consent as you go, unless you significantly change your research questions or process. Wrong! It’s a much more constant, living thing.

We at the Visitor Center have several things that make us a weird case for our Institutional Review Board office at the university. First, even though what we do is generally educational research, as part of the Science and Mathematics Education program, our research sites (the Visitor Center and other community-based locations) are not typically “approved educational research settings” such as classrooms. Classrooms have been used so frequently over the years that they have a more streamlined approval process, unless you’re introducing a radically different type of experiment. Second, we host several types of visitor populations – the general public, OSU student groups, and K-12 school and camp groups – each with different privacy expectations, different requirements for attending (public: none; OSU student groups: attendance may be part of a grade), and thus different required levels and forms of consent for research. Plus, we’re trying to video record our entire population, and getting signatures from 150,000+ visitors per year just isn’t feasible. However, some of the research we’re doing will be more in-depth video recording than just the anonymized overall timing, tracking, and visitor recognition from exhibit to exhibit.

What this means is a whole stack of IRB protocols that someone has to manage. At current count, I am managing four: one for my thesis, one for eye-tracking in the Visitor Center (looking at posters and such), one for a side project involving concept mapping, and one for the general overarching video recording for the VC. The first three have been approved, and the last is in the middle of several rounds of negotiation on signage, etc., as I’ve mentioned before. Next up, we need to write a protocol for the wave-tank video reflections, and one for ground-truthing the video-recording-to-automatic-timing-tracking-and-face-recognition data collection. In the meantime, the concept mapping protocol has been open for a year and needs to be closed. My thesis protocol has been approved nearly as long, went through several deviations in which I did things out of order or without getting updated approval from the IRB, and now itself soon needs to be renewed. Plus, we already have revisions to the video recording protocol queued up for once the original approval happens. Thank goodness the eye-tracking protocol is already in place and in a sweet spot time-wise (not needing renewal very soon), as we have to collect some data around eye-tracking and our Magic Planet for an upcoming conference – though I did have to check it thoroughly to make sure what we want to do in this case falls under what’s been approved.

On the positive side, though, we have a fabulous IRB office that is willing to work with us as we break new ground in visitor research. Together with them and the OSU legal team, we are crafting a strategy that we hope will be useful to other informal learning institutions as they proceed with their own research. Without their cooperation, very little of our grand plan could be realized. Funders are starting to realize this, too: before they make a final award for a grant, they require proof that you’ve at least discussed the basics of your project with your IRB office and that they’re on board.

Here’s a roundup of some of our technology testing and progress lately.

First, reflections from our partners Dr. Jim Kisiel and Tamara Galvan at California State University, Long Beach. Tamara recently tested the iPad with QuestionPro/SurveyPocket, Looxcie cameras, and a few other apps to conduct surveys at the Long Beach Aquarium, which doesn’t have wifi in the exhibit areas. Here is Jim’s report on their usefulness:

“[We] found the iPad to be very useful. Tamara used it as a way to track, simply drawing on a pdf and indicating times and patterns, using the app Notability. We simply imported a pdf of the floorplan, and then duplicated it each time for each track. Noting much more than times, however, might prove difficult, due to the limited precision of a stylus. One thing that would make this even better would be having a clock right on the screen. Notability does allow for recording, and a timer that comes into play when the recording is started. This actually might be a nice complement, as it does allow for data collector notes during the session. Tamara was unable to use this feature, though, due to the fact that the iPad could only run one recording device at a time – and she had the looxcie hooked up during all of this.

Regarding the looxcie: Tamara had mixed results with this. While it was handy to record remotely, she found that there were many signal drop-outs where the mic lost contact with the iPad. We aren’t sure whether this was a limitation of the bluetooth and distance, or whether there was just too much interference in the exhibit halls. While the looxcie would have been ideal for turning the device on and off remotely, the tendency to drop communication between devices sometimes made it difficult to activate. As such, she often just turned on the looxcie at the start of the encounter. It is also worth noting that Tamara used the looxcie as an audio device only, and sound quality was fine.
 
Tamara had mixed experiences with SurveyPocket. Aside from some of the formatting limitations, we weren’t sure how effective it was for open-ended questions. I was hoping there was a program that would allow for an audio recording of such responses. She did manage to create a list of key words that she checked off during the open-ended questions, in addition to jotting down what the interviewee said. This seemed to work OK. She also had some issues syncing her data – at one point, it looked like much of her data had been lost, due in part to … [problems transferring] her data from the iPad/cloud back to her computer. However, staff was helpful and eventually recovered the data.
 
Other things: The iPad holder (Handstand) was very handy, and people seemed OK with using it to complete a few demographic questions. Having the tracking info on the pad made it easier to juggle papers, although she still needed to bring her IRB consent forms with her for distribution. In the future, I think we’ll look to incorporate the IRB consent into the survey in some way.”
Interestingly, I just discovered that a new version of SurveyPocket *does* allow audio input for open-ended questions. However, OSU has recently purchased university-wide licenses from a different survey company, Qualtrics, which does not yet have an offline app mode for tablet-based data collection. One seems to be in development, though, so we may change our minds about which company we go with when the QuestionPro/SurveyPocket license is up for renewal next year. It’s amazing that the research I did on these apps last year is already almost out of date.
Along the same lines of software updates kinda messing up your well-laid plans, we’re purchasing a couple of laptops to do more data analysis away from the video camera system’s desktop computer and away from the eye-tracker. We were suddenly confronted with the Windows 8 vs. Windows 7 dilemma, though: the software for both of these systems is Windows 7-based, but now that Windows 8 is out, the school had to make a call about whether or not to upgrade. Luckily for us, the school is skipping Windows 8 for the moment, which means we can still go with Windows 7 on the new laptops and actually use the software; the camera and eye-tracker programs themselves likely won’t be Windows 8-ready until sometime in the new year.
Lastly, we’re still bulking up our capacity for data storage and sharing, as well as internet bandwidth for video data collection. I recently put in another new server dedicated to handling the sharing of data, with the two older servers as slaves and the cameras spread out between them. In addition, we put in a NAS system and five 3TB hard drives for storage. Mark assures me we’re getting to the point of having this “initial installation” of stuff finalized …
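For a rough sense of how long that storage lasts, here’s a back-of-the-envelope calculation in Python. Every input is an assumption for illustration (camera count, bitrate, duty cycle, parity overhead), not a measurement from our actual system:

```python
# Back-of-the-envelope storage estimate; all inputs are illustrative
# assumptions, not measurements from our system.
drives, drive_tb = 5, 3.0
raw_tb = drives * drive_tb                   # 15 TB raw
usable_tb = raw_tb * (drives - 1) / drives   # ~12 TB if one drive's worth goes to parity

cameras = 10
mbps_per_camera = 2.0   # assumed average bitrate with motion-triggered recording
duty_cycle = 0.5        # assumed fraction of the day cameras actually record

tb_per_day = cameras * mbps_per_camera * duty_cycle * 86_400 / 8 / 1e6  # megabits -> TB
print(f"{usable_tb:.0f} TB usable, ~{tb_per_day:.2f} TB/day, "
      f"~{usable_tb / tb_per_day:.0f} days of footage")
```

With those guesses, the NAS holds on the order of a hundred days of footage – exactly the kind of number that tells you whether an “initial installation” is really final.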

And I don’t just mean Thanksgiving! Lately, I’ve run across an exhibit, a discussion, and now an article on things wearing down and breaking, so I figured that meant it was time for a blog post.

It started with my visit to the Exploratorium, where they find that stuff breaks, sometimes unexpectedly. Master tinkerers and builders that they are, they made it into an exhibit of worn, bent, or flat-out broken parts of their exhibits. It may take hundreds or even hundreds of thousands of uses, but when your visitorship is near a million per year, it doesn’t take that many days for micro-changes to suddenly become visible as macro changes.

Then Laura suggested that we keep track of all the equipment we’ve been buying in case of, you guessed it, breakage (or other loss). So we’ve started an inventory that will not only serve as a nice record for the project of all the bits and bobs we’ve had to buy (so far, over 300 feet of speaker wire for just 10 cameras), but also help us replace things more easily should something go wrong. Which we know it will, eventually – and frankly, if we keep our records well, we’ll have a sense of how quickly it goes wrong. In our water-laden environment of touch pools and wave tanks, that will very likely be sooner than we hope.
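The inventory itself is just a spreadsheet, but the idea is worth sketching. Here’s a hypothetical version of the record schema in Python (the fields and the sample entry are made up for illustration): purchase details make replacement easy, and the date fields are what let you estimate how quickly things wear out.

```python
import csv
from datetime import date

# Hypothetical inventory schema; ours lives in a spreadsheet, but these are
# the kinds of fields that make replacement (and failure-rate tracking) easy.
FIELDS = ["item", "quantity", "unit_cost_usd", "purchased", "installed", "failed", "notes"]

rows = [
    {"item": "speaker wire (ft)", "quantity": 300, "unit_cost_usd": 0.25,
     "purchased": date(2012, 9, 1), "installed": date(2012, 10, 15),
     "failed": "", "notes": "cabling for 10 cameras"},
]

with open("inventory.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(rows)
```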

Finally, John Baek’s Open and Online Lifelong Learning newspaper linked to this story from Wired magazine about the people who are deliberately trying to break things, to make the unexpected expected.

So, have a great Thanksgiving break (in the U.S.), and try not to break anything in the process.

Well, the data collection for my research has been underway for nearly 2 months now – how time flies! For those of you new to this project, my research centers on documenting the practice of science center docents as they interact with visitors. Data collection includes video observations of volunteer docents at HMSC using “visitor-mounted” looxcie cameras, as well as pre- and post-observation interviews with those participating docents.

“Visitor-eye view using the looxcies”

My current focus is collecting the video observations of each of the 10 participating docents. In order to conduct a post-observation interview (which asks docents to reflect on their practice), I need about 10-15 minutes of video of each docent interacting with the public. This doesn’t sound like much, but when you can’t guarantee a recruited family will interact with a recruited docent, and an actual interaction will likely only last from 30 seconds to a few minutes, it takes a fair few families wearing cameras to get what you need (see the rough numbers sketched below). However, I’m finding this process really enjoyable, both in getting to know the docents and in meeting visitors.
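To put rough numbers on “a fair few” – all of these rates are illustrative guesses, not my actual figures:

```python
# Rough recruitment math with illustrative (not measured) numbers.
minutes_needed = 12.5        # target: 10-15 min of usable footage per docent
avg_interaction_min = 2.0    # interactions run ~30 s to a few minutes
interaction_rate = 0.4       # assumed chance a recruited family meets the docent

interactions_needed = minutes_needed / avg_interaction_min   # ~6-7 interactions
families_needed = interactions_needed / interaction_rate     # ~16 families
print(f"~{families_needed:.0f} recruited families per docent")
```

Under those assumptions, each docent needs on the order of fifteen recruited families – multiplied across 10 docents, the tote-bag budget adds up quickly.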

When I first started this project, I was worried that visitors would be a little put off by the idea of having their whole visit recorded. What I’m actually finding is that either a) they want to help the poor grad student complete her thesis, b) they think the cameras are fun and “want a go,” or c) they totally want one of the HMSC tote bags being used as an incentive (what can I say, everyone loves free stuff, right?!). The enthusiasm for the cameras has gone as far as one gentleman running up to a docent, jumping up and down, and shouting “I’m wearing a camera, I’m wearing a camera!” Additionally, for the Star Trek fans out there, a number of visitors and colleagues alike have remarked how much wearing a looxcie makes a person look like a Borg (i.e., a cyborg), particularly with that red light thing…

Now how, you may ask, does that not influence those lovely naturalistic interactions you’re supposed to be observing? Well, as many of us qualitative researchers know, unless you hide the fact that you are observing a person (an element our IRB process is not particularly fond of), you can never truly remove that influence; but you can assume that if particular practices are observed often enough, they are part of the landscape you are observing. The cameras may alter how naturalistic an interaction is, but that interaction is still a reflection of the social behaviors taking place. People do not completely change their personality and way of life simply because a camera is around; more likely, any behavior changes are simply exaggerated or muted versions of their normal actions. And I am finding patterns, lots of patterns, in the discourse and action taking place between docents and visitors.

However, I am paying attention to how visitors and docents react to the cameras. When filtering the footage for interactions, I look out for any discourse that indicates camera influence is an issue. For example, the docent in the “jumping man” footage reacted with surprise to the man’s sudden shouting, opened his eyes wide, and laughed nervously – so I noted on the video that the interaction from then on might be irregular. In one clip, I have a docent talking non-stop about waves, seemingly without taking a breath, for nearly 8 minutes – which I noted seemed unnatural in comparison to their other, shorter dialogue events. Another clip has a docent bursting out laughing at a visitor wearing one of the looxcies attached to his baseball cap with a special clip I have (not something I expected!) – which I noted would likely have made it harder for the visitor to forget about the looxcie.

All in all, however, most visitors remark that they actually forget they are wearing the camera as their visit goes on, simply because they are distracted by the visit itself. This makes me happy, as the whole purpose of incorporating the looxcies was to reduce the influence of being videoed. Visitors forget to the point where, during pilots, one man actually walked into the bathroom wearing his looxcie and recorded some footage I wasn’t exactly intending to observe… suffice it to say, I instantly deleted that video and updated my recruitment spiel to include a reminder not to take the cameras into the bathroom. Social science never ceases to surprise me!