Happy new year everyone!

After all the fun and frivolities of the holiday season, I am left with not only the feeling that I probably shouldn’t have munched all those cookies and candies, but also the grave realization that crunch time for my dissertation has commenced. I’d like to have it completed by spring and, just like Katie, I’ve hit the analysis phase of my research and am desperately trying not to fall into the pit of never-ending data. All you current and former graduate students out there, I’m sure you can relate – all those wonderful hours, weeks, and months of frantically trying to make sense of the vast pool of data I have spent the last year planning for and collecting.
But fear not! ’Tis qualitative data, sir! And seeing as I have really enjoyed working with my participants and collecting data so far, I am going to attempt to enjoy discovering the outcomes of all my hard work. To me, the beauty of working with qualitative data is developing pictures of the answers to the questions that initiated the research in the first place. It’s like assembling a jigsaw puzzle with only a rough idea of what the image might look like at the end – you slowly keep adding pieces until the image becomes clear. I’m looking forward to seeing that image.

So what do I have to analyze? Well, namely ~20 interviews with docents, ~75 docent observations, ~100 visitor surveys and 2 focus groups (which will hopefully take place in the next couple of weeks). I will be using the research analysis tool NVivo, which will aid me in cross-analyzing the different forms of data using a thematic coding approach – analyzing for recurring themes within each data set. What I’m particularly psyched about is getting into the video analysis of the participant observations, whereby I’m finally going to get the chance to unpack some of that docent practice I’ve been harping on about for the last two years. Here, I’ll be taking a little multimodal discourse analysis and a little activity theory to break down the docent-visitor interactions and interpretive strategies observed.
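For readers less familiar with thematic coding, the core idea of cross-analyzing coded data can be sketched in a few lines. This is a minimal toy illustration, not how NVivo works internally; the data sources and theme labels below are made up for the example:

```python
from collections import Counter, defaultdict

# Hypothetical coded segments, as they might be exported from a coding
# tool: each pair is (data source, theme applied to a segment).
coded_segments = [
    ("interview", "object_handling"),
    ("interview", "storytelling"),
    ("observation", "object_handling"),
    ("observation", "questioning"),
    ("survey", "storytelling"),
    ("observation", "object_handling"),
]

# Tally how often each theme recurs within each data set, so themes can
# be compared across interviews, observations, and surveys.
themes_by_source = defaultdict(Counter)
for source, theme in coded_segments:
    themes_by_source[source][theme] += 1

for source, counts in themes_by_source.items():
    print(source, counts.most_common())
```

The payoff of this kind of tally is seeing which themes recur across data sets rather than within just one – the cross-analysis step.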

Right now, the enthusiasm is high! Let’s see how long I can keep it up 🙂 It’s Kilimanjaro, but there’s no turning back now.
Question: should we make available some of the HMSC VC footage for viewing to anyone who wants to see it? I was thinking the other day about what footage we could share with the field at large, as sharing is part of our mandate in the grant. Would it be helpful, for instance, to be able to see what goes on in our center, and maybe play around with viewing our visitors if you were considering either:

a) being a visiting scholar and seeing what we can offer

b) installing such cameras in your center

c) just seeing what goes on in a science center?

Obviously this brings up ethical questions, but for example, the Milestone Systems folks who made the iPad app for their surveillance system do put the footage from their cameras inside and outside their office building out there for anyone with the app to access. Do they have signs telling people walking up to, or in and around, their building that that’s the case? I would guess not.

I don’t mean that we should share audio, just video, but our visitors will already presumably know they are being recorded. What other considerations come up if we share the live footage? Others won’t be able to record or download footage through the app.

What would your visitors think?

Right now, we can set up profiles for an unlimited number of people who contact us to access the footage with a username and password, but I’m talking about putting it out there for anyone to find. What are the advantages, other than being able to circumvent contacting us for the login info? Other possible disadvantages: bandwidth problems, as we’ve already been experiencing.

So, chew over this food for thought on this Christmas eve, and let us know what you think.

Or at least across the globe, for now. One of the major goals of this project is building a platform that is mobile, both around the science center and beyond. So as I travel this holiday season, I’ll be testing some of these tools on the road, as we prepare for visiting scholars. We want the scholars to be able to come to work for about a month and set the system up as they like for capturing the interactions that provide the data they’re interested in. Then we want them to have the ability to log in to the system from their home institutions, continuing to collect and analyze data from home. The first step in testing that lies with those of us who are living in Corvallis and commuting to the center in Newport only a couple times a week.

To that end, we’re starting with a couple more PC laptops: one for the eye-tracker analysis software, and one more devoted to the higher processing needs of the surveillance system. The video analysis from afar is mostly a matter of getting the servers set up on our end, as the client software is free to install on an unlimited number of machines. But, as I described in earlier posts (here and here), we’ve been re-arranging cameras, installing more servers (we’re now up to one master and two slaves, with the master dedicated to serving the clients and each slave handling about half the cameras), and trying to test the data-grabbing abilities from afar. Our partner in New Zealand had us extend the recording time after the motion sensors decide nothing is going on, in order to try to fix frame-drop problems during export. We’re also installing a honking lot more ethernet capability in the next week or so to hopefully handle our bandwidth better. I’ll be testing the video export on the road myself this week.
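To see why the network upgrade matters, a back-of-the-envelope estimate helps. The camera count and per-stream bitrate below are assumptions for illustration only, not measured values from our system:

```python
# Rough bandwidth estimate for a multi-camera recording setup.
# All figures here are assumed, not measured.
cameras = 24                 # assumed number of cameras
mbps_per_camera = 4.0        # assumed per-stream bitrate (Mbit/s)
slaves = 2                   # recording servers, each taking half the cameras

total_mbps = cameras * mbps_per_camera
per_slave_mbps = total_mbps / slaves

print(f"Aggregate recording load: {total_mbps:.0f} Mbit/s")
print(f"Per recording server:     {per_slave_mbps:.0f} Mbit/s")
```

Even with these modest assumed numbers, each recording server would carry roughly half the usable capacity of 100 Mbit/s ethernet – before any remote clients start pulling exports over the same links.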

Then there’s the eye-tracker. It’s a different case, as it has proprietary data analysis software with a per-user license. We have two licenses, so that I can analyze my thesis data separately from any data collection that may now take place at the center, such as what I’m testing for an upcoming conference presentation on eye-tracking in museums. It’s not that the eye-tracker itself is heavy, but with the laptop and all the associated cords, it gets cumbersome to go back and forth all the time, and I’d rather not have the responsibility of moving that $30K equipment any more than I have to (I don’t think it’s covered under my renter’s insurance for the nights it would be stored there between campuses). So I’ve been working on setting up the software on the other new analysis laptop. Now I’m running into license issues, though I think the actual data transfer from one system to another is otherwise OK (except my files are pretty big – 2GB of data – just enough that it’s been a manual, rather than web-based, transfer so far).
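One practical tip when moving multi-gigabyte files by hand between machines: copy in chunks and verify a checksum on the receiving end, so a silently corrupted transfer doesn’t ruin an analysis later. Here’s a minimal sketch of that idea (file names and sizes are stand-ins, not my actual data files):

```python
import hashlib
import os
import tempfile

def copy_with_checksum(src, dst, chunk_size=1 << 20):
    """Copy a file in 1 MB chunks and return its SHA-256 digest, so the
    transfer can be verified on the receiving machine."""
    digest = hashlib.sha256()
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
            fout.write(chunk)
    return digest.hexdigest()

# Tiny demonstration with a temporary file standing in for a 2 GB export.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "gaze_data.bin")
    dst = os.path.join(tmp, "gaze_data_copy.bin")
    with open(src, "wb") as f:
        f.write(b"eye-tracking sample data" * 1000)
    sent = copy_with_checksum(src, dst)
    with open(dst, "rb") as f:
        received = hashlib.sha256(f.read()).hexdigest()
    print(sent == received)  # True
```

Comparing the digest computed while sending against one computed on the copy catches truncated or corrupted transfers before they reach the analysis software.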

And with that, I’m off to start that “eye-tracking … across the universe” (with apologies to the writers of the original Star Trek parody).

One of the great things about being in graduate school is the variety of experiences that are available in the competition for funding. Each one offers unique opportunities for growth and learning, but some are certainly more challenging than others. I’m currently working on a project that utilizes my skills in web design, but the requirements of the project go beyond what I was previously able to do. The past few weeks have been full of learning, expanding, and lots of trial and error. I finally found a few useful printed books (especially the Drupal Bible), and with their help I’ve been more successful in building the website with the functionality I envisioned. There is still quite a way to go, and it would be easier if I had direct access to the servers, but I’m still proud of the work I’ve been able to do and look forward to adding “web development” to my curriculum vitae.

(Since the website is still under quite a bit of construction, I have chosen not to release the URL at this point.)

A new partnership with the Philomath (pronounced fill-OH-muth for you out-of-town readers) High Robotics Engineering Division (PHRED) helped the HMSC Free-Choice Learning Lab overcome a major information design hurdle. An ongoing challenge for our observation system is recording usage of small, non-electronic movable exhibit components – think bones, shells, levers, and spinning wheels.

PHRED mentors Tom Health and Tom Thompson will work with students to develop tiny wireless microprocessor sensors that can be attached to any moving physical exhibit component and report its use to our database. The team will be using the popular Arduino development platform that has become the technological heart of the Maker movement.

This is a great partnership – the PHRED team has all the skills, enthusiasm, and creativity to tackle the project and build successful tools – not to mention gaining the recognition that comes from working on an NSF-funded project. Oregon Sea Grant gains more experience integrating after-school science clubs into funded research projects, while meeting the ever-challenging objective of engaging underserved communities.
Thanks to Mark for this update. 

Well, the data collection for my research has been underway for nearly two months now – how time flies! For those of you new to this project, my research centers on documenting the practice of science center docents as they interact with visitors. Data collection includes video observations of volunteer docents at HMSC using “visitor-mounted” Looxcie cameras, as well as pre- and post-observation interviews with those participating docents.

“Visitor-eye view using the Looxcies”

My current focus is getting the video observations of each of the 10 participating docents collected. In order to conduct a post-observation interview (which asks docents to reflect on their practice), I need about 10-15 minutes of video data of each docent interacting with the public. This doesn’t sound like much, but when you can’t guarantee a recruited family will interact with a recruited docent, and an actual interaction will likely only last from 30 seconds to a few minutes, it takes a fair few families wearing cameras to get what you need. However, I’m finding this process really enjoyable, both for getting to know the docents and for meeting visitors.
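The recruitment arithmetic behind “a fair few families” can be made concrete with a quick estimate. Every number below is an assumption for illustration, not a figure from the study:

```python
# Rough estimate of how many camera-wearing families it takes to reach
# the 10-15 minute footage target per docent. All inputs are assumed.
target_minutes = 12.0        # assumed midpoint of the 10-15 minute goal
interaction_minutes = 1.5    # assumed average usable interaction length
hit_rate = 0.4               # assumed chance a family meets the docent at all

families_needed = target_minutes / (interaction_minutes * hit_rate)
print(round(families_needed))  # 20
```

Under these assumptions, each docent needs on the order of twenty recruited families – which is why a 10-15 minute target takes far longer to collect than it sounds.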

When I first started this project, I was worried that visitors would be a little put off by the idea of having their whole visit recorded. What I’m actually finding is that either a) they want to help the poor grad student complete her thesis, b) they think the cameras are fun and “want a go,” or c) they totally want one of the HMSC tote bags being used as an incentive (what can I say, everyone loves free stuff, right?!). The enthusiasm for the cameras has gone as far as one gentleman running up to a docent, jumping up and down and shouting “I’m wearing a camera, I’m wearing a camera!” Additionally, for the Star Trek fans out there, a number of visitors and colleagues alike have remarked how much wearing a Looxcie makes a person look like a Borg (i.e. cyborg), particularly with that red light thing…

Now, you may ask, how does that not influence those lovely naturalistic interactions you’re supposed to be observing? Well, as many of us qualitative researchers know, unless you hide the fact that you are observing a person (an element our IRB process is not particularly fond of), you can never truly remove that influence, but you can assume that if particular practices are observed often enough, they are part of the landscape you are observing. The influence of the cameras may alter how naturalistic an interaction is, but that interaction is still a reflection of the social behaviors taking place. People do not completely change their personality and ways of life simply because a camera is around; more likely, any behavior changes are simply exaggerated or muted versions of their normal actions. And I am finding patterns, lots of patterns, in the discourse and action taking place between docents and visitors.

However, I am paying attention to how visitors and docents react to the cameras. When filtering the footage for interactions, I look out for any discourse that indicates camera influence is an issue. For example, the docent in the “jumping man” footage reacts with surprise to the man’s sudden shouting, opens his eyes wide and laughs nervously – so I noted on the video that the interaction from then on may be irregular. In one clip I have a docent talking non-stop about waves, seemingly without taking a breath, for nearly 8 minutes – which I noted seemed unnatural in comparison to their other, shorter dialogue events. Another clip has a docent bursting out laughing at a visitor wearing one of the Looxcies attached to his baseball cap using a special clip I have (not something I expected!) – which I noted likely made it harder for the visitor to forget about the Looxcie.

All in all, however, most visitors remark that they actually forget they are wearing the camera as their visit goes on, simply because they are distracted by the visit itself. This makes me happy, as the purpose of incorporating the Looxcies was to reduce the influence of being videoed as a whole. Visitors forget to the point where, during pilots, one man actually walked into the bathroom wearing his Looxcie and recorded some footage I wasn’t exactly intending to observe… suffice to say, I instantly deleted that video and updated my recruitment spiel to include a reminder not to take the cameras into the bathroom. Social science never ceases to surprise me!