Members of the Cyberlab were busy this week. We set up the multi-touch table and touch wall in the Visitors Center and hosted Kate Haley Goldman as a guest researcher. In preparation for her visit, we adjusted camera and table placement, tinkered with microphones, and tested the data collection pieces by reviewing the video playback. It was a great opportunity to evaluate our lab setup for other incoming researchers and their data collection needs, and to try things live with Ideum’s technology!

Kate traveled from Washington, D.C. to collect data on the interactive content by Open Exhibits displayed on our table. As the Principal of Audience Viewpoints, Kate conducts research on audiences and learning in museums and informal learning centers. She is investigating the use of multi-touch technology in these settings, and we are thankful for her insight as we implement this exhibit format at Hatfield Marine Science Center.

Watching the video playback of visitor interactions with Kate was fascinating. We discussed flow patterns around the room based on table placement. We looked at how stay time at the table varied with program content. As the day progressed, more questions came up. How long were visitors staying at the other exhibits, which have live animals, versus the table placed nearby? While moving about the room, would visitors return to the table multiple times? What were the demographics of the users? Were they bringing their social group with them? What were the users talking about: the technology itself, or the content on the table? Was the technology intuitive to use?

I felt the thrill of the research process this weekend. It was a wonderful opportunity to “observe the observer” and witness Kate in action. I enjoyed seeing visitors use the table and thinking about the interactions between humans and technology. How effective is this format for presenting science concepts, and are users learning something? I will reflect on this experience as I design my research project around science learning and the use of multi-touch technology in an informal learning environment such as Hatfield Marine Science Center.

The challenges of integrating the natural and social sciences are not news to us. Since King, Keohane, and Verba’s (KKV’s) book “Designing Social Inquiry,” the field of qualitative methodology has received considerable attention and development. Their work generated great discussion about qualitative studies, as well as criticism and, at times, the misguided idea that qualitative research benefits from quantitative approaches but not the other way around. Since then, the literature has debated the contrasts between qualitative and quantitative observations, regression approaches versus theoretical work, and new approaches to mixed-methods design. Nevertheless, there are still many research frontiers for qualitative researchers to cross, and significant resistance from conservative views of science that question the validity of qualitative results.

Last week, while participating in the LOICZ (Land-Ocean Interactions in the Coastal Zone) symposium in Rio de Janeiro, Brazil, I was very encouraged by the apparent move towards an integrated approach between the natural and social sciences. Important scientists from all over the world and from many different disciplines were discussing Earth systems and contributing steps towards sustainability of the world’s coastal zones. Many of the student presentations, including mine, had a social research component. I had many positive conversations about the Cyberlab work in progress and how it sits at the edge of building capacity for scientists, researchers, educators, exhibit designers, civil society, and more.

However, even at this meeting, over dinner conversation, I stumbled into the conflicting views that are part of the quantitative vs. qualitative debate: the understanding of the scientific process as “only hypothesis driven,” where numbers, and numbers alone, offer absolute “truth.” It is still a challenge for me not to become extremely frustrated while articulating the importance of social science in this context, swimming against a current of uninformed opinions about the nature of what we do and disregard for what it ultimately accomplishes. I think it is more than proven in today’s world that understanding the biogeophysics of Earth’s systems is essential, but that alone won’t solve the problems underlying the interaction of the natural and social worlds. We cannot move towards a “sustainable future” without the work of social scientists, and I wish there were more of a consensus about its place and importance within the natural science community.

So, in the spirit of “hard science”…

If I can’t have a research question, here are the null and alternative hypotheses I can investigate:

H0: “Moving towards a sustainable future is not possible without the integration of the natural and social sciences.”

H1: “Moving towards a sustainable future is possible without the integration of the natural and social sciences.”

Although empirical research can never prove beyond a shadow of a doubt that a comparison is true (a statistical test only lets us reject, or fail to reject, a hypothesis at the conventional 95% or 99% confidence levels), I think you would agree that, if these hypotheses could be tested, we would fail to reject the null.
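And in that same tongue-in-cheek spirit, here is a minimal sketch of what that reject / fail-to-reject decision rule looks like in practice. The groups, scores, and numbers are entirely synthetic, invented for illustration only:

```python
# Illustration of the reject / fail-to-reject logic only; the numbers
# below are synthetic and stand in for any testable comparison.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.5, scale=0.1, size=30)  # made-up scores
group_b = rng.normal(loc=0.5, scale=0.1, size=30)  # made-up scores

t_stat, p_value = stats.ttest_ind(group_a, group_b)
alpha = 0.05  # the conventional 95% confidence level

if p_value < alpha:
    print(f"p = {p_value:.3f}: reject the null hypothesis")
else:
    print(f"p = {p_value:.3f}: fail to reject the null hypothesis")
# Failing to reject never *proves* the null; the data simply give us
# no grounds to abandon it at the chosen confidence level.
```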

With all that said, I emphasize here today the work Cyberlab is doing and what it will accomplish in the future, sitting at the frontiers of marine science and science education. Exhibits such as the wave laboratory, the climate change exhibit in the works, the research already completed in the lab, and the many projects and partnerships are prime examples of that. Cyberlab is contributing to a collaborative effort to understand and disseminate marine and coastal issues, and building capacity for effective steps towards sustainable land-ocean interactions.

I am very happy to be a part of it!


Having more time to do research, of course! With the pressures and schedules of classes over, students everywhere are turning to a dedicated stretch of research work, whether on their own theses and dissertations, in paid research jobs, or in internships. That means, with Laura and me graduating, a new student should be taking over the Cyberlab duties soon. The other thing summer means, however, is the final push to nail down funding for the fall, so our replacement hasn’t actually been identified yet.

In the meantime, though, Laura and I have managed to do a pretty thorough soup-to-nuts inventory of the lab’s progress over the last couple of years for the next researchers to pick up and run with:

Technology: Cameras are pretty much in and running smoothly. Laura and I have worked out most of the glitches, and I think we have installation down to a relatively smooth system: placing a camera, aligning it, and installing it physically, then setting it up on the servers and making it available for everyone’s use. I’ve written a manual that I think spells out the process start to finish. We’ve also got expanded network capability coming in the form of our own switch, which should help with traffic.

Microphones, however, are a different story. We are still torn between embedding mics in our lovely stone exhibitry around the touch tanks or just going with the cameras’ built-in mics. The tradeoff: embedding means damaging the rock enclosure, but it would give clearer audio that isn’t garbled by the exhibit’s running water. We may be able to hang mics from the ceiling, but that testing will be left to those who follow. It’s less of a crucial point right now, however, as we don’t have any way to automate audio processing.

Software development for facial recognition is progressing: our Media Macros contractors are heading to training on the new system they are building into our overall video analysis package. Hopefully we’ll have that in testing this next school year.

Eye-tracking is mostly ironed out, too. We have a couple more issues to figure out around tracking on the Magic Planet in particular, but otherwise even the stand-alone tracking is ready to go, and I have trained a couple of folks on how to run studies. Between that and the manuals I compiled, hopefully that work can continue without much lag, and certainly without as much learning time as it took me to work out the kinks.

Exhibit-wise, the wave tanks are all installed and getting put through their paces by the influx of end-of-year school groups; they may even be starting to leak a little as the wear and tear kicks in. We are re-conceptualizing the climate change exhibit, and we haven’t yet started planning the remodel of the remote-sensing exhibit room and Magic Planet. Those two should be up for real progress this year, too.

Beyond that, with IRB approval for the main video system due any day, we should be very close to collecting research data. We’ve planned a list of things to look at for each of the questions in the grant, and there are pieces the new researcher can start on right away: ground-truthing the use of video observations to study exhibits, as well as answering questions about the build-and-test nature of the tsunami wave tank. We have also outlined a brief plan for managing the data, as I mentioned a couple of posts ago.

That makes this my last post as research assistant for the lab. Stay tuned; you’re guaranteed to hear from the new team soon. You might even hear from me as I go forth and test using the cameras from the other side of the country!


With IRB approval “just around the corner” (ha!), I’ve been making sure everything is in place so I can hit the ground running once I get the final approval.  That means checking back over my selection criteria for potential interviewees.  For anyone who doesn’t remember, I’m doing phone interviews with COASST citizen science volunteers to see how they describe science, resource management, and their role in each.


I had originally hoped to do some fancy cluster analyses to group people using the big pile of volunteer survey data I have. How were people answering survey questions? Does it depend on how long they’ve been involved in the program, or how many birds they’ve identified? … Nope. As far as I could tell, there were no patterns relevant to my research interests.
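For anyone curious, the kind of analysis I was attempting looks roughly like the sketch below. The file name and columns (years_in_program, birds_identified) are hypothetical stand-ins, not the actual survey fields:

```python
# Sketch of grouping volunteers with k-means; the file and column
# names are hypothetical placeholders, not the real 2012 survey data.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

survey = pd.read_csv("coasst_volunteer_survey.csv")  # placeholder file
features = survey[["years_in_program", "birds_identified"]].dropna().copy()

# Put tenure (years) and counts (birds) on a comparable scale first.
X = StandardScaler().fit_transform(features)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
features["cluster"] = kmeans.labels_

# Inspect cluster means to see whether the groups are interpretable;
# in my case, no pattern relevant to the research questions emerged.
print(features.groupby("cluster").mean())
```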


After a lot of digging through the survey data, I felt like I was back at square one. Shawn asked me, “Based on what you’re interested in, what information would you NEED to be able to sort people?” My interview questions focus on people’s definitions of science and resource management, and their descriptions of their role in COASST, science, and resource management. I expect their responses have a lot to do with their worldview, their experience with science, and what they think about the role of science in society. Unfortunately, those questions were not included in the 2012 COASST volunteer survey.


As is so often the case, what I need and what I have are two different things. Looking through what I do have, several survey questions are at least somewhat related to my research interest. I’ve struggled with determining which questions are the most relevant. Or I should say, I’ve struggled with making sure I’m not creating arbitrary groupings of volunteers and expecting them to hold through the analysis phase of my project.


This process of selecting interviewees from survey responses makes me excited to create my own surveys in the future! That way I could ask questions specifically designed to help me create groupings. Until then, I’m making do with what I have!

While we don’t yet have the formal guest researcher program up and running, we did have a visit from our collaborator Jarrett Geenan this week. He’s working with Sigrid Norris on multimodal discourse analysis, and he was in the U.S. for an applied linguistics conference, so he “stopped by” the Pacific Northwest on his way back from Dallas to New Zealand. It turns out his undergraduate and graduate work in English and linguistics is remarkably similar to Shawn’s. Several of the grad students working with Shawn managed to have lunch with him last week and talk about our different research projects and about life as a grad student in the States vs. Canada (where he’s from), England (Laura’s homeland), and New Zealand.

We also had a chance to chat about the video cameras. He’s still having difficulty downloading anything useful, as things just come in fits and starts. We’re not sure of the best way to diagnose the issues (barring a trip for one of us to be there in person), but maybe we can get the Milestone folks on a screenshare or something. In the meantime, it led us to a discussion of what might be a larger issue: collecting data all the time and overtaxing the system unnecessarily. It came up with the school groups: is it really that important to have the cameras on constantly to get a proper, useful longitudinal record? We’re starting to think not, and the problems Jarrett is having make it more likely that we’ll simply turn the cameras on only when the VC is open, using a scheduling function.
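As a rough sketch of the scheduling idea (Milestone has its own scheduling tools, so this is only the logic in miniature; the opening hours and the set_recording stub are hypothetical):

```python
# Sketch of recording only during Visitor Center open hours. The hours
# and the set_recording() stub are placeholders; in practice this would
# be configured through Milestone's own scheduling tools.
from datetime import datetime, time

OPEN, CLOSE = time(10, 0), time(17, 0)  # hypothetical VC hours

def vc_is_open(now: datetime) -> bool:
    return OPEN <= now.time() < CLOSE

def set_recording(enabled: bool) -> None:
    # Stand-in for the real server-side toggle.
    print(f"Cameras {'recording' if enabled else 'idle'}")

# Run periodically (e.g. from a cron job) to keep state in sync.
set_recording(enabled=vc_is_open(datetime.now()))
```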

The other advantage is that this would give us 16-18 hours a day to actually process the video data, if we can arrange things so that the automated analysis needed to customize exhibits runs in real time. That would leave everything else, such as group association, speech analysis, and the other higher-order work, for overnight processing. We’ll have to work with our programmers to see about that.
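Here’s a rough sketch of how that two-tier split might be organized; the analysis functions are placeholders for whatever the contractors actually deliver:

```python
# Sketch of a two-tier pipeline: light analysis in real time while the
# cameras run, heavier passes deferred to the overnight window. Both
# analysis functions are hypothetical placeholders.
from collections import deque

overnight_queue = deque()

def fast_pass(clip: str) -> None:
    # e.g. the detection needed to customize exhibits on the spot
    print(f"real-time pass: {clip}")

def heavy_pass(clip: str) -> None:
    # e.g. group association, speech analysis, other higher-order work
    print(f"overnight pass: {clip}")

def on_new_clip(clip: str) -> None:
    fast_pass(clip)               # must keep up with the live feed
    overnight_queue.append(clip)  # defer everything else

def run_overnight_batch() -> None:
    # Runs during the 16-18 hours a day the cameras would be off.
    while overnight_queue:
        heavy_pass(overnight_queue.popleft())

on_new_clip("exhibit_cam_clip_001")
run_overnight_batch()
```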

In other news, it’s looking highly likely that I’ll be doing my own research on the system when I graduate later this spring, so hopefully I’ll be able to provide that insider perspective, having worked on it (extensively!) in person at Hatfield and then going away to finish the research at my (new) home institution. That, and Jarrett’s visit in person, may be the kick-start we need to really get this into shape for new short-term visiting scholars.