Spring Quarter is now upon us, and with it comes plenty of “spring cleaning” to get done in the Cyberlab before the surge of visitors to Newport over the summer months.  For a free-choice learning geek like me, this period of data collection will be exciting as I work on my research for my graduate program.

The monitoring and maintenance of the audio and video recording devices continues!  Working with this technology is a great opportunity to troubleshoot and consider effective placement around exhibits.  I am getting more practice with camera installation and ensuring that data is being recorded and archived on our servers.  We are also thinking about how we can rapidly deploy cameras for guest researchers based on their project needs.  If other museums, aquariums, or science centers consider a similar method to collect audio and video data, I know we can offer insight as we continue to try things and re-adjust.  At this point I don’t take these collection methods for granted!  Reading through published visitor research projects, I noticed how much consideration went into minimizing the effect of an observer or a large camera recording nearby, and how that presence influenced behavior.  Now cameras are smaller and can be mounted in ways that blend in with the surroundings.  This helps us see more natural behaviors as people explore the exhibits.  This is important to me because I will be using the audio and video equipment to look for patterns of behavior around the multi-touch interactive tabletop exhibit.

Based on comments from our volunteers, the touchtable has received a lot of attention from visitors.  At this time we have a couple of different programs installed on the table.  One program from Open Exhibits has content about the electromagnetic spectrum: users can drag an image of an object through the different sections of the spectrum, including infrared, visible, ultraviolet, and x-ray, with information provided about each category.  Another program is called Valcamonica, which has puzzles and content about prehistoric petroglyphs found in Northern Italy.  I am curious about the conversations people are having around the table and whether they are verbalizing the content they see or how to use the technology.  If there are different ages within the group, is someone taking on the role of “expert” on how to use it?  Are they modeling and showing others how to navigate through the software?  Are visitors also spending time at other exhibits near the table?  There are live animal exhibits within 15 feet of the table; are they getting attention too?  I am thinking about all of these questions as I design my research project, which will be conducted this summer.  Which means…time to get back to work!

Last week I returned a few purchased Cyberlab cameras to the store.  Some had already been taken off the exhibits, and a couple of others were just removed from the computer kiosks at the wave laboratory.  Apparently they were not working well, as images were coming through very blurry.

I wonder how much of the problem had to do with visitor interactions…WAIT…everything at a visitor center has to do with visitor interactions, doesn’t it?  The shape of the little camera struck me as very inviting to the oily digits exploring the visitor center every day.  We all know visitors love to push buttons, so what happens when a camera placed at eye level at a computer kiosk looks like a button? … CORRECT, it gets pushed, and pushed many times, and the finger oils get transferred to the lenses (that is a possibility).  I can only imagine the puzzled looks of visitors waiting for something to happen.  What would the “button” activate?

It didn’t activate anything but a little frustration on our prototyping side, as we continue to seek optimal interfaces that capture great quality video for our learning research goals while maintaining the aesthetically pleasing character of the exhibits.  Jenny East, Megan Kleibacker, Mark Farley and I walked around the visitor center to evaluate how many more cameras we need to buy and install, keeping the old, new, and upcoming exhibits in mind.  How many more cameras to buy, and of what type, depended on the possible locations for hook-ups, the surfaces available for mounting, and the angles we need to capture images from.  Below is a VC camera map and a screen capture of the camera overview to give a better idea.

HMSC VC Camera Map

Screen Shot

While this is all a challenge to figure out, a bigger challenge is to find and/or create mounting mechanisms that are safe and look good: camera enclosures that minimize visitor touch and prevent any physical contact with the lenses.  These will probably have to be custom built to fit each particular mounting location; at least, that would be ideal.  But how do we make it functional?  How do we make it blend within the exhibits and be aesthetically pleasing at the same time?  It may seem easy to think about but not so easy to accomplish, at least not if you don’t have all the money in the world, and certainly not at the push of a button.

Nevertheless, with “patience in the process,” as Jenny talked about in her blog last week, as well as practicing some “hard thinking,” as Shawn discussed a few blogs ago, we will keep evolving our camera setup, pushing all of the buttons technology allows us to push while working collaboratively to optimize the ways in which we can collect good data in the saga of understanding what really pushes visitors’ curiosity buttons…towards ocean sciences.

I’ve plunged into the Free-Choice Learning Lab pool and now I am completely immersed in the world of cyberlearning!  As an incoming Marine Resource Management student, I am excited to support the efforts of Dr. Shawn Rowe and assist with the implementation of the Cyberlab at Hatfield Marine Science Center (HMSC).  My work will be focused on the multi-touch table research platform that Katie and Harrison have previously blogged about.  This unique technology will provide an incredible opportunity to explore cyberlearning in an informal science setting.

Cyberlearning was a new term for me, and the definition is still evolving among researchers, educators, and those in the technology field.  In 2008, the NSF Task Force on Cyberlearning initially defined the word as “the use of networked computing and communications technologies to support learning.”  A Cyberlearning Summit was held in January 2012, with 32 speakers giving TED-talk style presentations on topics that included digital learning using mobile technologies, collaborative knowledge-building through social networking, and scientific inquiry through online gameplay.  It was apparent how excited and passionate these speakers were about sharing their work and encouraging new methods for learning opportunities in different educational settings.

Blending emergent technology and educational content has sparked my imagination.  What could be possible for HMSC as a cyberlearning location?  It would be incredible to walk up to an exhibit and have the content personalized to my interests based on data collected from previous visits.  Is it possible for the exhibit to know that I was fascinated by the life in the intertidal zone (based on my manual inputs or eye-tracking), and then present additional knowledge through an interactive game?  This game could simulate a tide pool, and I would need to apply what I had previously learned to keep a digital sea creature avatar alive.  Then I could share my sea creature’s experience with my friends on social networking sites…hmmm.  So many research questions could come from this.  Exciting days are ahead!

Having more time to do research, of course!  With the pressures and schedules of classes over, students everywhere are turning to a dedicated stretch of research work, whether on their own theses and dissertations, paid research jobs, or internships.  That means, with Laura and me graduating, there should be a new student taking over the Cyberlab duties soon.  However, the other thing summer means is the final push to nail down funding for the fall, and thus our replacement is not yet actually identified.

In the meantime, though, Laura and I have managed to do a pretty thorough soup-to-nuts inventory of the lab’s progress over the last couple of years for the next researchers to hopefully pick up and run with:

Technology: Cameras are pretty much in and running smoothly.  Laura and I have worked a lot of the glitches out, and I think we have the installation down to a relatively smooth system: placing a camera, aligning it, and installing it physically, then setting it up on the servers and getting it set for everyone’s use.  I’ve written a manual that I think spells out the process start to finish.  We’ve also got expanded network capability coming in the form of our own switch, which should help traffic.
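For whoever inherits the install process, here is a minimal sketch of the kind of post-install sanity check the manual describes, assuming the cameras expose standard RTSP streams (the camera names and addresses below are placeholders, not our actual network details):

```python
# Post-install check: confirm each newly mounted camera actually
# delivers frames before calling the installation finished.
import cv2  # pip install opencv-python

# Hypothetical camera names and stream URLs, for illustration only.
CAMERAS = {
    "touch-tank-north": "rtsp://192.168.0.21/stream1",
    "wave-lab-kiosk": "rtsp://192.168.0.22/stream1",
}

def camera_is_live(url, frames_to_check=5):
    """Open the stream and try to grab a handful of frames."""
    cap = cv2.VideoCapture(url)
    try:
        if not cap.isOpened():
            return False
        return all(cap.read()[0] for _ in range(frames_to_check))
    finally:
        cap.release()

for name, url in CAMERAS.items():
    status = "OK" if camera_is_live(url) else "NO SIGNAL, recheck mount/cabling"
    print(f"{name}: {status}")
```

A quick pass like this after each mounting session catches cabling and alignment problems before anyone tries to pull research footage off the servers.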

Microphones, however, are a different story.  We are still torn between embedding mics in our lovely stone exhibitry around the touch tanks or just going with what the cameras pick up with their built-in mics.  The tradeoff: embedding mics means damaging the rock enclosure, but it would give us clearer audio, not garbled by the running water of the exhibit.  We may be able to hang mics from the ceiling, but that testing will be left to those who follow.  It’s less of a crucial point right now, however, as we don’t have any way to automate audio processing.

Software development for facial recognition is progressing as our Media Macros contractors are heading to training on the new system they are building into our overall video analysis package. Hopefully we’ll have that in testing this next school year.

Eye-tracking is nearly ironed out, too.  We have a couple more issues to figure out around tracking on the Magic Planet in particular, but otherwise even the stand-alone tracking is ready to go, and I have trained a couple of folks on how to run studies.  Between that and the manuals I compiled, hopefully that work can continue without much lag, and certainly without as much learning time as it took me to work out a lot of kinks.

Exhibit-wise, the wave tanks are all installed and getting put through their paces with the influx of end-of-year school groups.  They may even be starting to leak a little as the wear and tear kicks in.  We are re-conceptualizing the climate change exhibit and haven’t yet started planning the remodeling of the remote-sensing exhibit room and Magic Planet.  Those two should be up for real progress this year, too.

Beyond that, pending IRB approval, due any day, for the main video system, we should be very close to collecting research data.  We drew up a list of things we need to look at for each of the questions in the grant, and there are pieces the new researcher can get started on right away: ground-truthing the use of video observations to study exhibits, as well as answering questions about the build-and-test nature of the tsunami wave tank.  We have also outlined a brief plan for managing the data, as I mentioned a couple of posts ago.

That makes this my last post as research assistant for the lab. Stay tuned; you’re guaranteed to hear from the new team soon. You might even hear from me as I go forth and test using the cameras from the other side of the country!

While we don’t yet have the formal guest researcher program up and running, we did have a visit from our collaborator Jarrett Geenan this week.  He’s working with Sigrid Norris on multimodal discourse analysis, and he was in the U.S. for an applied linguistics conference, so he “stopped by” the Pacific Northwest on his way back from Dallas to New Zealand.  It turns out his undergraduate and graduate work so far in English and linguistics is remarkably similar to Shawn’s.  Several of the grad students working with Shawn managed to have lunch with him last week and talk about our different research projects, and about life as a grad student in the States vs. Canada (where he’s from), England (Laura’s homeland), and New Zealand.

We also had a chance to chat about the video cameras.  He’s still been having difficulty downloading anything useful, as things just come in fits and starts.  We’re not sure of the best way to go about diagnosing the issues (barring a trip for one of us to be there in person), but maybe we can get the Milestone folks on a screenshare or something.  In the meantime, it led us to a discussion of what might be a larger issue: that of just collecting data all the time and overtaxing the system unnecessarily.  It came up with the school groups – is it really that important to have the cameras on constantly to get a proper, useful longitudinal record?  We’re starting to think no, and the problems Jarrett is having make it more likely that we will look at turning the cameras on only when the VC is open, using a scheduling function.
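To make the idea concrete, here is a rough sketch of that open-hours logic.  It is only a sketch: the set_recording function below is a stand-in for whatever hook the Milestone software actually exposes (not a real Milestone call), and the visitor center hours are assumed.

```python
# Sketch: record only while the visitor center is open.
from datetime import datetime, time

OPEN, CLOSE = time(10, 0), time(17, 0)  # assumed VC hours

def should_record(now=None):
    """True if the current time falls within open hours."""
    current = (now or datetime.now()).time()
    return OPEN <= current < CLOSE

def set_recording(enabled):
    # Stand-in for the video management system's real
    # scheduling/recording interface.
    print("recording ON" if enabled else "recording OFF")

if __name__ == "__main__":
    set_recording(should_record())
```

Run from a scheduler (say, every few minutes via cron), something like this would toggle recording at open and close without anyone having to remember to do it.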

The other advantage is that this would give us 16-18 hours a day to actually process the video data, if we can arrange things so that the automated analysis needed to customize exhibits is done in real time.  That would leave everything else, such as group association, speech analysis, and the other higher-order stuff, for the overnight processing.  We’ll have to work with our programmers to see about that.
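Roughly, the split I’m imagining looks like the sketch below.  The task names and the two-queue setup are illustrative assumptions, not our actual pipeline:

```python
# Sketch: route each analysis job to a real-time tier (needed while
# visitors are in the building) or an overnight tier (heavier jobs).
from queue import Queue

REALTIME_TASKS = {"face_detection", "visitor_count"}  # assumed names
OVERNIGHT_TASKS = {"group_association", "speech_analysis"}

realtime_q, overnight_q = Queue(), Queue()

def route(task, clip_id):
    """Send a job to whichever tier can afford its processing cost."""
    target = realtime_q if task in REALTIME_TASKS else overnight_q
    target.put((task, clip_id))

for task in ("face_detection", "group_association", "speech_analysis"):
    route(task, clip_id="cam03-2013-06-04")

print(f"real-time: {realtime_q.qsize()} job(s), overnight: {overnight_q.qsize()} job(s)")
```

The point of the split is simply that anything driving exhibit customization has to keep up with the live camera feeds, while the heavier analyses can soak up the 16-18 idle hours.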

In other news, it’s looking highly likely that I’ll be doing my own research on the system when I graduate later this spring, so hopefully I’ll be able to provide that insider perspective, having worked on it (extensively!) in person at Hatfield and then going away to finish up the research at my (new) home institution.  That, and Jarrett’s visit in person, may be the kick-start we need to really get this into shape for new short-term visiting scholars.