I have been sitting in front of the computer today searching for creative ways to install potent microphones and camouflage them among the rocks of our live animal touch-tanks. Cyberlab cameras are up and running, and we have great views of families’ interactions from many angles of our touch-tank exhibit. Once captured through our data collection tools, the families’ discourse can give researchers invaluable data about visitors’ learning experiences, meaning making, and social interactions at the exhibit and within their groups. This is important data not only for evaluation purposes but also for learning research purposes, as we strive to conceptualize learning in informal settings and contextualize its occurrence within new theoretical frameworks that pay attention to contemporary mediating tools.

The problem we run into at the touch-tanks while trying to collect rich data is, of course, audio capture. Often, many people use the exhibits all at once, talking among themselves and with the staff volunteers. There are lots of social exchanges between visiting groups, lots of excitement as people touch the animals on display, background water noise, and all sorts of other sounds woven into this rich experience. The camera mics cannot clearly capture all the various dialogues; therefore, we are starting the process of installing new mics at the few access points of the touch tanks so that rich audio can accompany rich video data.

We will be working on installations in the next few weeks, and as soon as my IRB approval comes through (fingers crossed), I will hit the ground running with my own research, which will use the audio and video systems we have in place to collect data for discourse analysis of family interactions at the tanks and their links to conservation dialogue. I will be recruiting families and working with them in a set of four activities at the touch tanks, collecting data through video observations, interviews, and focus groups. I can’t wait to start, but first I need to dive into the team’s creative work of installing these wonderful mics.

I will post a blog in the next few weeks with photos and updates on the process, and maybe your creative input will come in handy 🙂

As I wind down the first year of my Master’s program, I have had a chance to reflect on the different accomplishments achieved within the Cyberlab, the classroom, and professionally.  I have worn many hats beyond the typical “grad student” role.  For example, I have been a server administrator, sound engineer, exhibit maintenance support, logistics manager, and lab ambassador, to name a few.  So many different opportunities have led to new learning experiences that I had not anticipated.  As there is no manual for setting up a “Cyberlab,” I feel I have much more insight now to share with other groups that may attempt this kind of learning-research setup at their own institutions.

As of this week, 30 cameras have been installed around multiple exhibits to capture interactions and movement.  We now have great views of the octopus tank, the touch pools, wave tanks, the touchtable, touchwall, and Magic Planet.  The image included in this post is an example of one such view in our Rhythms Room.  Several cameras can be used to monitor the traffic flow and patterns as visitors circulate through the center.  Our BlackFly and Flea (facial recognition) cameras recently came in, which creates unique challenges in mounting these small pieces of technology.  We have enlisted the support of an engineer with access to a 3-D printer that can be used to custom-build mounts to our needs.  We hope to have these cameras installed within the next few weeks to begin testing the facial recognition capabilities.  More progress with each passing day.


One of the Cyberlab cameras captures the Rhythms Room at Hatfield Marine Science Center.

Today I am heading to St. Paul, Minnesota, for the Science on a Sphere workshop at the Minnesota Science Museum.  As we have the Magic Planet exhibit (pictured above), a globe that displays different visualizations of environmental processes, this will be a chance to connect with other institutions that have this form of exhibit in a public space and talk about its use and the direction of this technology.  I am excited for the chance to help represent the Cyberlab and showcase what is in place at Hatfield Marine Science Center to support other researchers around the country and world.  Hopefully we will meet some potential collaborators and new Cyberscholars.  I am also looking forward to visiting a science museum I have not been to before.  My perspective on museums has changed, meaning that I often take a step back to analyze an exhibit and the interactions taking place around it.  I need to remind myself to also be a “visitor,” as I will be wearing my researcher “hat” plenty this summer!

Spring Quarter is now upon us and with that there is plenty of “spring cleaning” to get done in the Cyberlab prior to the surge of visitors to Newport over the summer months.  For a free-choice learning geek like me, this period of data collection will be exciting as I work on my research for my graduate program.

The monitoring and maintenance of the audio and video recording devices continues!  Working with this technology is a great opportunity to troubleshoot and consider effective placement around exhibits.  I am getting more practice with camera installation and with ensuring that data is being recorded and archived on our servers.  We are also thinking about how we can rapidly deploy cameras for guest researchers based on their project needs.  If other museums, aquariums, or science centers consider a similar method to collect audio and video data, I know we can offer insight as we continue to try things and re-adjust.  At this point I don’t take these collection methods for granted!  Reading through published visitor research projects, I noticed how much consideration went into minimizing the effect of an observer or a large camera recording nearby, and how that presence influenced behavior.  Now cameras are smaller and can be mounted in ways that blend in with the surroundings.  This helps us see more natural behaviors as people explore the exhibits.  This is important to me because I will be using the audio and video equipment to look for patterns of behavior around the multi-touch interactive tabletop exhibit.

Based on comments from our volunteers, the touchtable has received a lot of attention from visitors.  At this time we have a couple of different programs installed on the table.  One program from Open Exhibits has content about the electromagnetic spectrum: users can drag an image of an object through the different sections of the spectrum, including infrared, visible, ultraviolet, and x-ray, and the program provides information about each category.  Another program is called Valcamonica, which has puzzles and content about prehistoric petroglyphs found in Northern Italy.  I am curious about the conversations people are having around the table and whether they are verbalizing the content they see or how to use the technology.  If there are different ages within the group, is someone taking the role of “expert” on how to use it?  Are they modeling and showing others how to navigate through the software?  Are visitors also spending time at other exhibits near the table?  There are live animal exhibits within 15 feet of the table; are they getting attention?  I am thinking about all of these questions as I design the research project I will conduct this summer.  Which means…time to get back to work!

The touch table and touch wall have been in the visitor center for about a month, and it has been fascinating to watch the reaction to this technology.  Countless visitors have interacted with the Open Exhibits software displaying different science content and seem to have an interest in what this tool does.  Touch surfaces have become common on smartphones and tablets, but seeing one the size of a coffee table is unique.  I started considering the ages of the users and their behavior toward this object.  For children and young adults, the touch technology is likely more familiar.  They were immediately drawn to it and appeared to have an idea of what types of gestures would allow image manipulation.

This week NPR had a feature on kids growing up with mobile technology, with some considering them a “touch screen generation”.  One story included information about the amount of time children use touch surfaces such as smartphones and tablets.  The concept of “passive” screen time versus “active” screen time and the influence on baby and toddler development piqued my interest.  Passive screen time is comparable to scrolling through photos, whereas active screen time is social and requires more focused engagement.  Georgene Troseth, a developmental psychologist at Vanderbilt University, claims that a program like Skype allows for active social interaction, even if through a screen, and can help babies learn.  What could active screen time mean for learning about concepts such as science in a museum or aquarium setting?

The touch table and touch wall do allow for individual exploration and social engagement.  People walk up and investigate on their own, and then call their friends or family over.  Some users would initially discuss the technology and then the content of the software.  From limited observations, I noticed that some were commenting on “how cool” the touch table was and then reading the science content out loud to those around them.  Some users verbalized connections between the content and other personal experiences they have had.  The social element seems to happen naturally.  The challenge is creating dynamic and interactive software that can be a tool to supplement learning even if the stay time at the exhibit is brief.

Members of the Cyberlab were busy this week.  We set up the multi-touch table and touch wall in the Visitors Center and hosted Kate Haley Goldman as a guest researcher.  In preparation for her visit, we modified camera and table placement, tinkered with microphones, and tested the data collection pieces by looking at the video playback.  It was a great opportunity to evaluate our lab setup for other incoming researchers and their data collection needs, and to try things live with the Ideum technology!

Kate traveled from Washington D.C. to collect data on the interactive content by Open Exhibits displayed on our table.  As the Principal of Audience Viewpoints, Kate conducts research on audiences and learning in museums and informal learning centers.  She is investigating the use of multi-touch technology in these settings, and we are thankful for her insight as we implement this exhibit format at Hatfield Marine Science Center.

Watching the video playback of visitor interactions with Kate was fascinating.  We discussed flow patterns around the room based on table placement.  We looked at the amount of stay time at the table depending on program content.  As the day progressed, more questions came up.  How long were visitors staying at the other exhibits, which have live animals, versus the table placed nearby?  While they were moving about the room, would visitors return to the table multiple times?  What were the demographics of the users?  Were they bringing their social group with them?  What were the users talking about?  Was it the technology itself or the content on the table?  Was the technology intuitive to use?

I felt the thrill of the research process this weekend.  It was a wonderful opportunity to “observe the observer” and witness Kate in action.  I enjoyed seeing visitor use of the table and thinking about the interactions between humans and technology.  How effective is it to present science concepts in this format, and are users learning something?  I will reflect on this experience as I design my research project around science learning and the use of multi-touch technology in an informal learning environment such as Hatfield Marine Science Center.

Last week I returned a few purchased Cyberlab cameras to the store.  Some had already been taken off the exhibits, and a couple of others were just removed from the computer kiosks at the wave laboratory. Apparently they were not working well, as images were coming through very blurry.

I wonder how much of the problem had to do with visitor interactions…WAIT…everything at a visitor center has to do with visitor interactions, doesn’t it? The shape of the little camera struck me as very inviting to the oily digits exploring the visitor center every day. We all know visitors love to push buttons, so what happens when a camera placed at eye level at a computer kiosk looks like a button? … CORRECT, it gets pushed, and pushed many times, and the finger oils get transferred to the lenses (that is a possibility). I can only imagine the puzzled looks of visitors waiting for something to happen: what would the “button” activate?

It didn’t activate anything but a little frustration on our prototyping side, as we continue to seek optimal interfaces to obtain great-quality video for our learning research goals while maintaining the aesthetically pleasing character of the exhibits. Jenny East, Megan Kleibacker, Mark Farley and I walked around the visitor center to evaluate how many more cameras we need to buy and install, keeping the old, new, and upcoming exhibits in mind. How many more cameras to buy, and what type, depends on the possible locations for hook-ups, the surfaces available for mounting, and the angles we need to capture images from.  Below is a VC camera map and a screen capture of the camera overview to give a better idea.

HMSC VC Camera Map

Screen Shot

While this is all a challenge to figure out, a bigger challenge is to find and/or create mounting mechanisms that are safe and look good: camera encasing systems that minimize visitor touch and prevent any physical contact with the lenses. These will probably have to be custom built to fit each particular mounting location; at least that would be ideal.  But how do we make them functional? How do we make them blend in with the exhibits and remain aesthetically pleasing at the same time? It may seem easy to think about but not so easy to accomplish, at least not if you don’t have all the money in the world, and certainly not at the push of a button.

Nevertheless, with “patience in the process” as Jenny talked about in her blog last week, as well as practicing some “hard thinking” as Shawn discussed a few blogs ago, we will keep evolving through our camera set up, pushing all of the buttons technology allows us to push while working collaboratively to optimize the ways in which we can collect good data in the saga of understanding what really pushes the visitors’ curiosity buttons… towards ocean sciences.