Members of the Cyberlab were busy this week. We set up the multi-touch table and touch wall in the Visitor Center and hosted Kate Haley Goldman as a guest researcher. In preparation for her visit, we adjusted camera and table placement, tinkered with microphones, and tested the data-collection pieces by reviewing the video playback. It was a great opportunity to evaluate our lab setup for other incoming researchers and their data-collection needs, and to try things live with Ideum's technology!

Kate traveled from Washington, D.C. to collect data on the interactive Open Exhibits content displayed on our table. As the Principal of Audience Viewpoints, Kate conducts research on audiences and learning in museums and informal learning centers. She is investigating the use of multi-touch technology in these settings, and we are thankful for her insight as we implement this exhibit format at Hatfield Marine Science Center.

Watching the video playback of visitor interactions with Kate was fascinating. We discussed flow patterns around the room based on table placement. We looked at stay time at the table depending on program content. As the day progressed, more questions came up. How long were visitors staying at the other exhibits, which have live animals, versus at the table placed nearby? While they were moving about the room, would visitors return to the table multiple times? What were the demographics of the users? Were they bringing their social group with them? What were the users talking about – the technology itself, or the content on the table? Was the technology intuitive to use?

I felt the thrill of the research process this weekend. It was a wonderful opportunity to “observe the observer” and witness Kate in action. I enjoyed seeing visitors use the table and thinking about the interactions between humans and technology. How effective is it to present science concepts in this format, and are users learning something? I will reflect on this experience as I design my research project around science learning and the use of multi-touch technology in an informal learning environment such as Hatfield Marine Science Center.

Having more time to do research, of course! With the pressures and schedules of classes over, students everywhere are turning to a dedicated stretch of research work, whether on their own theses and dissertations, in paid research jobs, or in internships. That means, with Laura and me graduating, there should be a new student taking over the Cyberlab duties soon. However, summer also means the final push to nail down funding for the fall, and thus our replacement has not yet actually been identified.

In the meantime, though, Laura and I have managed to do a pretty thorough soup-to-nuts inventory of the lab’s progress over the last couple of years for the next researchers to hopefully pick up and run with:

Technology: The cameras are pretty much in and running smoothly. Laura and I have worked out a lot of the glitches, and I think we have the installation down to a relatively smooth system: placing a camera, aligning it, and mounting it physically, then setting it up on the servers and getting it ready for everyone’s use. I’ve written a manual that I think spells out the process start to finish. We’ve also got expanded network capability coming in the form of our own switch, which should help traffic.

Microphones, however, are a different story. We are still torn between installing mics in our lovely stone exhibitry around the touch tanks or just going with what the cameras pick up with their built-in mics. The tradeoff is between damaging the rock enclosure and having clearer audio not garbled by the running water of the exhibit. We may be able to hang mics from the ceiling, but that testing will be left to those who follow. It’s less of a crucial point right now, however, as we don’t have any way to automate audio processing.

Software development for facial recognition is progressing as our Media Macros contractors are heading to training on the new system they are building into our overall video analysis package. Hopefully we’ll have that in testing this next school year.

Eye-tracking is pretty well ironed out, too. We have a couple more issues to figure out around tracking on the Magic Planet in particular, but otherwise even the stand-alone tracking is ready to go, and I have trained a couple of folks on how to run studies. Between that and the manuals I compiled, hopefully that work can continue without much lag, and certainly without as much learning time as it took me to work out all the kinks.

Exhibit-wise, the wave tanks are all installed and getting put through their paces with the influx of end-of-year school groups; they may even be starting to leak a little as the wear and tear kicks in. We are re-conceptualizing the climate change exhibit and haven’t yet started planning the remodel of the remote-sensing exhibit room and Magic Planet. Those two should be up for real progress this year, too.

Beyond that, with IRB approval for the main video system due any day, we should be very close to collecting research data. We planned a list of things we need to look at for each of the questions in the grant, and there are pieces the new researcher can get started on right away: ground-truthing the use of video observations to study exhibits, and answering questions about the build-and-test nature of the tsunami wave tank. We have also outlined a brief plan for managing the data, as I mentioned a couple of posts ago.

That makes this my last post as research assistant for the lab. Stay tuned; you’re guaranteed to hear from the new team soon. You might even hear from me as I go forth and test using the cameras from the other side of the country!


Just a few quick updates on this holiday about how the lab is progressing.

-We’re re-thinking our microphone options for the touch tanks. We’re reluctant to drill into our permanent structure to run wires, so we’re back to considering whether the in-camera microphones will be sufficient or whether we can put in wireless mics. With the placement of the cameras to get a wide angle for the interactions and the loud running water, the in-camera mics will probably be too far away for clear audio pickup, but the wireless mics require their own receivers and audio channels. The number of mics we’d want to install could rapidly exceed the amount of frequency space available. Oh, and there’s the whole splashing water issue – mics are not generally waterproof.

-We finally got a lot more internet bandwidth installed, but now we have to wait for on-campus telecommunications to install a switch. We’re creeping ever closer … and once that’s done, we can hopefully re-up the frame rate on our cameras. Hopefully we’ll also be able to export footage more easily, especially remotely. I’ll be testing this out myself as part of my new job and future research.
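For anyone wondering why the switch and bandwidth matter so much, here’s a back-of-envelope sketch of the arithmetic involved. The camera count and per-camera bitrate below are illustrative assumptions, not our actual specs:

```python
# Rough network budget for concurrent camera streams.
# Numbers are hypothetical, for illustration only.

def aggregate_mbps(n_cameras, mbps_per_camera):
    """Total sustained bandwidth when all cameras stream at once."""
    return n_cameras * mbps_per_camera

# E.g. 20 cameras at ~4 Mbps each (a plausible H.264 stream at a
# decent frame rate) would saturate a 100 Mbps link but sit
# comfortably on gigabit -- which is why the frame rate had to
# stay low until the new switch arrives.
print(aggregate_mbps(20, 4))  # 80 (Mbps)
```

The same arithmetic explains the remote-export problem: any footage download competes with the live streams for the same pipe.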

-I installed a NAS with five 2 TB hard drives that will probably be our backup system. It needed about 40 hours to configure itself, so next week we should hopefully be able to get it fully in place.
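For the curious, here’s roughly how the usable space on that box works out. I’m assuming a RAID 5-style layout (one drive’s worth of parity), which is a guess on my part rather than the confirmed configuration:

```python
# Usable capacity of a parity-based RAID array.
# RAID level is an assumption, not the NAS's confirmed setup.

def usable_tb(drives, drive_tb, parity_drives=1):
    """Capacity left after reserving `parity_drives` drives'
    worth of space for parity (RAID 5 reserves one, RAID 6 two)."""
    return (drives - parity_drives) * drive_tb

print(usable_tb(5, 2))     # 8 TB if it's RAID 5
print(usable_tb(5, 2, 2))  # 6 TB if it's RAID 6
```

Either way, that’s a lot of video backup, though at the data rates above it fills faster than you’d think.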

-We took the whole system down for about an hour to replace the UPS as the old one was just shot.

-We’re looking into scheduling to accommodate school groups that don’t give permission for taping, as well as evening events. This should be possible through the Milestone Management software, but it’s not something we’ve explored yet.

-Remote desktop access to the servers is next (hopefully). This is also waiting on the campus telecom network switch.

-We’re migrating our exhibit software from Flash to HTML5 so that it can be updated more easily and can incorporate the key/screen-press logging code.

Happy Memorial Day!

Yes, we failed to change the default password on the cameras we installed. Someone managed to get ahold of the IP addresses and guess the login and password. We escaped with only minor headaches: all that happened was that they uploaded a few “overlay” images that appeared on some of the camera feeds, along with a few text messages that seemed to be mostly warnings to us about cybersecurity.

The hacker did change a few of the camera passwords, so there were some from which we could not simply delete the images. This has meant various levels of hassle resetting the cameras to their defaults. For the white brick cameras, 30 seconds of holding a control button while the power cycles was sufficient; I didn’t even have to reset the IP address. For the dome cameras it’s a bit more complex, as the IP address has to be reset, and I wasn’t around for that part originally, so I’ll have to consult IT.

However, it makes us wonder about the wisdom of having even the camera views available on the web without a password, which we hadn’t realized was the case before. You’d have to have the IP address to reach a view, but our IP addresses are mostly sequential (depending on the day and which cameras are installed), so once you found one you could visit each of the others if you liked. There is an option to turn this off, however, which I have now gone through and switched, so that you need not only the IP address but also the username and password in order to even view the feed.
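If you ever want to audit your own cameras for this, a quick sketch like the one below walks a sequential range and checks whether each feed demands credentials. The IP range and the feed path here are made up for illustration; the real URL depends on your camera model:

```python
# Sketch of a "does this feed require a login?" audit.
# The base address and path are hypothetical examples.
import urllib.request
import urllib.error

def sequential_addresses(base, start, count):
    """Expand a base like '10.0.0' into a run of sequential IPs."""
    return [f"{base}.{start + i}" for i in range(count)]

def feed_requires_auth(ip, path="/view/index.shtml", timeout=3):
    """True if the camera at `ip` refuses an unauthenticated view."""
    try:
        urllib.request.urlopen(f"http://{ip}{path}", timeout=timeout)
        return False  # page loaded with no credentials: exposed!
    except urllib.error.HTTPError as err:
        return err.code == 401  # 401 Unauthorized is what we want
    except OSError:
        return True  # unreachable; nothing to see either way

# Usage (made-up range):
#   for ip in sequential_addresses("10.0.0", 101, 10):
#       print(ip, "locked" if feed_requires_auth(ip) else "OPEN!")
print(sequential_addresses("10.0.0", 101, 3))
```

The sequential addressing is exactly what made us nervous: guess one camera and you’ve effectively guessed them all.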

Moral of this part of the story? Explore the default settings and consider what they truly mean. Be a Nervous Nellie and a bit of a cynic, assume the worst so you can plan for it.

UPDATE 5/16/13: I couldn’t get the 3301 dome cameras reset despite following the unplug, hold control button, re-plug power sequence. Our IT specialist thinks the hacker may have actually reset the default password via the firmware, since they should have automatically reset themselves to the same IP addresses using DHCP. So those two cameras have been pulled and replaced while the hacked ones are off to the IT hospital for some sleuthing and probably a firmware reset as well. I’ll let you know what the resolution is.

While we don’t yet have the formal guest researcher program up and running, we did have a visit from our collaborator Jarrett Geenan this week. He’s working with Sigrid Norris on multimodal discourse analysis, and he was in the U.S. for an applied linguistics conference, so he “stopped by” the Pacific Northwest on his way back from Dallas to New Zealand. It turns out his undergraduate and graduate work so far in English and linguistics is remarkably similar to Shawn’s. Several of the grad students working with Shawn managed to have lunch with him last week and talk about our different research projects, as well as life as a grad student in the States vs. Canada (where he’s from), England (Laura’s homeland), and New Zealand.

We also had a chance to chat about the video cameras. He’s still having difficulty downloading anything useful, as footage just comes in fits and starts. We’re not sure of the best way to diagnose the issues (barring a trip for one of us to be there in person), but maybe we can get the Milestone folks on a screenshare or something. In the meantime, it led us to a discussion of what might be a larger issue: collecting data all the time and overtaxing the system unnecessarily. It came up with the school groups – is it really that important to have the cameras on constantly to get a proper, useful longitudinal record? We’re starting to think not, and the problems Jarrett is having make it more likely that we will use a scheduling function to turn the cameras on only when the VC is open.

The other advantage is that this would give us 16-18 hours a day to actually process the video data, if we can arrange things so that the automated analysis needed to customize exhibits runs in real time. That would leave everything else, such as group association, speech analysis, and the other higher-order work, for overnight processing. We’ll have to work with our programmers to see about that.
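As a rough sketch of what that scheduling logic might look like (the opening hours below are hypothetical, and the real schedule would live in the Milestone recording rules rather than our own code):

```python
# Toy model of "record only while the VC is open" scheduling.
# Hours are hypothetical; Milestone would hold the real schedule.
from datetime import time

OPEN, CLOSE = time(10, 0), time(16, 0)  # assumed visitor-center hours

def cameras_recording(now):
    """True while the VC is open and cameras should be capturing."""
    return OPEN <= now < CLOSE

def processing_hours():
    """Hours per day left over for offline (overnight) analysis."""
    return 24 - (CLOSE.hour - OPEN.hour)

print(cameras_recording(time(12, 30)))  # True: mid-day, VC open
print(cameras_recording(time(20, 0)))   # False: after close
print(processing_hours())               # 18 hours free for batch work
```

With a six-hour open day, that’s where the 16-18 hour processing window in the post comes from: everything outside opening hours is compute time rather than capture time.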

In other news, it’s looking highly likely that I’ll be working on the system for my own research when I graduate later this spring, so hopefully I’ll be able to provide that insider perspective, having worked on it (extensively!) in person at Hatfield and then going away to finish the research at my (new) home institution. That and Jarrett’s in-person visit may be the kick-start we need to really get this into shape for new short-term visiting scholars.