Yes, we failed to change the default password on the cameras we installed. Someone managed to get ahold of the IP addresses and guess the login and password. We escaped with only minor headaches: all that happened was that they uploaded a few "overlay" images that appeared on some of the camera feeds, along with a few text messages, mostly warnings to us about cybersecurity.

The hacker did change the passwords on a few of the cameras, so there were some from which we couldn't simply delete the images. This has meant varying levels of hassle to reset the cameras to their defaults. For the white brick cameras, holding a control button for 30 seconds while the power cycles was sufficient; I didn't even have to reset the IP address. The dome cameras are a bit more complex, as the IP address has to be reset as well, and I wasn't around for that part of the original setup, so I'll have to consult IT.

However, it makes us wonder about the wisdom of having even the camera views available on the web without a password, something we hadn't realized was the case before. You'd have to have the IP address to get to a view, but our IP addresses are mostly sequential (depending on the day and which cameras are installed), so once you found one you could visit each of the others if you liked. There is an option to turn this off, however, which I have now gone through and switched on each camera, so that you need not only the IP address but also the username and password in order to even view the feed.

Moral of this part of the story? Explore the default settings and consider what they truly mean. Be a Nervous Nellie and a bit of a cynic: assume the worst so you can plan for it.

UPDATE 5/16/13: I couldn't get the 3301 dome cameras to reset despite following the unplug, hold-the-control-button, re-plug power sequence. Our IT specialist thinks the hacker may have actually reset the default password via the firmware, since the cameras should otherwise have automatically come back up at the same IP addresses using DHCP. So those two cameras have been pulled and replaced, and the hacked ones are off to the IT hospital for some sleuthing and probably a firmware reset as well. I'll let you know what the resolution is.

While we don't yet have the formal guest researcher program up and running, we did have a visit from our collaborator Jarrett Geenan this week. He's working with Sigrid Norris on multimodal discourse analysis, and he was in the U.S. for an applied linguistics conference, so he "stopped by" the Pacific Northwest on his way back from Dallas to New Zealand. It turns out his undergraduate and graduate work so far in English and linguistics is remarkably similar to Shawn's. Several of the grad students working with Shawn managed to have lunch with him last week and talk about our different research projects, as well as life as a grad student in the States vs. Canada (where he's from), England (Laura's homeland), and New Zealand.

We also had a chance to chat about the video cameras. He's still having difficulty downloading anything useful, as the data comes through only in fits and starts. We're not sure of the best way to diagnose the issues (barring a trip for one of us to be there in person), but maybe we can get the Milestone folks on a screen share or something. In the meantime, it led us to a discussion of what might be a larger issue: collecting data all the time and overtaxing the system unnecessarily. It came up with the school groups, too: is it really that important to have the cameras on constantly to get a proper, useful longitudinal record? We're starting to think not, and the problems Jarrett is having make it more likely that we'll use a scheduling function to turn the cameras on only when the Visitor Center is open.
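For what it's worth, the scheduling logic itself is simple. Here's a minimal Python sketch of the idea: record only while the center is open. The open hours and the toggle_recording stub are hypothetical; in practice, the switching would be handled through Milestone's or the cameras' own scheduling settings.

```python
from datetime import datetime, time

# Hypothetical Visitor Center hours: weekday (0 = Monday) -> (open, close).
OPEN_HOURS = {
    0: (time(10, 0), time(17, 0)),
    1: (time(10, 0), time(17, 0)),
    2: (time(10, 0), time(17, 0)),
    3: (time(10, 0), time(17, 0)),
    4: (time(10, 0), time(17, 0)),
    5: (time(10, 0), time(18, 0)),
    6: (time(10, 0), time(18, 0)),
}

def should_record(now: datetime) -> bool:
    """True only while the Visitor Center is open."""
    hours = OPEN_HOURS.get(now.weekday())
    if hours is None:
        return False  # closed all day
    open_at, close_at = hours
    return open_at <= now.time() < close_at

def toggle_recording(record: bool) -> None:
    # Stub: a real version would flip recording in Milestone or on the cameras.
    print("recording ON" if record else "recording OFF")

if __name__ == "__main__":
    toggle_recording(should_record(datetime.now()))
```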

The other advantage is that this would give us something like 16-18 hours a day to actually process the video data, if we can arrange the pipeline so that the automated analysis needed to customize exhibits runs in real time. That would leave everything else, such as group association, speech analysis, and the other higher-order work, for overnight processing. We'll have to work with our programmers to see about that.
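To sketch what that split could look like (purely illustrative Python; the task names echo the ones above, but the functions and queue are hypothetical):

```python
import queue

# Heavier, higher-order jobs accumulate here during open hours.
overnight_jobs: queue.Queue = queue.Queue()

def process_frame_realtime(frame_id: int) -> None:
    """Lightweight analysis that keeps up with the live feed,
    e.g. whatever is needed to customize exhibits on the fly."""
    print(f"real-time analysis of frame {frame_id}")
    # Defer the expensive work until after closing.
    overnight_jobs.put(("group_association", frame_id))
    overnight_jobs.put(("speech_analysis", frame_id))

def drain_overnight_queue() -> None:
    """Run after closing, using the 16-18 idle hours."""
    while not overnight_jobs.empty():
        task, frame_id = overnight_jobs.get()
        print(f"overnight: {task} on frame {frame_id}")

# During open hours:
for frame in range(3):
    process_frame_realtime(frame)

# After closing:
drain_overnight_queue()
```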

In other news, it's looking highly likely that I'll be doing my own research on the system after I graduate later this spring, so hopefully I'll be able to provide that insider perspective, having worked on it (extensively!) in person at Hatfield before going away to finish the research at my (new) home institution. That and Jarrett's in-person visit may be the kick-start we need to really get this into shape for new short-term visiting scholars.

A while ago, I promised to share some of my experiences collecting data on visitors' exhibit use as part of this blog. Now that I've been back at it for the past few weeks, I thought it might be time to actually share what I've found. As it is winter here in the northern hemisphere, our weekend visitation to the Hatfield Visitor Center is generally pretty low. This means I have to time my data collection carefully if I don't want to spend an entire day waiting for subjects and maybe only collect data on two people. That's what happened on a Sunday last month: the weather on the coast was lovely, and visitation was minimal. I have recently been collecting data in our Rhythms of the Coastal Waters exhibit, which poses additional data collection challenges: it is basically the last thing people might see before they leave the center, it's dim because it houses the projector-based Magic Planet, and it has no animals, unlike just about every other corner of the Visitor Center. So I knocked off early and went to the beach. Then I didn't hesitate to reschedule another planned collection day when it turned out to be a sunny weekend day at the coast.

On the other hand, on a recent Saturday we hosted our annual Fossil Fest. While visitation was down from previous years (only about 650, compared to 900), this was plenty for me, and I was able to collect data on 13 people between 11:30 and 3:30, despite an octopus feeding and a lecture by our special guest fossil expert. Considering that a full session of data collection (recruitment, consent, the experiment, and debrief) probably runs 15 minutes, I thought this was a big win. In addition, I only got one refusal, from a group that said they were on their way out and didn't have time. It's amazing how much better things go if you a) lead with "I'm a student doing research," b) mention "it will only take about 5-10 minutes," and c) don't record any video of them. I suspect it also helps that it's not summer, as this crowd is more local and thus perhaps more invested in improving the center, whereas summer tourists might be visiting more for the experience, to say they've been there, as John Falk's museum visitor "identity" or motivation research would suggest. That seems to me like a motivation that wouldn't make you all that eager to participate. Hm, sounds like a good research project to me!

Another reason I suspect things went well is that I am generally approaching only all-adult groups, and I only need one participant from each group, so someone else can watch the kids if they get bored. I did have one grandma get interrupted a couple of times by her grandkids, but she was a trooper and shooed them away while she finished. When I was recording video and doing interviews about the Magic Planet, the younger kids in a group often got bored, which made recruiting families and getting good data somewhat difficult, though I never had anyone quit early once they agreed to participate. Also, unlike when we were prototyping our salmon forecasting exhibit, I wasn't asking people to sit down at a computer and take a survey, which seemed to feel more like a test to some people. Or it could have been the exciting new technology I was using, the eye-tracker, that appealed to some.

Interestingly, I also had a lot of folks observe their partners as the experiment happened, rather than wandering off and meeting up later, which happened more during the salmon exhibit prototyping, perhaps because there was not much to see while one person was using that exhibit. With the eye-tracking and the Magic Planet, it was still possible to view the images on the globe because it is such a large exhibit. Will we ever solve the mystery of what makes the perfect day for data collection? Probably not, but it does present a good opportunity for reflection on what did and didn't seem to work to get the best sample of your visitorship. The cameras we're installing are, of course, intended to shed some light on how representative these samples are.

What other influences have you seen that affect whether you have a successful or slow day collecting exhibit use data?


The wave tank area was the latest to get its cameras rejiggered and microphones installed for testing, now that the permanent wave tanks are in place. Laura and I had a heck of a time logging in to the cameras to see their online feeds and hear the mics, however. Since we were using a different laptop for viewing over the web this time, we did some troubleshooting and came up with these browser-related tips for viewing your AXIS camera live feeds in a web browser (that is, when you type the camera's IP address straight into the browser's address bar, rather than viewing through the Milestone software):

When you reach the camera page (after inputting username and password), go to “Setup” in the top menu bar, then “Live View Config” on the left-hand menu:

First, regardless of operating system, set the Stream Profile drop-down to H.264 (this doesn't affect what you have set for recording through Milestone, by the way; see earlier posts about server load). Then set the Default Viewer to "AMC" for Internet Explorer on Windows, and "Server Push" for Other Browsers.

Then, to set up your computer:

Windows PCs:
Chrome: You'll need to install Apple's QuickTime once for the browser, then authorize QuickTime for each camera (use the same username and password as when logging into the camera).
Internet Explorer: You'll have to install the AXIS codec the first time you go to the camera page (which may require various ActiveX permissions and other security changes to Windows defaults).
Firefox: Same as for Chrome, since it uses QuickTime, too.
Safari: We don't recommend using Safari on Windows.

Mac:
Chrome: QuickTime needs to be installed for Chrome.
Firefox: Needs QuickTime installed.
Safari: Should be good to go.
Internet Explorer: Not recommended on a Mac.

Basically, we've gone to using Chrome whenever we can, since it seems to work the best across both Windows and Mac, but if you prefer another browser, these settings should get both your video and your audio enabled, and hopefully save you a lot of the frustration of thinking you installed the hardware wrong…
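As an aside, if the plugin dance gets tiresome, AXIS network cameras generally also serve their video over RTSP, which desktop tools can open directly, no browser required. Here's a minimal Python sketch using OpenCV; the stream path and address below are assumptions that vary by model and firmware, so check your camera's documentation before relying on them.

```python
import cv2

# Assumed AXIS RTSP path; verify against your camera's documentation.
# Use the same username and password as for the camera's web page.
url = "rtsp://username:password@192.168.0.90/axis-media/media.amp"

cap = cv2.VideoCapture(url)
if not cap.isOpened():
    raise SystemExit("Could not open stream; check the URL and credentials.")

while True:
    ok, frame = cap.read()
    if not ok:
        break  # stream dropped or ended
    cv2.imshow("live feed", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break

cap.release()
cv2.destroyAllWindows()
```

Note that OpenCV only grabs the video track; for the microphone audio, a player like VLC pointed at the same RTSP URL is the simpler bet.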

With all the new wave exhibit work, Visitor Center maintenance, server changes, and audio testing that have been going on in the last few months, Mark, Katie, and I realized that the Milestone system that runs the cameras and stores the video data is in need of a little TLC.

Next week we will be relabeling cameras, tidying up the camera "views" (the customized displays of the different camera feeds), and checking the servers. We've also been having problems exporting video with a codec that allows it to play in media players outside the Milestone client, so we're going to attempt to solve that issue too. Basically, we have a bit of camera housekeeping to attend to, but a good tidy-up and reorganization is always a positive way to start the new year, methinks!
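On the export issue, one generic fallback, sketched below, is to re-encode whatever Milestone exports (assuming it can write a standard container such as AVI, with a codec FFmpeg can decode) into H.264/MP4, which just about any media player handles. The file names here are hypothetical.

```python
import subprocess

src = "milestone_export.avi"   # hypothetical Milestone export
dst = "playable_anywhere.mp4"

# Re-encode to H.264 video and AAC audio, both widely supported.
subprocess.run(
    ["ffmpeg", "-i", src, "-c:v", "libx264", "-c:a", "aac", dst],
    check=True,
)
```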

Before the holidays, Mark had also asked me to try out the newly released AXIS network covert camera, which, although video only, is much smaller and more discreet than our dome cameras, and may be more useful for establishing shots, i.e., camera views that establish a wider view of an area (such as a bird's-eye view) and don't necessarily require audio. With the updated wave tanks going in, I temporarily installed one on one of the wave kiosks to test its view and video quality. During the camera housekeeping, I'm going to take a closer look at its performance to determine whether we should obtain and install more. They may end up replacing some of the dome cameras so we can free those up for views that require closer angles and more detailed video/audio.

[Image: the AXIS covert network camera. Source: axis.com via Free-Choice on Pinterest]

We've recently been prototyping a new exhibit with standard on-the-ground methods, and now we're going to use the cameras to do a sort of reverse ground-truthing. Over our busy Whale Watch Week between Christmas and New Year's, Laura set up a camera on the exhibit to collect data on people using it at times when we didn't have an observer in place. So in this case, instead of ground-truthing the cameras, we're doing the opposite: checking what the in-person observer found against the camera record.

However, the camera will be on at the same time the researcher is there, too. It almost sounds like we'll be spying on our researcher and "checking up," but it will be an interesting check on our earlier camera-free observations, as well as a chance to observe a) people using the new exhibit without a researcher in place, b) people using it *with* a researcher observing them (whether or not they notice the observer), and c) whether people behave differently between those conditions, along with how much we can capture from a camera angle different from the one the on-the-ground observer has.

Some expectations:

The camera has the advantage of replay, which the in-person observer doesn't, so we can get an idea of how much might otherwise be missed, especially detail-wise.

The camera's audio might be better than what a researcher standing some distance away can hear, but as our earlier blog posts have mentioned, the audio testing is very much a work in progress.

The camera angle, especially since it's a single, fixed camera at this point, will be worse than that of the flexible researcher-in-place: it will be at a higher angle, and visitors may block what they're doing a good portion of the time.


As we go forward and check the automated collection of our system against in-place observers, rather than the other way around, these are the sorts of advantages and disadvantages we'll be checking for.

What else do you all expect the camera might capture better or worse than an in-person researcher?