Last week, I talked about our eye-tracking work in the science center at the Museums and the Web 2013 conference, as part of a track on Evaluating the Museum. This was the first time I'd attended this conference, and it turned out to be very different from others I've been to. As a result, I think our eye-tracking work was a little ahead of where the conference audience was in some ways, and behind in others!

Many of the attendees seemed to be from the art museum world, which has some issues different from and some similar to those of science centers – we each have our own, generally separate professional organizations (the American Association of Museums and the Association of Science and Technology Centers). In fact, the opening plenary speaker, Larry Fitzgerald, made the point that museums should be thinking of ways they can distinguish themselves from formal schools. He suggested that a lot of the ways museums currently try to get visitors to "think" look very much like the ways people think in schools, rather than the ways people think "all the time." He mentioned "discovery centers" (which I took to mean interactive science centers) as places that are already trying to leverage the ways people naturally think (hmm, free-choice learning much?).

The Twitter reaction and the tone of other presentations made me think that this was actually a fairly revolutionary idea for a lot of folks there. My sense is that this stems from an institutional culture that discourages much of that, with exceptions like the Santa Cruz Museum of Art and History, where Nina Simon is revamping the place around the participation of community members.

So, overall, eye-tracking and studying what our visitors do was also a fairly foreign concept; one tweet wondered whether a museum's mission needed to be visitor-centric at all. Maybe museums that don't have to rely on ticket sales can rest on that, but the conference was pushing the idea that museums are changing: away from places where people come to find the answer, or the truth, and toward places of participation. That means some museums may also be lagging on the idea of getting funding to study visitors at all, let alone spending large amounts on "capital" equipment. Since eye-trackers are expensive technologies designed for basically that one purpose, our work seemed just a little ahead of where some of the conference participants were. I'll have to check back in a few years and see how things are changing. As we discussed in our lab meeting this morning, a lot of diversity work in STEM free-choice learning is happening not in academia but in (science) museums. Maybe that will change in a few years as well, as OSU continues to shape its Science and Mathematics Education faculty and graduate programs.

While we don't yet have the formal guest researcher program up and running, we did have a visit from our collaborator Jarrett Geenan this week. He's working with Sigrid Norris on multimodal discourse analysis, and he was in the U.S. for an applied linguistics conference, so he "stopped by" the Pacific Northwest on his way back from Dallas to New Zealand. It turns out his undergraduate and graduate work in English and linguistics is remarkably similar to Shawn's. Several of the grad students working with Shawn managed to have lunch with him last week to talk about our different research projects, and about life as a grad student in the States vs. Canada (where he's from), England (Laura's homeland), and New Zealand.

We also had a chance to chat about the video cameras. He's still having difficulty downloading anything useful, as things come only in fits and starts. We're not sure of the best way to diagnose the issues (barring a trip for one of us to be there in person), but maybe we can get the Milestone folks on a screenshare or something. In the meantime, it led us to a discussion of what might be a larger issue: collecting data all the time and overtaxing the system unnecessarily. It came up with the school groups – is it really that important to have the cameras on constantly to get a proper, useful longitudinal record? We're starting to think not, and the problems Jarrett is having make it more likely that we will simply turn the cameras on only when the Visitor Center is open, using a scheduling function.
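As a rough illustration (the hours below are placeholders, and in practice we'd lean on Milestone's own scheduling features rather than our own code), the gating logic we have in mind is about this simple:

```python
from datetime import datetime, time

# Placeholder Visitor Center hours -- the real schedule would come from the
# posted hours, including seasonal and holiday variations.
OPEN_TIME = time(10, 0)   # 10:00 am
CLOSE_TIME = time(17, 0)  # 5:00 pm

def cameras_should_record(now: datetime) -> bool:
    """Record only while the Visitor Center is open to the public."""
    return OPEN_TIME <= now.time() < CLOSE_TIME

if __name__ == "__main__":
    print(cameras_should_record(datetime.now()))
```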

The other advantage is that this will give us 16–18 hours a day to actually process the video data, if we can parse it so that the automated analysis needed to customize exhibits runs in real time. That would leave everything else, such as group association, speech analysis, and the other higher-order work, for overnight processing. We'll have to work with our programmers to see about that.
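To make that split concrete, here's a hypothetical sketch of the two-tier division of labor; the job names are purely illustrative, not anything our programmers have signed off on:

```python
import queue

# Hypothetical two-tier pipeline: lightweight analysis runs as video arrives,
# heavier jobs are deferred to the overnight window when the VC is closed.
realtime_jobs = queue.Queue()   # e.g., detecting presence at an exhibit
overnight_jobs = queue.Queue()  # e.g., group association, speech analysis

def handle_clip(clip_path: str) -> None:
    # Real-time: just enough analysis to drive exhibit customization.
    realtime_jobs.put(("detect_visitors", clip_path))
    # Deferred: higher-order analyses that can wait until overnight.
    overnight_jobs.put(("associate_groups", clip_path))
    overnight_jobs.put(("analyze_speech", clip_path))
```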

In other news, it's looking highly likely that I'll continue doing my own research on the system after I graduate later this spring, so hopefully I'll be able to provide that insider perspective: having worked on it (extensively!) in person at Hatfield, then going away to finish up the research at my (new) home institution. That, plus Jarrett's in-person visit, may be the kick-start we need to really get this into shape for new short-term visiting scholars.

Every week we have two lab meetings, and for both we need online conferencing software. We've been at this for over a year, and in all that time we've only managed to find one free package that is even marginally acceptable (Google Hangouts). I know part of our problem is the limited bandwidth on the OSU campus, because when classes are out our problems are fewer, but even with adequate bandwidth we still can't seem to get it to work well: feedback, frozen video, plugins that stop working. It's frustrating, and every meeting we lose at least 15 minutes to technical issues.

Someone in the lab commented one day that our needs always seem to be about a year ahead of software development: online meetings, exhibit setups, survey software. Every time we need something, we end up cobbling something together. I've decided to take these opportunities as character building and a testament to our skills and talent. Still, it'd be nice to spend time on something else once in a while.


The wave tank area was the latest to get its cameras rejiggered and microphones installed for testing, now that the permanent wave tanks are in place. Laura and I had a heck of a time logging in to the cameras to see their online feeds and hear the mics, however. Since we were using a different laptop for viewing over the web this time, we did some troubleshooting and came up with these browser-related tips for viewing your AXIS camera live feeds through web browsers (that is, when you type the camera's IP address straight into the browser's address bar, not when you're viewing through the Milestone software):

When you reach the camera page (after inputting username and password), go to “Setup” in the top menu bar, then “Live View Config” on the left-hand menu:

First, regardless of operating system, set the Stream Profile drop-down to H.264 (this doesn't affect what you have set for recording through Milestone, by the way – see earlier posts about server load). Then set the Default Viewer to "AMC" for Windows Internet Explorer and "Server Push" for Other Browsers.

Then, to set up your computer:

Windows PCs:
Chrome: You'll need to install Apple's QuickTime once for the browser, then authorize QuickTime for each camera (use the same username and password as when logging into the camera)
Internet Explorer: You'll have to install the AXIS codec when you reach the camera page (which may require various ActiveX permissions and other security changes to Windows defaults)
Firefox: Same as for Chrome, since it uses QuickTime, too
Safari: We don't recommend using Safari on Windows

Mac:
Chrome: QuickTime needs to be installed for Chrome
Firefox: Needs QuickTime installed
Safari: Should be good to go
Internet Explorer: Not recommended on a Mac

Basically, we've switched to using Chrome whenever we can, since it seems to work best across both Windows and Macs, but if you prefer another browser, these options should get both your video and your audio enabled – and hopefully save you a lot of frustration from thinking you installed the hardware wrong …
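One more sanity check that's independent of any browser plugin: you can pull a single still image straight from the camera over HTTP. The sketch below assumes your camera supports the standard AXIS VAPIX snapshot endpoint and uses a placeholder IP address and credentials; if it returns an image, the camera and your login are fine, and any remaining trouble is in the browser setup.

```python
import requests
from requests.auth import HTTPDigestAuth

# Placeholder address and credentials -- substitute your camera's values.
CAMERA_IP = "192.168.0.90"
USER, PASSWORD = "root", "password"

# Standard VAPIX snapshot endpoint; most AXIS firmware uses digest auth
# (older firmware may accept basic auth instead).
url = f"http://{CAMERA_IP}/axis-cgi/jpg/image.cgi"
resp = requests.get(url, auth=HTTPDigestAuth(USER, PASSWORD), timeout=10)
resp.raise_for_status()

with open("snapshot.jpg", "wb") as f:
    f.write(resp.content)
print("Camera reachable; snapshot saved.")
```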

Our climate change "exhibit" is rapidly losing its primacy as an exhibit on which we do research, and is instead becoming a research platform that we set up as an exhibit. The original plan was to design a multitouch-table exhibit around climate change and to research, among other things, how users interact with it and what stories they choose to tell, as related to their "6 Americas" identity about climate change.

After Mark attended the ASTC conference and talked with the Ideum folks and others, we've decided that what we really need to build is a research platform on the table, with exhibit content simply as the vehicle for doing that research. That means instead of designing content and then asking research questions about it, we're proposing the research questions first, then finding content that allows us to investigate them. The good news is that a lot of content already exists.

So, with that in mind, we're now taking the tack of identifying the research questions we're interested in so we can build the appropriate tools for answering them. For example:

- How do people respond to the table, and what kinds of tools do we need to build to encourage them to respond, especially by creating their own narratives about the content?

- How can we extend the museum's reach beyond the building itself, for example by integrating the multitouch exhibit with handheld tools? What is the shelf life of interactions in the museum?

- What are the differences between how groups and individuals use the table, or between the horizontal interactions of the table-based exhibit and the more traditional "vertical" interactions of other exhibits? (Did you play Ms. Pac-Man differently in the table version vs. the stand-up kiosk?)

- How can we help facilitate visualization understanding through simulations on the table, where visitors can build comparisons and manipulate factors in the data to create their own images and animations?

What other questions should we build research tools on the multitouch table to answer?
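One implication of the platform-first approach: whatever content ends up on the table, most of the questions above reduce to queries over a common log of interaction events. Purely as a hypothetical sketch (the field names are ours, not a spec), a logged record might look like:

```python
from dataclasses import dataclass

@dataclass
class TouchEvent:
    """One interaction record from the multitouch table (illustrative only)."""
    timestamp: float   # seconds since session start
    session_id: str    # lets us compare groups vs. individuals
    x: float           # touch position on the table surface
    y: float
    object_id: str     # which content element was touched
    action: str        # e.g., "tap", "drag", "pinch"
```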


With all the new wave-exhibit work, Visitor Center maintenance, server changes, and audio testing going on over the last few months, Mark, Katie, and I realized that the Milestone system that runs the cameras and stores the video data is in need of a little TLC.

Next week we will be relabeling cameras, tidying up the camera "views" (customized displays of the different camera feeds), and checking the servers. We've also been having problems exporting video with a codec that lets the video play in media players outside the Milestone client, so we're going to attempt to solve that issue too. Basically, we have a bit of camera housekeeping to attend to – but a good tidy-up and reorganization is always a positive way to start the new year, methinks!
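For the export issue, one workaround we may try is re-encoding exported clips into a standard H.264 MP4 so they play in ordinary media players; here's a minimal sketch, assuming ffmpeg is installed and the Milestone export is an AVI (the file paths are placeholders):

```python
import subprocess

# Re-encode a Milestone export into H.264 MP4 for standard media players.
# Paths are placeholders; assumes ffmpeg is on the PATH.
src = "milestone_export.avi"
dst = "milestone_export.mp4"

subprocess.run(
    ["ffmpeg", "-i", src,
     "-c:v", "libx264",      # widely supported video codec
     "-pix_fmt", "yuv420p",  # maximizes player compatibility
     "-c:a", "aac",          # re-encode any audio to AAC
     dst],
    check=True,
)
```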

Before the holidays, Mark had also asked me to try out the newly released AXIS covert network camera, which, although video-only, is much smaller and more discreet than our dome counterparts, and may be more useful for establishing angles, i.e., camera views that establish a wider view of an area (such as a bird's-eye view) and don't necessarily require audio. With the updated wave tanks going in, I temporarily installed one on one of the wave kiosks to test its view and video quality. During the camera housekeeping, I'm going to take a closer look at its performance to determine whether we will obtain and install more. They may end up replacing some of the dome cameras, freeing those up for views that require closer angles and more detailed video/audio.
