We received a very informative site evaluation from a surveillance camera expert today. Among other things, we learned that we don’t need pan-tilt-zoom functionality for most of our applications. With some clever placement, our exhibit space may not be as difficult to capture as we previously thought.

One problem—from the perspective of face detection and recognition—is the fact that many exhibits are along the walls of the Visitor Center. This is a common feature of many museums, but it means that simply throwing a camera into a convenient corner might be ineffective unless we acquire software to recognize back-of-head expressions. Our other exhibits, such as the touch pool and large display tanks, could block our view of visitors from several angles. These inherent problems can be circumvented with careful camera placement and a front-door camera to register each visitor upon entry. It certainly helps to bring in the professionals for this sort of thing.

Our Octocam – the underwater web cam in our Giant Pacific Octopus tank – has gone through various iterations as successive cameras succumbed to seawater exposure. Our current camera provides neither adequate image quality nor adequate stability in the tank. We spent some time today experimenting with a new camera mounted outside the tank, and it actually works just as well as a submersible camera. Naturally, an external camera would also eliminate concerns about maintenance, housing integrity, running electrical and ethernet cables underwater, and the fact that metal is toxic to octopuses should a waterproof component fail. The camera will still be exposed to occasional bumping and climbing, albeit by creatures of a different phylum.

It seems product research is much like any research: the deeper you go, the more questions you come up with instead of a clear-cut answer. So far, we have found tabletop systems; modular systems that go from tabletop use to working with fMRI machines (with their gigantic magnets that might humble other camera setups); and head-mounted systems that can accommodate viewers looking close up as well as far away. But no single system does all three. The fMRI-compatible systems seem to be the most expensive, and that functionality is the one we need least.

Eye-tracking on the sphere, or during natural exhibit use, seems to be the biggest stumbling block for the technology so far. The tabletop systems are designed to allow left-to-right head movement, and maybe a bit of front-to-back, but not variable viewing depth, such as when one might be looking at the far wall of the 6-foot-wide wave tank vs. the wall one is leaning on. Plus, they don’t follow the user as she moves around the sphere. The glasses-mounted systems can go with the user to a point, but not all can do so without external tracking points to pre-define an area of interest. One head-mounted system that looks promising for our purposes does record video of the scene the user is looking at as he moves. I haven’t figured out yet whether it would work well on a close-up screen, where we could track a user’s kiosk navigation, for example. Another big open question: just how long will visitors agree to wear these contraptions?

The other questions I am really pressing on are warranty and tech support. I have been stuck before with fancy-schmancy shiny new technology that has a) relatively little thought behind the actual user experience, at least for our applications; b) tech support so new that they can hardly figure out what you’re trying to do with the product; or (worst) c) both problems. The good news with the eye-tracking systems is that most seem to be provided at this point by companies that have been around for a while and may even have gone through several rounds of increasingly better products. The bad news is that, since we may be going out on a limb with some of our intended uses, I might end up where I was before: working with people who don’t understand what I’m trying to do. This is the curse of trying to do something different and new with the equipment, rather than just applying it for the same use in another subject, I suppose. However, I have seen updated versions of these products down the road, so I guess my struggles have not been for naught, at least for the users down the line from me. Maybe I’m just destined to be an alpha-tester.

One big help has come, as usual, from the museum community, fellow innovators that you all are. A post to the ASTC list has yielded a couple of leads on how these products are being used, which products were chosen and why, and most importantly, customer satisfaction with the products and the tech support. I myself have been called upon to advise on the use of the particular technologies I discussed above, and I definitely gave it to people straight.

This post by Nina Simon raises some great, eternal questions about visitor engagement.  Free-choice learning, by definition, entails agency on the part of the learner.  What’s the best way to allow and inspire visitors to provide input and exercise that agency?  Simon states the problem well:

“The fundamental question here is how we balance different modes of audience engagement. You could argue that visitors are more “engaged” by an activity that invites inquiry-based participation than one that invites them to read a label, even if they never get answers to their questions. Or, you could argue that this kind of active engagement should be secondary to sharing information, which can be more efficiently communicated by a label.”

In other news, the Magic Planet has finished its first round of upgrades.  We’ll be doing more work with it next week.

Today we met with a consulting engineer to puzzle out the basics of our wave tank. We’ll use the wave tank for two main purposes: modeling tsunami damage and demonstrating wave energy buoys. This means we’ll need to create both breaking waves and swells. This may entail two tanks or a convertible system of some sort.

The wave energy element of the exhibit will use working scale-model wave generators with LED lights to show the output. What better way to demonstrate wave energy than to actually let visitors produce it and see the results? We’ll be able to use this setup to host student design challenges, wherein participants engineer and test their generator arrays for power and efficiency.
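For a back-of-the-envelope sense of what the scale-model generators are working with, the standard deep-water (Airy) wave theory formula gives energy flux per meter of wave crest. The sketch below uses that textbook formula; the 5 cm / 1 s wave is a made-up, tank-scale illustration value, not a measured spec of our exhibit.

```python
import math

def wave_power_per_meter(height_m: float, period_s: float,
                         rho: float = 1025.0, g: float = 9.81) -> float:
    """Deep-water wave energy flux in watts per meter of wave crest.

    Linear (Airy) wave theory: P = rho * g^2 * H^2 * T / (64 * pi),
    with H the wave height, T the period, rho seawater density.
    """
    return rho * g ** 2 * height_m ** 2 * period_s / (64 * math.pi)

# Illustrative tank-scale wave: 5 cm high, 1 s period (assumed values).
p = wave_power_per_meter(0.05, 1.0)
print(f"{p:.2f} W per meter of crest")  # roughly 1.2 W/m
```

Even a watt or so per meter of crest is plenty to light a few LEDs, which is the point of letting visitors see the output directly.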

We expect visitors (and ourselves) to have a lot of fun with the tsunami modeling aspect of the wave tank. This will feature scale-model buildings and a shore on which waves can break. We’re still exploring the design possibilities. This part of the exhibit will also lend itself to design challenges, as visitors and students will create buildings to test their tsunami resistance.

Tsunami modeling has immediate implications for a town like Newport, which sits right next to an offshore fault. Here at HMSC, we’re at sea level. Regular drills and the presence of emergency supply “bug-out bags” on the walls ensure that everyone here has at least an imagined scenario of what he or she would do in case of a quake. Pat Corcoran is our coastal natural hazards extension agent, and he has lots of info on the subject of “The Big One” and how to prepare.

When the earthquake hit Japan earlier this year, we learned how real this scenario could become. For those of us on the Oregon coast, the local evacuations were a wake-up call. In Japan, the nightmare continues. We imagine great disasters befalling “other people,” but actual disasters tend to remind us that there are no “other people”—only some of “us.” Nobody is immune, and nobody is untouched.

With this unsettling fact in mind, why do we so enjoy the concept of using model waves to smash a miniature coastal town not unlike our own? Back in my own home state of Florida, why do visitors enjoy “Disasterville” at MOSI? Why bring to mind the things that frighten us most? We do so for the same reason we watch horror movies, ride roller coasters or listen to Slayer. That is, as long as we have popcorn to eat, a lap bar to hold us in our seats or a buddy to pull us out of the mosh pit, we can look down upon danger and laugh. We banish the ugly and the frightening to the realm of fiction, if only for a moment. If we learn something useful in the process, all the better.

We are taking measurements for the wave tank as we speak. Getting the mechanics right will be a bit of a challenge within our exhibit space. We went over to The Hinsdale Wave Lab on Thursday to look at their tanks and discuss the physics. Nothing replaces talking to the experts—we learned a lot. The challenge is to create a properly scaled environment with properly scaled waves. It’s just a matter of figuring out how to create a realistic model with a maximum length of 25 feet. Easy as that.
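To give a feel for the scaling problem, free-surface wave models are conventionally scaled by Froude similarity: lengths shrink by the model ratio, while times and velocities shrink by its square root. Here is a minimal sketch; the 1:50 ratio and the 100 m / 10 s prototype wave are hypothetical numbers for illustration, not our actual design figures.

```python
import math

def froude_scale(proto_length_m: float, proto_period_s: float,
                 scale: float) -> tuple:
    """Scale a prototype wavelength and period down to model size.

    Froude similarity: lengths scale by 1/scale,
    times (and velocities) by 1/sqrt(scale).
    """
    return proto_length_m / scale, proto_period_s / math.sqrt(scale)

# Hypothetical example: a 100 m wavelength, 10 s wave at an assumed
# 1:50 model scale.
model_len, model_period = froude_scale(100.0, 10.0, 50)
print(f"model wavelength: {model_len:.1f} m, period: {model_period:.2f} s")
# -> 2.0 m wavelength, ~1.41 s period
```

At that hypothetical ratio, a 2-meter model wavelength would fit comfortably inside a 25-foot (about 7.6 m) tank, which is why picking the right scale factor is most of the battle.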

We’re considering some major upgrades to the Magic Planet, which will prepare it for the upcoming work we will do with a larger remote sensing exhibit space. Greater luminosity and a more robust cooling system will be huge enhancements and mitigate some maintenance issues.

The hunt for an ideal eye-tracking system continues, but we are getting close to what we want. Apart from how accurately and elegantly a system tracks eye movement, we must consider how it collects and exports data. Stay tuned in the coming weeks for the conclusion of Katie’s eye-tracking adventure, which she began in the last post.

In the meantime, we now have a Twitter feed: @FreeChoiceLab

One of the first things to do with a new grant is to buy equipment. Yay! That means a bit of research on the available products, even if you’re looking for something as seemingly specialized as eye trackers. So this is the story of that process as we try to decide what to buy.

I got a head start when I was provided with a whole list of files compiled in Evernote. That meant I had to get up to speed on how to use Evernote, but that’s part of this process – learning to use new tools for collaboration as we go. Speaking of which, before we got too far into the process I set up a Dropbox folder for online file storage and sharing, and a Google Docs spreadsheet to track the information I got from each manufacturer.

The spreadsheet is pretty bare to start (just company, cost, and an “other features” category), but here again, I got some direction to help things take off. We made a connection with a professor at the University of Oregon who’s been studying these systems and even designing some cool uses for them, such as creating drawings and computerized music with the eyes alone. I digress, but Dr. Hornof has done background work compiling documentation on a couple of the commercial systems. He gave us a couple of clues about their specs: they’re often limited by the size of the virtual “head box,” and the bundled software may be limited in capability. So: two more categories for the spreadsheet! Dr. Hornof has also invited us down to his lab at the U of O, so we’ll head down in a couple of weeks and check that out.
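For anyone setting up a similar comparison sheet, the columns described above translate directly into a plain CSV. This is just a minimal sketch; the filename and the “Vendor A” placeholder row are made up, and the columns simply mirror the categories mentioned in this post.

```python
import csv

# Columns from the comparison spreadsheet described above, including the
# two suggested additions (head box size, software capability).
COLUMNS = ["company", "cost", "other features",
           "head box size", "software capability"]

# Placeholder row; real vendor data would be filled in as quotes come back.
rows = [
    {"company": "Vendor A", "cost": "TBD", "other features": "",
     "head box size": "", "software capability": ""},
]

with open("eye_tracker_comparison.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```

A shared CSV like this is easy to re-import into Google Docs or Dropbox as the list of vendors grows.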