Our search feels like a random visual search that keeps getting narrowed down. If it were an eye-tracking study's heat map, we'd be seeing fewer and fewer focus points and longer and longer dwell times …

Visiting the University of Oregon to see Anthony Hornof's lab, with its tabletop systems in place, was enlightening. It was great to see and try out a couple of systems in person and to talk with someone who has used both about the pros and cons of each, from the optics to the software, and even about technical support and the inevitable question of what to do when something goes wrong. We noted that the support telephone numbers were mounted on the wall next to a telephone.

I've also spent some time viewing a couple of software systems in online demos, one from a company that makes the hardware, too, and one from a company that just resells the hardware with its own software. I can't really get a straight answer about the advantages of one software package over another for the same hardware, so that's another puzzle to figure out, another compromise to make.

I think we're zeroing in on what we want at this point, and it looks like, thanks to some matching funds from the university if we share our toys, we'll be able to purchase both types of systems. We'll get a fully mobile, glasses-mounted system as well as a more powerful but motion-limited stationary system. However, the motion-limited system will actually be less restricted than some systems that are tethered to a computer monitor. We've found a system that will detach from the monitor and allow the user to stand at a relatively fixed distance but look at an image virtually as far away as we like. That system records scene video much like the glasses-mounted systems do, but has better processing capability, basically faster analysis, for the times when we are interested in how people look at things like images, kiosk text, or even movies. The bottom line, though, is that there are still some advantages to other systems or even third-party software, so we can't really get our absolutely ideal system in one package (or even from one company with two systems).

Another thing we're having to think about is the massive amount of video storage space we're going to need. The glasses-mounted system records to a subnotebook laptop at this point, but in the future it will record to a smaller device with an SD card. The SD card will pretty much max out at about 40 minutes of recording time, though. So we'll need some of those, as well as external hard drives and lots of secure backup space for our data. Data sharing will be an interesting logistical problem as well; on previous projects where we've tried to share video data, we never found an optimal solution for collaborating researchers spread across Corvallis, Newport, and Pennsylvania. Maybe one of the current limitations of the front-runner glasses-based system will prove "helpful" in this regard. The recordings can currently only be analyzed with the software on the notebook that comes with the system, not on any old PC, so it will reside most of the time in Newport, and those of us who live elsewhere will just have to deal with that, or take the laptop with us. Hm, guess we ought to get to work setting out a plan for sharing the equipment, one that outlines not only physical loan procedures but also data storage and analysis plans for when we have to share these toys.
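To get a rough sense of scale, a quick back-of-envelope calculation like the sketch below helps; the bitrate and session counts are placeholders I made up for illustration, not vendor specs or our actual study design.

```python
# Rough storage estimate for scene video; every number here is an assumption, not a spec.
BITRATE_MBPS = 8          # assumed scene-video bitrate, megabits per second
SESSION_MINUTES = 40      # roughly one SD card's worth of recording
SESSIONS_PER_WEEK = 20    # hypothetical data-collection pace
WEEKS = 52

gb_per_session = BITRATE_MBPS * SESSION_MINUTES * 60 / 8 / 1000  # megabits -> gigabytes
total_tb = gb_per_session * SESSIONS_PER_WEEK * WEEKS / 1000

print(f"~{gb_per_session:.1f} GB per session, ~{total_tb:.1f} TB per year, before backup copies")
```

Even with those made-up numbers, the answer lands in the terabytes-per-year range once you add backup copies, which is why the external drives and secure backup space are on the shopping list.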

So the deeper I go, the bigger my spreadsheet gets. I decided today it made sense to split it into four: 1) one with all of the information for each company, basically what I already have, 2) one with just company info, such as email, contact person, and warranty, 3) one with the information for the tabletop or user-seated systems, and 4) one with just the information for the glasses-based systems. For one thing, now I can still read the spreadsheets if I print them out in landscape orientation. However, since I want to keep the data in the single original spreadsheet as well, I'm not sure whether I'll have to fill in two boxes each time I get a new answer or whether I can link the data so it fills in automatically. I'm pretty sure you can do this in Excel, but so far I'm not sure about Google Docs.
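If the automatic linking doesn't pan out, another option would be to treat the master spreadsheet as the only place data gets entered and regenerate the other views from it with a small script. Here's a minimal sketch of that idea, assuming the master has been exported as a CSV; the file names and column headers are hypothetical, just for illustration.

```python
import pandas as pd

# Hypothetical column headers; the real spreadsheet's columns would go here.
COMPANY_COLS = ["Company", "Contact person", "Email", "Warranty"]
TABLETOP_COLS = ["Company", "Cost", "Head box size", "Software notes"]
GLASSES_COLS = ["Company", "Cost", "Recording device", "Scene video"]

# Read the master sheet (exported as CSV) and write the three derived views,
# so a new answer only ever gets typed into the master.
master = pd.read_csv("eye_tracker_master.csv")
master[COMPANY_COLS].to_csv("company_info.csv", index=False)
master[TABLETOP_COLS].to_csv("tabletop_systems.csv", index=False)
master[GLASSES_COLS].to_csv("glasses_systems.csv", index=False)
```

The point is just that the subset sheets stay read-only and get rebuilt from the master, so nothing has to be typed twice.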

I also keep finding new companies to contact – four more just today. At least I feel like I'm getting more of a handle on the technology. Too bad the phone calls always go a little differently and I never remember to ask all my questions (especially because our cordless phone in the office keeps running out of battery after about 30 minutes, cutting some of my conversations short!). Oh well, that's what email follow-up is for. None of the companies seem to specialize in any particular area of eye tracking, and none have reports or papers to point to, other than snippets of testimonials. Their websites are all very sales-oriented.

In other news, I'm a little frustrated with some of the customer service. Some companies have been very slow to respond, and when they do, they don't actually set an appointment as I requested, but just say "I'll call you today." My schedule and workday are such that I run around a lot, and I don't want to be tethered to the phone. We don't have voicemail, and these are the same companies that don't answer straight off but ask for a phone number to call you back. Another company tried to tell me that visitors to the science center wouldn't want their visit interrupted to help us with research, even though the calibration time on the glasses was less than a minute. I just had to laugh and tell him I was quite familiar with visitor refusals! In fact, I have a whole post on that to write up for the blog from data I collected this summer.

The good news is, I think we'll be able to find a great solution, especially thanks to matching funds from the university if we share the equipment with other groups that want to use it (which will be an interesting experiment in and of itself). Also, surprisingly, there are some solutions in the $5K–$10K range, as opposed to the $25K–$45K, software included, that some of the companies charge. I'm not entirely sure of the differences yet, but it's nice to know you don't have to have a *huge* grant to get started on something like this.

It seems product research is much like any other research: the deeper you go, instead of finding a clear-cut answer, the more questions you come up with. So far, we have tabletop systems, modular systems that go from tabletop use to working with fMRI systems (with their gigantic magnets that might humble other camera setups), or head-mounted systems that can accommodate viewers looking close up as well as far away, but no single system that does all three. The fMRI-compatible systems seem to be the most expensive, and that functionality is definitely the one we need least.

Eye tracking on the sphere or during natural exhibit use seems to be the biggest stumbling block for the technology so far. The tabletop systems are designed to allow left-to-right head movement, and maybe a bit of front-to-back, but not variable depth, such as when one might be looking at the far wall of the 6-foot-wide wave tank vs. the wall one is leaning on. Plus, they don't follow the user as she moves around the sphere. The glasses-mounted systems can go with the user to a point, but not all can do so without external tracking points to pre-define an area of interest. One promising (for our purposes) head-mounted system does video the scene the user is looking at as he moves. I haven't figured out yet whether it would work well on a close-up screen, where we could track a user's kiosk navigation, for example. Another big open question is just how long visitors will agree to wear these contraptions.

The other questions I am really pressing on are warranty and tech support. I have been stuck before with fancy-schmancy shiny new technology that had a) relatively little thought behind the actual user experience, at least for our applications, b) tech support so new that they could hardly figure out what you were trying to do with the product, or (worst) c) both problems. The good news with the eye-tracking systems is that most seem to be offered at this point by companies that have been around for a while and may even have gone through several rounds of increasingly better products. The bad news is that, since we may be going out on a limb with some of our intended uses, I might end up where I was before, with people who don't understand what I'm trying to do. This is the curse of trying to do something different and new with the equipment, rather than just applying it to the same use in another subject area, I suppose. However, I have seen updated versions of those earlier products down the road, so I guess my struggles have not been for naught, at least for the users down the line from me. Maybe I'm just destined to be an alpha tester.

One big help has come, as usual, from the museum community, fellow innovators that you all are. A post to the ASTC list has yielded a couple of leads on how these products are being used, which products were chosen and why, and, most importantly, customer satisfaction with the products and the tech support. I myself have been called upon to advise on the use of the particular technologies I talked about above, and I definitely gave it to people straight.

We are taking measurements for the wave tank as we speak. Getting the mechanics right will be a bit of a challenge within our exhibit space. We went over to The Hinsdale Wave Lab on Thursday to look at their tanks and discuss the physics. Nothing replaces talking to the experts—we learned a lot. The challenge is to create a properly scaled environment with properly scaled waves. It’s just a matter of figuring out how to create a realistic model with a maximum length of 25 feet. Easy as that.

We’re considering some major upgrades to the Magic Planet, which will prepare it for the upcoming work we will do with a larger remote sensing exhibit space. Greater luminosity and a more robust cooling system will be huge enhancements and mitigate some maintenance issues.

The hunt for an ideal eye-tracking system continues, but we are getting close to what we want. Apart from how accurately and elegantly a system tracks eye movement, we must consider how it collects and exports data. Stay tuned in the coming weeks for the conclusion of Katie’s eye-tracking adventure, which she began in the last post.

In the meantime, we now have a Twitter feed: @FreeChoiceLab

One of the first things to do with a new grant is to buy equipment – yay! That means a bit of research on the available products, even if you're looking for something as seemingly specialized as eye trackers. So this is the story of that process as we try to decide what to buy.

I got a head start when I was provided with a whole list of files compiled in Evernote. That meant I had to get up to speed on how to use Evernote, but that's part of this process – learning to use new tools for collaboration as we go. Speaking of which, before we got too far into the process I made sure to set up a Dropbox folder for online file storage and sharing, and a Google Docs spreadsheet to track the information I got from each manufacturer.

The spreadsheet is pretty bare to start, just company, cost, and an "other features" category, but here again I got a bit of direction to get things going. We made a connection with a professor at the University of Oregon who has been studying these systems and even designing some cool uses for them – creating drawings and computerized music simply with the eyes. I digress, but Dr. Hornof has done some background work compiling documentation on a couple of the commercial systems. He gave us a couple of clues about their specs: they're often limited by the size of the virtual "head box," and the software that comes with them may be limited in capability – so two more categories for the spreadsheet! Dr. Hornof has also invited us down to his lab at the U of O, so we'll head down in a couple of weeks and check that out.

Thanks to a very generous Informal Science Education grant from the National Science Foundation, the Free-Choice Learning Laboratory will soon be experimenting with some very promising emergent technologies. These technologies—soon to be integrated into our research space here at Oregon State University’s Hatfield Marine Science Center Visitor Center (HMSCVC)—include facial recognition, eye-tracking and augmented reality systems. RFID cards will allow visitors to opt out of these measures. We’re also looking to collaborate with outside researchers through our visiting scholars program.

To make use of these potent data collection tools, we will establish three new exhibits as research platforms:

1. Interactive climate change exhibit: This exhibit will ask visitors to share their own experiences and knowledge. The data collected by the exhibit can then be used to study cultural cognition and the underlying values of visitors.

2. Wave tank and engineering challenge exhibit: The hands-on, interactive wave tank will let visitors explore wave energy, marine structural engineering, and tsunami education. This platform allows for the study of hands-on STEM activities, as well as social dynamics of learning.

3. Remote sensing data visualization: The "Magic Planet" spherical display serves as the centerpiece of our remote sensing hall. We will redesign the 500-square-foot gallery space around the Magic Planet to update exhibit design and content, and to incorporate our new evaluation tools. This research platform allows for the study of complex visualizations, decoding meaning, and personal data narratives, including having visitors collect, analyze, and visualize their own remotely sensed data.

A lot of preparation is underway, specifically around building the wave tank exhibit. We are also starting to explore a number of tools that will be used in the lab. Laura Dover has been exploring the potential ‘subject eye view’ of a head-mounted Looxcie camcorder—”the Borg camera,” as we have come to know it. We’ll post more about this as Laura’s work progresses, but she has already “assimilated” some volunteers, whom she put to work trying out the camera. The results are promising.

On a related note, the new OctoCam went online this week after our last camera succumbed to a year in seawater. The streaming underwater OctoCam gets an average of 12,000 viewers a day from all over the world. Ursula, our resident E. dofleini, responded in her usual manner by stuffing it into her mouth and trying to destroy it. She has not succeeded. A large octopus—by nature immensely strong and irrepressibly curious—is a good durability test for submersible equipment.

We’re also refurbishing the Magic Planet, our 3-foot spherical projection system capable of presenting global data realistically on an animated globe. The original projector has long since ceased functioning. Our tech team is installing a new projection system as well as redesigning the mounting and image centering systems. It’s quite a task! We are looking forward to installing Michael Starobin’s new movie “Loop” for our winter visitors.

In general we are evaluating our evaluation tools, drawing up plans and falling into a productive rhythm. We look forward to your feedback in the days and months to come.