About Katie Stofer

Research Assistant Professor, STEM Education and Outreach, University of Florida
PhD, Oregon State University Free-Choice Learning Lab

Have we made the right decision? I am still getting queries from companies that I contacted over the course of the process. One company’s main guy contacted me after weeks of no contact. I told him that the person he’d referred me to had ignored my request to set up an appointment to talk. The main guy tried to tell me he had referred me to a different person than the one I named! Some of these businesses seem very strange and disorganized. We definitely based some of our decisions on how solid the companies seemed, judging especially by their web presence and their general conduct.

In any case, another wrinkle may be in store for us. We heard from Anthony Hornof at the University of Oregon that the Oregon University System (of which both our schools are a part) required him to basically decide on the specifications for a system and “put it up on the web for bid” for a month, even though he had already chosen the system and the vendor that best suited his project. Luckily, no one responded to the public bid and he was able to go with what he wanted. Maybe the fact that we’re specifying the system in our matching funds proposal will help us out, but if it doesn’t, we could be in for another delay in purchasing. As it is, once we get the go-ahead for funding, it takes about a month for delivery of the system. So we’re currently looking at March delivery.

The good news is, the staff here are eager to volunteer to test out the equipment when we get it!

Well, we’ve decided. We’re going with SMI’s systems. They offer both a glasses-based system and a relatively portable tabletop system. The tabletop system can be used not only with traditional computer kiosks on a table but also with larger screens mounted on a wall, or even projection screens in a theater. Their glasses offer HD-resolution “scene video,” that is, a recording of what the subject is looking at over the course of the trial as their field of vision (likely) changes. We got an online walk-through of their powerful software and could see instantly all the statistical methods we could use. After comparing them to the systems we saw in Dr. Hornof’s lab, SMI was the clear winner for us.

Are they a perfect fit? Well, no. They seem to have a relatively small sales force, and that made scheduling a bit of a headache and resulted in a couple of errors in quotes. Those got resolved, but it makes us wonder a bit about how big their technical and support staff is, should we have issues with setup. That was one of our major concerns with another company with a great-looking product, and, if you recall, is one of my personal concerns with fancy new technology. SMI has been around for 20 years, however, and other signs point to them being well established. They also don’t offer all the software features we would love to have in their base package, so they are a bit more expensive overall. But the other company offering a lot of software features was even more expensive and didn’t sell its own hardware. SMI’s hardware also isn’t as easy to repair ourselves as some systems that use more off-the-shelf optics. Oh, and they rely on a physical USB “dongle” to license the software. None of these drawbacks outweighed SMI’s advantages in the long run.

Now, we have to let down all the other companies, write the grant application, and cross our fingers that the matching funds come through … which we won’t know until January.

I feel like our search is a random visual search that keeps getting narrowed down. If it were an eye-tracking study’s heat map, we’d be getting fewer and fewer focus points and longer and longer dwell times …

Visiting the University of Oregon to see Anthony Hornof’s lab, with tabletop systems in place, was enlightening. It was great to see and try out a couple of systems in person and to talk with someone who has used both about the pros and cons of each, from the optics to the software, and even about technical support and the inevitable question of what to do when something goes wrong. We noted that the support telephone numbers were mounted on the wall next to a telephone.

I’ve also spent some time watching online demos of a couple of software systems, one from a company that makes the hardware, too, and one from a company that just resells the hardware with its own software. I can’t really get a straight answer about the advantages of one software package over another for the same hardware, so that’s another puzzle to figure out, another compromise to make.

I think we’re zeroing in on what we want at this point, and it looks like, thanks to some matching funds from the university if we share our toys, we’ll be able to purchase both types of systems. We’ll get a fully mobile, glasses-mounted system as well as a more powerful but motion-limited stationary system. However, the motion-limited system will actually be less restricted than some systems that are tethered to a computer monitor. We’ve found a system that will detach from the monitor and allow the user to stand at a relatively fixed distance but look at an image virtually as far away as we like. That system records scene video much like the glasses-mounted systems do, but has better processing capability (basically, analysis speed) for the times when we are interested in how people look at things like images, kiosk text, or even movies. The bottom line, though, is that other systems, and even third-party software, still have some advantages, so we can’t really get our absolutely ideal setup in one package (or even from one company with two systems).

Another thing we’re having to think about is the massive amount of video storage space we’re going to need. The glasses-mounted system records to a laptop subnotebook at this point, but in the future it will record to a smaller device with an SD card. The SD card will pretty much max out at about 40 minutes of recording time, though. So we’ll need some of those, as well as external hard drives and lots of secure backup space for our data. Data sharing will prove an interesting logistical problem as well; previous projects in which we’ve tried to share video data haven’t yet led us to an optimal solution when the collaborating researchers are in Corvallis, Newport, and Pennsylvania. Maybe one of the current limitations of the forerunner glasses-based system will prove “helpful” in this regard. The recordings can currently only be analyzed with the software on the notebook that comes with the system, not on any old PC, so it will reside most of the time at Newport, and those of us who live elsewhere will just have to deal, or take the laptop with us. Hm, guess we ought to get to work setting out a plan for sharing the equipment that outlines not only physical equipment loan procedures but also data storage and analysis plans for when we might have to share these toys.
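For rough planning, here’s a quick back-of-the-envelope sketch in Python; the bitrate, session count, and number of backup copies are just my own guesses for illustration, not vendor specifications.

    # Back-of-envelope storage estimate for glasses-mounted scene video.
    # ASSUMPTIONS (not vendor specs): ~8 Mbit/s HD scene video,
    # 40-minute sessions (the SD card ceiling), 100 visitor sessions,
    # and two copies of everything (working copy plus secure backup).
    BITRATE_MBPS = 8
    SESSION_MINUTES = 40
    SESSIONS = 100
    COPIES = 2

    gb_per_session = BITRATE_MBPS * SESSION_MINUTES * 60 / 8 / 1000  # megabits -> gigabytes
    total_gb = gb_per_session * SESSIONS * COPIES

    print(f"~{gb_per_session:.1f} GB per 40-minute session")
    print(f"~{total_gb:.0f} GB for {SESSIONS} sessions, {COPIES} copies each")

Under those guesses, that works out to roughly 2.4 GB per 40-minute session and about half a terabyte overall, which at least puts the hard-drive shopping in perspective.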

So the deeper I go, the bigger my spreadsheet gets. I decided today it made sense to split it into four: 1) one with all of the information for each company, basically what I already have, 2) one with just company info, such as email, contact person, and warranty, 3) one with the information for the tabletop or user-seated systems, and 4) one with just the information for the glasses-based systems. For one thing, now I can still read the spreadsheets if I print them out in landscape orientation. However, since I want to keep the data in the single original spreadsheet as well, I am not sure whether I’m going to have to fill in two cells each time I get a new answer or whether I can link the data to fill in automatically. I’m pretty sure you can do this with Excel, but so far I’m not sure about Google Docs.
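In the meantime, one workaround would be to treat the single master spreadsheet as the only place I ever type answers and generate the narrower views from it automatically. Here’s a minimal sketch in Python, assuming the master sheet gets exported as a CSV file; the file names and column names are hypothetical, just for illustration.

    # Sketch: keep one master CSV and derive the narrower views from it,
    # so each new answer only ever gets typed once.
    # File names and column names below are made up for illustration.
    import csv

    COMPANY_COLS = ["Company", "Contact", "Email", "Warranty"]
    TABLETOP_COLS = ["Company", "Tabletop cost", "Head box size", "Software"]
    GLASSES_COLS = ["Company", "Glasses cost", "Scene video", "Software"]

    with open("eye_tracker_master.csv", newline="") as f:
        rows = list(csv.DictReader(f))

    def write_view(filename, columns):
        """Write a sub-spreadsheet containing only the requested columns."""
        with open(filename, "w", newline="") as out:
            writer = csv.DictWriter(out, fieldnames=columns, extrasaction="ignore")
            writer.writeheader()
            writer.writerows(rows)

    write_view("company_info.csv", COMPANY_COLS)
    write_view("tabletop_systems.csv", TABLETOP_COLS)
    write_view("glasses_systems.csv", GLASSES_COLS)

That way the sub-spreadsheets can be regenerated any time the master changes, rather than being edited by hand.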


I also keep finding new companies to contact – four more just today. At least I feel like I’m getting more of a handle on the technology. Too bad the phone calls always go a little differently and I never remember to ask all my questions (especially because our cordless phone in the office keeps running out of battery after about 30 minutes, cutting some of my conversations short!). Oh well, that’s what email follow-up is for. None of the companies seem to specialize in any particular area of eye tracking, and none have reports or papers to point to, other than snippets of testimonials. Their websites are all very sales-oriented.


In other news, I’m a little frustrated with some of the customer service. Some companies have been very slow to respond, and when they do, they don’t actually set an appointment as I requested, but just say “I’ll call you today.” My workday is such that I run around a lot, and I don’t want to be tethered to the phone. We don’t have voicemail, and these same companies are the ones who don’t answer straight off but ask for a phone number to call you back. Another company tried to tell me that visitors to the science center wouldn’t want their visit interrupted by helping us out with research, even though the calibration time on the glasses was less than a minute. I just had to laugh and tell him I was quite familiar with visitor refusals! In fact, I have a whole post on that to write up for the blog from data I collected this summer.


The good news is, I think we’ll be able to find a great solution, especially thanks to matching funds from the university if we share the equipment with other groups that want to use it (which will be an interesting experiment in and of itself). Also, surprisingly, there are some solutions for between $5K and $10K, as opposed to the $25K to $45K, including software, that some of the companies charge. I’m not entirely sure of the differences yet, but it’s nice to know you don’t have to have a *huge* grant to get started on something like this.

It seems product research is much like any research; the deeper you go, the more questions you come up with instead of a clear-cut answer. So far, we have tabletop systems, modular systems that go from tabletop use to working with fMRI scanners (with their gigantic magnets that might humble other camera setups), and head-mounted systems that can accommodate viewers looking close up as well as far away, but no single system that does all three. The fMRI-compatible systems seem to be the most expensive, and that functionality is definitely the one we need least.

Eye tracking on the sphere and with natural exhibit use seem to be the biggest stumbling blocks for the technology so far. The tabletop systems are designed to allow left-to-right head movement, and maybe a bit of front-to-back, but not variable depth, such as when one might be looking at the far wall of the 6-foot-wide wave tank vs. the wall one is leaning on. Plus, they don’t follow the user as she might move around the sphere. The glasses-mounted systems can go with the user to a point, but not all can do so without external tracking points to pre-define an area of interest. One promising (for our purposes) head-mounted system does video the scene the user is looking at as he moves. I haven’t figured out yet whether it would work well on a close-up screen, where we could track a user’s kiosk navigation, for example. Another big open question: just how long will visitors agree to wear these contraptions?

The other questions I am really trying to press on are warranty and tech support. I have been stuck before with fancy-schmancy shiny new technology that has a) relatively little thought behind the actual user experience, at least for our applications, b) tech support that is so new that they can hardly figure out what you’re trying to do with the product, or (worst) c) both problems. The good news with the eye-tracking systems is that most seem to be provided at this point by companies that have been around for a while and might even have gone through several rounds of increasingly better products. The bad news is, since we may be going out on a limb with some of our intended uses, I might end up where I was before, with people who don’t understand what I’m trying to do. This is the curse of trying to do something different and new with the equipment, rather than just applying it to the same use in another subject, I suppose. However, I have seen updated versions of those earlier products down the road, so I guess my struggles have not been for naught, at least for the users down the line from me. Maybe I’m just destined to be an alpha tester.

One big help has come, as usual, from the museum community, fellow innovators that you all are. A post to the ASTC list has yielded a couple of leads on how these products are being used, which products were chosen and why, and, most importantly, customer satisfaction with the products and the tech support. I myself have been called upon to advise on the use of the particular technologies I talked about above, and I definitely gave it to people straight.

One of the first things to do with a new grant is to buy equipment, yay! That means a bit of research on the available products, even if you’re looking for something as seemingly specialized as eye trackers. So this is the story of that process as we try to decide what to buy.

I got a head start when I was provided with a whole list of files compiled in Evernote. That meant I had to get up to speed on how to use Evernote, but that’s part of this process – learning to use new tools for collaboration as we go. Speaking of which, before we got too far into the process I made sure to set up a Dropbox folder for online file storage and sharing, and a Google Docs spreadsheet to track the information I got from each manufacturer.

The spreadsheet is pretty bare to start, just company, cost, and an “other features” category, but here again I got a bit of direction to help things take off. We made a connection with a professor at the University of Oregon who’s been studying these systems and even designing some cool uses for them – creating drawings and computerized music simply with the eyes. I digress, but Dr. Hornof has done some background work compiling documentation on a couple of the commercial systems. He gave us a couple of clues about commercial systems’ specs: they’re often limited by the size of the virtual “head box” (the volume within which the user’s head can move and still be tracked), and the software that comes with the systems might be limited in capability – so two more categories for the spreadsheet! Dr. Hornof has also invited us down to his lab at the U of O, so we’ll head down in a couple of weeks and check that out.