One of the key techniques in museum and free-choice learning evaluation and research is visitor or user observation by staff. When we try to observe visitors in a “natural” state to figure out how they really behave, we call that observation unobtrusive. The reality is that we are rarely so discreet. How do you convince regular visitors that the staff member wearing a uniform and scribbling on a clipboard near enough to eavesdrop on them is not actually stalking them? You don’t, that is, until you turn to technological solutions, as our new lab will be doing.

We’ve spent a lot of hours dreaming up how this new system is going to work and trying to keep our observations of speech, actions, and more esoteric things like interests and goals hidden from the visitor’s eye. Whether we succeed will be the subject of many new sets of evaluations with our three new exhibits and more. Laura’s looxcie-based research will be one of these.

Over the years we’ve gathered lots of data about what people tend to do when they either truly don’t know they’re being watched or don’t care. We’ve also gathered a sense of how visitors react to the idea of participating in our studies, from flat-out asking us what we’re trying to figure out, to just giving us the hairy eyeball and perhaps skipping the exhibit we’re working on. Many of these reactions become frustrations for the staff, as we must throw out that subject or start our randomization count over. So as we go through the design process, I’m going to share some of my observations of myself gathering visitor data through observations and surveys. These two collection tools are ones we hope to automate, to the benefit of both the visitors who feel uncomfortable under obvious scrutiny and the researchers who suffer the pangs of rejection.

Have we made the right decision? I am still getting queries from companies that I contacted over the course of the process. One company’s lead contact got in touch after weeks of silence. I told him that the person he’d referred me to had ignored my request to set up an appointment to talk. He then tried to tell me he had referred me to a different person than the one I named! Some of these businesses seem very strange and disorganized. We definitely based some of our decisions on how solid the companies seemed, judging especially by their web presence and their general conduct.

In any case, another wrinkle may be in store for us. We heard from Anthony Hornof at the University of Oregon that the Oregon University System (of which both our schools are a part) required him to basically decide on the specifications for a system and “put it up on the web for bid” for a month, even though he had already chosen the system and the vendor that best suited his project. Luckily, no one responded to the public bid and he was able to go with what he wanted. Maybe the fact that we’re specifying the system in our matching funds proposal will help us out, but if it doesn’t, we could be in for another delay in purchasing. As it is, once we get the go-ahead for funding, it takes about a month for delivery of the system. So we’re currently looking at March delivery.

The good news is, the staff here are eager to volunteer to test out the equipment when we get it!

Well, we’ve decided. We’re going with SMI systems. They offer both a glasses-based system and a relatively portable tabletop system. The tabletop system can be used not only with traditional computer kiosks on a table, but also with larger screens mounted on a wall, or even projection screens in a theater. The glasses offer HD-resolution “scene video,” that is, a recording of what the subject is looking at over the course of the trial as their field of vision (likely) changes. We got an online walk-through of their powerful software and could instantly see all the statistical methods we could use. After comparing with the systems we saw in Dr. Hornof’s lab, this was the clear winner for us.

Are they a perfect fit? Well, no. They seem to have a relatively small sales force, which made scheduling a bit of a headache and resulted in a couple of errors in quotes. Those got resolved, but it makes us wonder a bit about how big their technical and support staff is, should we have issues with setup. That was one of our major concerns with another company with a great-looking product, and, if you recall, is one of my personal concerns with fancy new technology. SMI has been around for 20 years, however, and other signs point to them being well established. They also don’t offer all the features we would love to have in their base software package, so they are a bit more expensive overall. But the other company offering a lot of software features was even more expensive and didn’t sell its own hardware. SMI’s hardware also isn’t as easy to repair ourselves as some systems that use more off-the-shelf optics. Oh, and they rely on a physical USB “dongle” for the software license. None of these drawbacks outweighed their advantages in the long run.

Now, we have to let down all the other companies, write the grant application, and cross our fingers that the matching funds come through … which we won’t know until January.

I’ve been looking into technologies that help observe a free-choice learning experience from the learner perspective. My research interests center on interactions between learners and informal educators, so I wanted a technology that helped record interactions from the learner perspective but intruded on those interactions as little as possible.

Originally I was interested in using handheld technologies (such as smartphones) for this task. The idea was to have the learner wear a handheld device on a lanyard, which would automatically tag and record their interactions with informal educators via QR codes or augmented reality markers. However, this proved more complicated than originally thought (and produced somewhat dodgy video recordings!), so we looked for a simpler approach.
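
For the curious, here is a minimal sketch of the sort of automatic tagging we had in mind: scan a recorded video for QR codes worn by educators and log when each one appears. It assumes OpenCV is available; the file name and tag contents are hypothetical, and a real pipeline would also have to cope with motion blur and missed detections.

```python
# Toy sketch: scan recorded video for QR codes worn by informal educators
# and log when each tag appears. Assumes OpenCV (pip install opencv-python);
# the file name and tag contents are hypothetical.
import cv2

def log_qr_sightings(video_path):
    """Return a list of (seconds, tag_text) for frames containing a QR code."""
    capture = cv2.VideoCapture(video_path)
    fps = capture.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if FPS is unreported
    detector = cv2.QRCodeDetector()
    sightings = []
    frame_index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        text, _points, _ = detector.detectAndDecode(frame)
        if text:  # empty string means no decodable code in this frame
            sightings.append((frame_index / fps, text))
        frame_index += 1
    capture.release()
    return sightings

if __name__ == "__main__":
    for seconds, tag in log_qr_sightings("octopus_tank_pilot.mp4"):
        print(f"{seconds:7.2f}s  educator tag: {tag}")
```

Even this simple loop hints at the trade-off: the tagging itself is easy, but it only works when the code is large, well lit, and facing the camera, which matches our experience with those dodgy recordings.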

I am currently exploring how Bluetooth headsets can help with this process. The “looxcie” is basically a Bluetooth headset equipped with a camera, which can be paired with a handheld device for recording or can work independently. Harrison is expertly modeling this device in the photos. I am starting to pilot this technology in the visitor center, and have spent some time with the volunteer interpreters at HMSC demonstrating how it might be used for my research. Maureen and Becca helped me produce a test video at the octopus tank (link below).

Our search feels like a random visual search that keeps getting narrowed down. If it were an eye-tracking study’s heat map, we’d be seeing fewer and fewer fixation points and longer and longer dwell times …

Visiting the University of Oregon to see Anthony Hornof’s lab, with tabletop systems in place, was enlightening. It was great to see and try out a couple of systems in person and to talk with someone who has used both about the pros and cons of each, from the optics to the software, and even about technical support and the inevitable question of what to do when something goes wrong. We noted that the support telephone numbers were mounted on the wall next to a telephone.

I’ve also spent some time seeing a couple of software systems in online demos, one from a company that makes the hardware too, and one from a company that just resells the hardware with its own software. I can’t really get a straight answer about the advantages of one software package over another for the same hardware, so that’s another puzzle to figure out, another compromise to make.

I think we’re zeroing in on what we want at this point, and it looks like, thanks to some matching funds from the university if we share our toys, we’ll be able to purchase both types of systems. We’ll get a fully mobile, glasses-mounted system as well as a more powerful but motion-limited stationary system. However, the motion-limited system will actually be less restricted than some systems that are tethered to a computer monitor. We’ve found a system that will detach from the monitor and allow the user to stand at a relatively fixed distance but look at an image virtually as far away as we like. That system records scene video much like the glasses-mounted systems do, but has better processing capability (basically, analysis speed) for the times when we are interested in how people look at things like images, kiosk text, or even movies. The bottom line, though, is that there are still some advantages to other systems or even third-party software, so we can’t really get our absolutely ideal system in one package (or even from one company with two systems).

Another thing we’re having to think about is the massive amount of video storage space we’re going to need. The glasses-mounted system records to a subnotebook laptop at this point, but in the future will record to a smaller device with an SD card. The SD card will pretty much max out at about 40 minutes of recording time, though. So we’ll need several of those, as well as external hard drives and lots of secure backup space for our data. Data sharing will prove an interesting logistical problem as well; previous projects in which we’ve tried to share video data have not yet led us to an optimal solution when collaborating researchers are in Corvallis, Newport, and Pennsylvania. Maybe one of the current limitations of the forerunner glasses-based system will prove “helpful” in this regard: recordings can currently be analyzed only with the software on the notebook that comes with the system, not on any old PC, so it will reside most of the time at Newport, and those of us who live elsewhere will just have to deal, or take the laptop with us. Hm, guess we ought to get to work setting out a plan for sharing the equipment that outlines not only physical equipment loan procedures but also data storage and analysis plans for when we might have to share these toys.
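
To get a rough sense of scale, here’s a back-of-the-envelope estimate. The 5 Mbit/s bitrate is my assumption for compressed HD scene video, not a vendor spec, and the session counts are hypothetical:

```python
# Back-of-the-envelope storage estimate for scene video.
# The bitrate is an assumed figure for compressed HD video,
# not a vendor specification; session counts are hypothetical.
BITRATE_MBIT_S = 5      # assumed compressed HD scene video
SESSION_MIN = 40        # about one SD card's worth, per the vendor
SESSIONS_PER_DAY = 10   # hypothetical pilot schedule

mb_per_session = BITRATE_MBIT_S * SESSION_MIN * 60 / 8   # megabits -> megabytes
gb_per_day = mb_per_session * SESSIONS_PER_DAY / 1000

print(f"{mb_per_session:,.0f} MB per {SESSION_MIN}-minute session")
print(f"{gb_per_day:.1f} GB per day; ~{gb_per_day * 250 / 1000:.2f} TB over 250 collection days")
```

Even at a modest assumed bitrate, that works out to roughly 1.5 GB per session and terabytes over a year of regular collection, which is why the external drives and backup plan need to be in the budget from the start.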

So the deeper I go, the bigger my spreadsheet gets. I decided today it made sense to split it into four: 1) one with all of the information for each company, basically what I already have; 2) one with just company info, such as email, contact person, and warranty; 3) one with the information for the tabletop or user-seated systems; and 4) one with just the information for the glasses-based systems. For one thing, now I can still read the spreadsheets if I print them out in landscape orientation. However, since I want to keep the data in the single original spreadsheet as well, I’m not sure whether I’ll have to fill in two boxes each time I get a new answer or whether I can link the data to fill in automatically. I’m pretty sure you can do this in Excel, but so far I’m not sure about Google Docs.
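
Failing a built-in link, a small script could treat the master sheet as the single source of truth and regenerate the split views on demand. A minimal sketch, assuming the master sheet is exported as master.csv; the column and file names here are hypothetical stand-ins for mine:

```python
# Regenerate the derived views from the master spreadsheet so each answer
# is typed in exactly once. Column and file names are hypothetical.
import csv

VIEWS = {
    "company_info.csv": ["Company", "Contact", "Email", "Warranty"],
    "tabletop_systems.csv": ["Company", "Tabletop model", "Tabletop price"],
    "glasses_systems.csv": ["Company", "Glasses model", "Glasses price"],
}

with open("master.csv", newline="") as f:
    rows = list(csv.DictReader(f))

for filename, columns in VIEWS.items():
    with open(filename, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=columns)
        writer.writeheader()
        for row in rows:
            # Columns missing from the master become blanks, not errors.
            writer.writerow({col: row.get(col, "") for col in columns})
```

The design point is just that data gets entered in one place; the derived files are disposable and can be rebuilt whenever the master changes.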

I also keep finding new companies to contact – four more just today. At least I feel like I’m getting more of a handle on the technology. Too bad the phone calls always go a little differently and I never remember to ask all my questions (especially because our cordless office phone keeps running out of battery after about 30 minutes, cutting some of my conversations short!). Oh well, that’s what email follow-up is for. None of the companies seem to specialize in any particular area of eye tracking, and none have reports or papers to point to, other than snippets of testimonials. Their websites are all very sales-oriented.

In other news, I’m a little frustrated with some of the customer service. Some companies have been very slow to respond, and when they do, they don’t actually set an appointment as I requested, but just say “I’ll call you today.” My workday is such that I run around a lot, and I don’t want to be tethered to the phone. We don’t have voicemail, and these are the same companies that don’t answer straight off but ask for a phone number to call you back. Another company tried to tell me that visitors to the science center wouldn’t want their visit interrupted to help us do research, even though the calibration time on the glasses is less than a minute. I just had to laugh and tell him I was quite familiar with visitor refusals! In fact, I have a whole post on that to write up for the blog from data I collected this summer.

The good news is, I think we’ll be able to find a great solution, especially thanks to matching funds from the university if we share the equipment with other groups that want to use it (which will be an interesting experiment in and of itself). Also, surprisingly, there are some solutions for between $5K and $10K, as opposed to the $25–45K, software included, that some of the companies quote. I’m not entirely sure of the differences yet, but it’s nice to know you don’t have to have a *huge* grant to get started on something like this.