I’ve been looking into technologies that help observe a free-choice learning experience from the learner’s perspective. My research interests center on interactions between learners and informal educators, so I wanted a technology that could record those interactions from the learner’s perspective while being as unobtrusive as possible.

Originally I was interested in using handheld technologies (such as smartphones) for this task. The idea was to have the learner wear a handheld on a lanyard that would automatically tag and record their interactions with informal educators via QR codes or augmented-reality markers. However, this proved more complicated than originally thought (and produced somewhat dodgy video recordings!), so we looked for a simpler approach.

I am currently exploring how Bluetooth headsets can help with this process. The “looxcie” is basically a Bluetooth headset equipped with a camera; it can be paired with a handheld device for recording or work independently. Harrison is expertly modeling the device in the photos. I am starting to pilot this technology in the visitor center and have spent some time with the volunteer interpreters at HMSC demonstrating how it might be used for my research. Maureen and Becca helped me produce a test video at the octopus tank (link below).


Our search feels like a random visual search that keeps getting narrowed down. If it were an eye-tracking study’s heat map, we’d be seeing fewer and fewer fixation points and longer and longer dwell times …

Visiting Anthony Hornof’s lab at the University of Oregon, which has tabletop systems in place, was enlightening. It was great to see and try out a couple of systems in person and to talk with someone who has used both about the pros and cons of each, from the optics to the software, and even about technical support and the inevitable question of what to do if something goes wrong. We noted that the support telephone numbers were mounted on the wall next to a telephone.

I’ve also spent some time watching online demos of a couple of software systems, one from a company that makes the hardware, too, and one from a company that just resells the hardware with its own software. I can’t really get a straight answer about the advantages of one software package over another for the same hardware, so that’s another puzzle to figure out, another compromise to make.

I think we’re zeroing in on what we want at this point, and it looks like, thanks to some matching funds from the university if we share our toys, we’ll be able to purchase both types of systems. We’ll get a fully mobile, glasses-mounted system as well as a more powerful but motion-limited stationary system. However, the motion-limited system will actually be less restricted than some systems that are tethered to a computer monitor. We’ve found a system that will detach from the monitor and allow the user to stand at a relatively fixed distance but look at an image virtually as far away as we like. That system records scene video much like the glasses-mounted systems do, but it has better processing capability (basically, analysis speed) for the times when we are interested in how people look at things like images, kiosk text, or even movies. The bottom line, though, is that there are still some advantages to other systems or even third-party software, so we can’t really get our absolutely ideal system in one package (or even from one company with two systems).

Another thing we’re having to think about is the massive amount of video storage space we’re going to need. The glasses-mounted system records to a subnotebook laptop at this point, but in the future it will record to a smaller device with an SD card. An SD card will pretty much max out at about 40 minutes of recording time, though, so we’ll need several of those, as well as external hard drives and lots of secure backup space for our data. Data sharing will prove an interesting logistical problem as well; on previous projects where we’ve tried to share video data, we haven’t yet found an optimal solution for collaborating researchers in Corvallis, Newport, and Pennsylvania. Maybe one of the current limitations of the forerunner glasses-based system will prove “helpful” in this regard: recordings can currently only be analyzed on the notebook that comes with the system, not on any old PC, so it will reside most of the time in Newport, and those of us who live elsewhere will just have to deal, or take the laptop with us. Hm, guess we ought to get to work setting out a plan for sharing the equipment that outlines not only physical equipment-loan procedures but also data storage and analysis plans for when we might have to share these toys.
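To get a feel for the numbers, here’s a rough back-of-the-envelope sketch in Python. The bitrate is a hypothetical assumption (actual figures depend on the recorder’s codec and settings), not a spec from any of the systems we looked at:

```python
# Rough storage estimate for scene video.
# NOTE: the bitrate below is an assumed, illustrative figure,
# not a manufacturer spec.
BITRATE_MBPS = 8                 # assumed scene-video bitrate, megabits/second
SECONDS_PER_SESSION = 40 * 60    # the ~40-minute SD-card limit mentioned above

def gigabytes_per_session(bitrate_mbps=BITRATE_MBPS,
                          seconds=SECONDS_PER_SESSION):
    """Convert a bitrate (megabits/s) and a duration into gigabytes."""
    megabits = bitrate_mbps * seconds
    return megabits / 8 / 1000   # megabits -> megabytes -> gigabytes

def sessions_per_drive(drive_gb, overhead=0.10):
    """How many sessions fit on a drive, reserving some filesystem overhead."""
    usable = drive_gb * (1 - overhead)
    return int(usable // gigabytes_per_session())
```

At these assumed settings a 40-minute session is only a couple of gigabytes, so the external drives matter less than the backup and sharing logistics do.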

Did you catch OSU’s Lynn Dierking on Science Friday today? If not, here’s the link.

What is the natural relationship between leisure and learning? Does a quantifiable difference exist at all? I find that my leisure activities always entail some kind of learning, and I think that’s the norm. The things we do for fun involve seeking experiences, furthering interests and relationships, developing skills and solving problems. Even when we sleep, our brains process information.

As a child—years before I moved to Oregon—I often played the much-beloved Oregon Trail computer game (primarily version 1.2). There was no separation between the “fun” parts of the game and the “educational” parts. The educational content formed the mechanics and narrative of the game, and it was great. I wasn’t “having fun and learning” (a perennial edutainment cliché) because even that phrase implies a natural distinction between the two.

The drive to learn is inherent in human development. Even when a child moans about his homework, it isn’t truly the knowledge he resists—though he himself may think it the case—but the context (or lack of it).

It isn’t enough to make learning fun. At the FCL Lab, and in OSU’s broader FCL Science and Math Education programs, we strive to remind our audiences and ourselves how much fun learning already is.

So the deeper I go, the bigger my spreadsheet gets. I decided today it made sense to split it into four: 1) one with all of the information for each company, basically what I already have; 2) one with just company info, such as email, contact person, and warranty; 3) one with the information for the tabletop or user-seated systems; and 4) one with just the information for the glasses-based systems. For one thing, now I can still read the spreadsheets if I print them out in landscape orientation. However, since I want to keep the data in the single original spreadsheet as well, I’m not sure yet whether I’ll have to fill in two cells each time I get a new answer or whether I can link the data to fill in automatically. I’m pretty sure you can do this with Excel, but so far I’m not sure about Google Docs.
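For what it’s worth, both tools can avoid the double entry with cross-sheet references; a minimal sketch, where the sheet name and cell address are hypothetical placeholders for whatever the split-out sheets end up being called:

```
=Companies!B2
=IMPORTRANGE("<spreadsheet url>", "Companies!B2")
```

The first form pulls a value from another sheet in the same file and works in both Excel and Google’s spreadsheets; the second is Google-only and links across entirely separate spreadsheet files.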

I also keep finding new companies to contact – four more just today. At least I feel like I’m getting more of a handle on the technology. Too bad the phone calls always go a little differently and I never remember to ask all my questions (especially because our cordless office phone keeps running out of battery after about 30 minutes, cutting some of my conversations short!). Oh well, that’s what email follow-up is for. None of the companies seem to specialize in any particular area of eye tracking, and none have reports or papers to point to, other than snippets of testimonials. Their websites are all very sales-oriented.

In other news, I’m a little frustrated with some of the customer service. Some companies have been very slow to respond, and when they do, they don’t actually set an appointment as I requested but just say “I’ll call you today.” My schedule and workday are such that I run around a lot, and I don’t want to be tethered to the phone. We don’t have voicemail, and these are the same companies that don’t answer straight off but ask for a phone number to call you back. Another company tried to tell me that visitors to the science center wouldn’t want their visit interrupted by helping us with research, even though the calibration time on the glasses was less than a minute. I just had to laugh and tell him I was quite familiar with visitor refusals! In fact, I have a whole post on that to write up for the blog from data I collected this summer.

The good news is, I think we’ll be able to find a great solution, especially thanks to matching funds from the university if we share the equipment with other groups that want to use it (which will be an interesting experiment in and of itself). Also, surprisingly, there are some solutions for between $5K and $10K, as opposed to the $25K–$45K, software included, that some of the companies quote. I’m not entirely sure of the differences yet, but it’s nice to know you don’t have to have a *huge* grant to get started on something like this.

The inflatable basking shark exhibits atypical feeding behavior.

Today was Homeschool Day in the Visitor Center. This event gives our education staff an opportunity to work with children and families from a wide variety of learning backgrounds. It’s also a lot of fun. This time around, visitors were greeted by a life-size, inflatable basking shark. As busy as it was, this Homeschool Day went more smoothly than the last, which was interrupted by a tsunami evacuation.

The new and improved Octocam is almost here!  We’ve been struggling with our underwater octopus webcam for some time, mostly due to the effects of seawater exposure.  We’re going ahead with our plan to install a camera outside the tank, and we’ve already ordered the camera.  That should mean just a couple of weeks until the Octocam is better than ever.

When the previous Octocam was in place, Ursula liked to sleep nestled between the tank wall and the back of the camera.  She held the flexible hose containing the camera’s network and power cables against her forehead like a teddy bear—sometimes pulling the camera slightly out of position in the process.  This was great for visitors, but not so great for our viewers at home.  The new camera will have a pan-tilt-zoom function, so we should be able to see Ursula in some out-of-the-way places.  Stay tuned!