Regarding this story from the Times:

How do we emphasize the importance of rigorous, often tedious work while acknowledging the potential for great achievement? Are we setting our children up to fail with unrealistic expectations, or loading down our college freshmen with theory divorced from its natural context?

“Studies have found that roughly 40 percent of students planning engineering and science majors end up switching to other subjects or failing to get any degree. That increases to as much as 60 percent when pre-medical students, who typically have the strongest SAT scores and high school science preparation, are included, according to new data from the University of California at Los Angeles. That is twice the combined attrition rate of all other majors.”

The design process for our climate change gallery is now underway. In addition to presenting current science, we’re designing the gallery to address the values and cultural beliefs that inform the discourse on this topic. One of the main concepts we’ll be drawing on is the “Six Americas.”

We want the climate change gallery to be as participatory as possible, allowing visitors to provide feedback and personal reflections on the content. Most of our exhibits deal primarily or exclusively with knowledge. This gallery will focus on personal beliefs, and how these influence the ways people learn. It should be an interesting project.

In the spirit of Thanksgiving, here’s a little piece from the Science and Entertainment Exchange about the science of cooking a turkey. I’m thankful for it.

Have a happy Thanksgiving, everyone!

One of the core technologies planned for the FCL Lab is a facial recognition system. Using a network of cameras placed throughout the facility, the facial recognition system will log research participants as they enter the facility and track their progress as they interact with exhibits, educators and family groups.

In conversations about our plans for the lab, we find that mention of this technology invariably brings up strong feelings, curiosity, and a tinge of ‘Big Brother’ fear.

A couple of interesting notes on the technology: The system looks at a person’s face and measures the distance between certain key points (cheekbone to cheekbone, nose to mouth, eye to eye, chin to brow, etc.). Most systems record 20-50 such relationships, store them in a database and assign a ‘participant ID’ to that list of numbers. Remarkably, those measurements are distinctive enough that the system almost never mistakes one person for another. The system does not record an image of the participant – no photographic or video images are kept. All of this raises interesting questions about personal identity, something we are hoping to explore as we move deeper into the project.
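For the technically curious, here is a minimal sketch of how such a distance signature might be computed. The landmark names and coordinates below are invented for illustration; a real system would extract the key points from camera frames rather than hard-coding them.

```python
# A minimal sketch of the distance-signature idea described above.
# Landmark names and values are hypothetical placeholders.
from itertools import combinations
import math

# (x, y) positions of a few facial key points, in pixels
landmarks = {
    "left_cheekbone": (112, 140),
    "right_cheekbone": (188, 141),
    "nose_tip": (150, 170),
    "mouth_center": (150, 205),
    "left_eye": (125, 120),
    "right_eye": (175, 120),
    "chin": (150, 250),
    "brow_center": (150, 95),
}

def distance(a, b):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Pairwise distances between all key points form the signature.
signature = [
    distance(landmarks[p], landmarks[q])
    for p, q in combinations(sorted(landmarks), 2)
]

# Normalize by one reference distance so the signature is not
# sensitive to how close the participant stands to the camera.
reference = distance(landmarks["left_eye"], landmarks["right_eye"])
signature = [d / reference for d in signature]

# Store only the list of ratios under an arbitrary participant ID;
# no image is kept, just the numbers.
participants = {"participant_0001": signature}
```

Eight key points yield 28 pairwise ratios here; a system recording 20-50 relationships is working in the same ballpark.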

One of the experimental sub-systems we will be working with analyzes the recorded face data and can derive some rudimentary reporting of facial expression from changes in the measurements (corners of the mouth, raised eyebrows). As our research is ongoing, we will be looking at the very subjective nature of facial expressions in a learning environment. Does smiling or frowning always mean happy or sad, learning or not?
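As a toy illustration of the idea (and only that), one could imagine rules along these lines. The thresholds and landmark names are made up for the example and are not how any production system works:

```python
# A toy illustration of expression inference from changes in the
# measurements; thresholds and rules are invented for the example.
def classify_expression(baseline, current):
    """Compare two landmark snapshots and report a crude expression label."""
    # In image coordinates y grows downward, so corners of the mouth
    # moving up means a smaller y value than the baseline.
    mouth_delta = baseline["mouth_corner_y"] - current["mouth_corner_y"]
    brow_delta = baseline["brow_y"] - current["brow_y"]

    if mouth_delta > 3:       # corners moved up by a few pixels
        return "possible smile"
    if mouth_delta < -3:      # corners moved down
        return "possible frown"
    if brow_delta > 3:        # eyebrows raised
        return "possible surprise"
    return "neutral / unknown"

baseline = {"mouth_corner_y": 200, "brow_y": 95}
current = {"mouth_corner_y": 195, "brow_y": 94}
print(classify_expression(baseline, current))  # -> "possible smile"
```

The hard research question is in the labels, not the geometry: whether “possible smile” means anything reliable about learning is exactly what we will be studying.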

This technology has been around for a while – it all started in the mid-’90s with the invention of face detection software that swiftly found its way into digital camera systems. Today we all take for granted the green box around our families’ faces, and the little double beep signaling a perfect lock.
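For anyone who wants to play with that green box at home, here is a rough approximation using OpenCV’s freely available Haar-cascade detector, a later and well-known method in the same face-detection lineage. This is our own toy sketch, not any camera vendor’s code:

```python
# The "green box" effect, approximated with OpenCV's classic
# Haar-cascade face detector. Requires the opencv-python package
# and a webcam; press q to quit.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

capture = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = capture.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, width, height) box around a face.
    for (x, y, w, h) in cascade.detectMultiScale(gray, 1.3, 5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
```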

In the late ’90s my company was working with General Electric promoting their revolutionary product, Video IQ. It went beyond just recognizing a face: it would recognize a human form, and continue to recognize that same human as they moved from camera to camera. This was a pivotal surveillance product. It gave autonomous surveillance systems a ‘brain’ – the ability to send an alert to security personnel when an unauthorized human form was recognized. We were blown away. Wonderful and vaguely creepy at the same time. I never would have imagined that more than 10 years later I’d be working to implement the next generation of this technology in a very different environment. Oddly enough, the nature of the discussion hasn’t changed much.

A big job this month will be to identify a software platform for what will become our core facial recognition system – one that, hopefully, will continue to serve us for the next five years. Our original choice in our proposal to the NSF was PittPatt, a very promising platform that could process a dozen faces at a time in a fraction of a second, had a great API and was mostly affordable. When we were awarded the grant, we went to purchase the license and were dismayed to find that Google had bought the company and pulled the IP off the market. It was small consolation that we evidently shared good taste and insight with the likes of Google.

There are a dozen companies on the market with products that will serve our needs in the lab. We are slowly working our way down the list, ferreting out the strengths and weaknesses. The process is slightly laborious because of the cultural chasm between our needs and those of their standard clientele. The target market for this technology has become Homeland Security, the FBI and other law enforcement agencies. Their mindset and vocabulary are a world apart from our goal of understanding how people learn. We work with participants; they work with ‘target subjects.’ We look at gathering points; they look at ‘choke points.’ Some of the phone calls I’ve made have ended up in some interesting and positive conversations. An upside to the difference between homeland security and university education research is a vast difference in the tiered license structure.

We’ll keep reporting on our progress with these systems. We’ll also have more on our participant opt-out strategy: an interesting cross between the face detection technology and the new augmented reality systems.

 

Mark Farley, Project Manager

Mark Farley officially joined the Oregon Sea Grant FCL Lab and Visitor Center team, accepting the position of project and technology development manager for the Lab. Mark had been working as part of the exhibit development team here at the Visitor Center as a contractor to Oregon Sea Grant, and was part of the grant-writing team responsible for bringing in the NSF award.

Mark came to us from Pathworks, Inc., where he served as VP and operations manager in the development of interactive media, custom software, and marketing campaigns for public and private clientele.

“The work Oregon Sea Grant is supporting through their Free-Choice Learning Initiative here at the Hatfield Marine Science Center Visitor Center, and the creation of the free-choice learning research lab is some of the most exciting and professionally satisfying work I’ve ever participated in. The real delight is getting to work with such an exceptional team of creative people.

My first task is to get the project milestones anchored and start working on the technology development plan. No small task, considering how many unique technology tools we will be developing for the lab, not to mention the three new exhibits which will serve as focal points for our research. We’ve got some remarkable industry partners, the support of OSU’s Free-Choice Learning program in the College of Education, and OSU’s Office of Research to ensure we fulfill the vision of creating the first national free-choice learning research facility. Exciting times ahead!”

Exciting indeed. Welcome aboard, Mark!

 

Well, we’ve decided. We’re going with SMI’s systems. They offer both a glasses-based system and a relatively portable tabletop system. The tabletop system can be used not only with traditional computer kiosks on a table but also with larger screens mounted on a wall, or even projection screens in a theater. The glasses offer HD-resolution “scene video,” that is, a recording of what the subject is looking at over the course of the trial as their field of vision (likely) changes. We got an online walk-through of their powerful software and could instantly see all the statistical methods we could use. After comparing it to the systems we saw in Dr. Hornof’s lab, we judged this the clear winner.

Are they a perfect fit? Well, no. They seem to have a relatively small sales force, which made scheduling a bit of a headache and resulted in a couple of errors in quotes. Those got resolved, but it makes us wonder how big their technical and support staff is, should we have issues with setup. That was one of our major concerns with another company with a great-looking product, and, if you recall, is one of my personal concerns with fancy new technology. SMI has been around for 20 years, however, and other signs point to them being well established. They also don’t include all the software features we would love to have in their base package, so they are a bit more expensive overall. But the other company offering a lot of software features was even more expensive and didn’t sell its own hardware. SMI’s hardware also isn’t as easy to repair ourselves as some systems that use more off-the-shelf optics. Oh, and they rely on a physical USB “dongle” for the software license. None of these drawbacks outweighed SMI’s advantages in the long run.

Now, we have to let down all the other companies, write the grant application, and cross our fingers that the matching funds come through … which we won’t know until January.

Science!

If you look carefully at the above photo, you can see Ursula sulking in the background. When I put my hand into the tank to check the new camera’s frame rate and motion blur, she turned a sort of red-on-white paisley—an unfamiliar pattern that I interpreted as a statement of disapproval inexpressible in any vertebrate language.

Our improvised test housing was a wooden box of paper towels from the touch pool, with the camera fixed in place by a wad of towels and cloth diapers. For further structural support, we rested the camera on a jar of formalin-preserved octopus eggs inside the box. The final installation will have a rather more stable and elegant housing. Prototyping is a fantastically organic and immediate process.

We’ve been struggling with a potential replacement Octocam for the past week: a neat, compact security camera that strongly resembles HAL from 2001. We took it into the Visitor Center, plugged it in, typed in the IP address, and…

“I’m sorry, Dave. I’m afraid I can’t do that.”

We got nothing. We tried a different Ethernet cable. We tried using another port. We tried reconfiguring the network. We tried installing new drivers. After several frustrating days of experimentation, I unplugged the AC adapter to see if one more power cycle would end our troubles. Before I could plug the cord back in, Mark stopped me. The network light was blinking! The camera was happily negotiating a connection with the server on Ethernet power alone.

Apparently, plugging in the AC adapter turned off the Ethernet power, disabling the Ethernet connection in the process. Plugging the camera into the wall, in other words, caused it to stop working. Perhaps that most insulting of tech support questions (“Is your device plugged in?”) doesn’t have as obvious a correct answer as it seems.
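Days like those tempt us to script the first sanity check. Here is a minimal reachability test, with a placeholder address and port standing in for any real camera’s settings:

```python
# A quick reachability check for a network camera; the address and
# port are placeholders for whatever the device is configured to use.
import socket

CAMERA_IP = "192.168.1.100"  # hypothetical address
PORT = 80                    # many IP cameras serve a config page here

try:
    with socket.create_connection((CAMERA_IP, PORT), timeout=3):
        print("Camera is answering on the network.")
except OSError as err:
    print(f"No response from camera: {err}")
```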

Once the camera started feeding to the network, we discovered a different problem: the frame rate just wasn’t high enough for our purposes. This model would make a fantastic security camera, but a so-so Octocam. As much as we dislike prolonging our time without a tank-level Octocam, we can’t justify trading one problem for another.
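For the curious, here is roughly how one can time a stream’s effective frame rate. The stream URL is a placeholder; real cameras vary in protocol and path:

```python
# Measure a network camera's effective frame rate by counting
# decoded frames over a fixed sampling window.
import time
import cv2

STREAM_URL = "rtsp://192.168.1.100/stream"  # hypothetical camera address

capture = cv2.VideoCapture(STREAM_URL)
if not capture.isOpened():
    raise SystemExit("Could not open the stream.")

frames, start = 0, time.time()
while time.time() - start < 10.0:  # sample for ten seconds
    ok, _ = capture.read()
    if not ok:
        break
    frames += 1
capture.release()

elapsed = time.time() - start
print(f"{frames} frames in {elapsed:.1f} s -> {frames / elapsed:.1f} fps")
```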

We’ll have another model in soon, and hopefully this one will give us what we’ve all been waiting for.