After long months of planning and designing, the wave tanks have arrived! The deep water and shore tanks are both equipped with manual wave makers – allowing visitors the opportunity to get a feel for how stroke and frequency immediately affect the shape of a wave.

Waves in the large, dual-flume tank will be controlled by computer kiosks that drive two very powerful motors capable of creating precise wave forms, multiple sine forms, and a very impressive tsunami wave. Graduate students and faculty have been scrambling all week to prepare activities for each tank. Some of the activities will include:

  • Shore tank – revetment and erosion strategies: Lincoln Logs, gravel, plants, and Lego sea walls.
  • Deep water tank – understanding wave physics: ping pong balls, neutrally buoyant rubber ducks, food dyes, and wave energy buoys.
  • Flume tank – tsunami-resistant structures: Lego and Lincoln Log building challenges.

We are setting everything up just in time for the summer rush. We will have three interns manning the tank areas and working all the activities with the public – working out what is successful and what needs modification. This whole tank area is one of the largest prototypes the Visitor Center has ever deployed. There are lots of questions to answer, and design modifications to make in the fall.

Peeling the protective paper off the plexi revealed transparent layers of sparkling, clear wonder – we owe a special thanks to James Steele of Envision Acrylics for some beautiful craftsmanship.

While we’ve been working on the tanks, our visitors have been very curious and full of questions. We opened the deep water tank for use yesterday and watched, with a mix of delight and horror, the variety of wave-making strategies 10-year-old boys chose to employ.


One of the core technologies planned for the FCL Lab is a facial recognition system. Using a network of cameras placed throughout the facility, the facial recognition system will log research participants as they enter the facility and track their progress as they interact with exhibits, educators and family groups.

In conversations about our plans for the lab, we find that mention of this technology always brings up strong feelings, curiosity, and a tinge of ‘Big Brother’ fear.

A couple of interesting notes on the technology: the system looks at a person’s face and measures the distance between certain key points (cheekbone to cheekbone, nose to mouth, eye to eye, chin to brow, etc.). Most systems record 20-50 such relationships, store them in a database, and assign a ‘participant ID’ to that list of numbers. Amazingly enough, those numbers are distinctive enough that the system almost never mistakes one person for another. The system does not record an image of the participant – no photographic or video images are stored. All of this brings up interesting questions about personal identity – something we hope to explore as we move deeper into the project.
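For readers curious what “a list of numbers standing in for a face” looks like in practice, here is a minimal sketch of the general idea – a handful of pairwise key-point distances stored under a participant ID, matched by nearest neighbor. The landmark names, tolerance, and matching rule are illustrative assumptions on my part, not the workings of any vendor’s actual product.

```python
import math

def feature_vector(landmarks):
    """Turn named facial key points into a list of pairwise distances.
    Real systems derive 20-50 such relationships; we use four here."""
    pairs = [("left_cheek", "right_cheek"),
             ("nose", "mouth"),
             ("left_eye", "right_eye"),
             ("chin", "brow")]
    def dist(a, b):
        (ax, ay), (bx, by) = landmarks[a], landmarks[b]
        return math.hypot(ax - bx, ay - by)
    return [dist(a, b) for a, b in pairs]

# A tiny in-memory "database": participant ID -> stored vector.
# Note that only numbers are kept -- no image of the face.
database = {}

def enroll(pid, landmarks):
    database[pid] = feature_vector(landmarks)

def identify(landmarks, tolerance=2.0):
    """Return the closest enrolled participant ID, or None if no
    stored vector is within tolerance (tolerance is made up here)."""
    v = feature_vector(landmarks)
    best_pid, best_d = None, float("inf")
    for pid, stored in database.items():
        d = math.dist(v, stored)
        if d < best_d:
            best_pid, best_d = pid, d
    return best_pid if best_d <= tolerance else None
```

The same idea scales up: with 20-50 distances instead of four, the vectors become distinctive enough that two different people almost never land within tolerance of each other.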

One of the experimental sub-systems we will be working with analyzes the recorded face data and can derive some rudimentary reporting of facial expression from changes in the measurements (corners of the mouth, raise of the eyebrows). As our research proceeds, we will be looking at the very subjective nature of facial expressions in a learning environment. Does smiling or frowning always mean happy or sad, learning or not?
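To make the “change in measurements” idea concrete, here is a toy sketch of expression hinting: compare a participant’s current measurements against their own neutral baseline and apply simple thresholds. The measurement names and threshold values are entirely hypothetical, and the question marks in the labels are deliberate – as noted above, the mapping from geometry to feeling is the subjective part.

```python
def expression_hint(baseline, current, threshold=0.15):
    """Very rough expression label from changes relative to a neutral
    baseline. Keys and thresholds are illustrative assumptions, not a
    real vendor API. Measurements are fractions of face height, so the
    comparison is scale-independent."""
    mouth_delta = current["mouth_corner_height"] - baseline["mouth_corner_height"]
    brow_delta = current["brow_height"] - baseline["brow_height"]
    if mouth_delta > threshold:
        return "smile?"       # mouth corners raised
    if mouth_delta < -threshold:
        return "frown?"       # mouth corners lowered
    if brow_delta > threshold:
        return "surprise?"    # eyebrows raised, mouth unchanged
    return "neutral"
```

Even in this cartoon version, the hard research question is visible: the function can only report geometry (“smile?”), never what the smile means for learning.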

This technology has been around for a while – it all started in the mid-’90s with the invention of face detection software that swiftly found its way into digital camera systems. Today we all take for granted the green box around our families’ faces, and the little double beep signaling a perfect lock.

In the late ’90s my company was working with General Electric promoting their revolutionary product, Video IQ. It went beyond just recognizing a face: it would recognize a human form, and continue to recognize that same human as they moved from camera to camera. This was a pivotal surveillance product. It gave autonomous surveillance systems a ‘brain’: the ability to send an alert to security personnel when an unauthorized human form was recognized. We were blown away. Wonderful and vaguely creepy at the same time. I never would have imagined that more than 10 years later I’d be working to implement the next generation of this technology in a very different environment. Oddly enough, the nature of the discussion hasn’t changed much.

A big job this month will be to identify a software platform for what will become our core facial recognition system – one that, hopefully, will continue to serve us for the next five years. Our original choice in our proposal to the NSF was PittPatt, a very promising platform that could process a dozen faces at a time in a fraction of a second, had a great API, and was mostly affordable. When we were awarded the grant, we went to purchase the license and were dismayed to find that Google had bought the company and pulled the IP off the market. It was small consolation that we obviously shared good taste and insight with the likes of Google.

There are a dozen companies on the market with products that will serve our needs in the lab. We are slowly working our way down the list, ferreting out the strengths and weaknesses. The process is slightly laborious because of the cultural chasm between our needs and those of their standard clientèle. The target market for this technology has become Homeland Security, the FBI, and other law enforcement agencies. Their mindset and vocabulary are a world apart from our goal of understanding how people learn. We work with participants; they work with ‘target subjects.’ We look at gathering points; they look at ‘choke points.’ Some of the phone calls I’ve made have led to interesting and positive conversations. An upside to the difference between homeland security and university education research is a vast difference in the tiered license structure.

We’ll keep reporting on our progress with these systems. We’ll also have more on our participant opt-out strategy: an interesting cross between face detection technology and the new augmented reality systems.