Laura brought this to our attention this morning. It’s a UC Davis project to create an augmented-reality sandbox that models topography and water flow using a Kinect and a projector. Be sure to check out the videos.
“The goal of this project was to develop a real-time integrated augmented reality system to physically create topography models which are then scanned into a computer in real time, and used as background for a variety of graphics effects and simulations. The final product is supposed to be self-contained to the point where it can be used as a hands-on exhibit in science museums with little supervision.”
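For the curious, the core trick is easy to sketch: an overhead Kinect reports a distance-to-sand value for every pixel, which flips into a heightmap that a fluid simulation can run over. Here’s a rough Python doodle of that idea — synthetic depth data stands in for a real Kinect frame, the function names and numbers are made up for illustration, and the water step is a dirt-simple diffusion pass rather than the proper fluid solver the actual exhibit runs.

```python
import numpy as np

def depth_to_heightmap(depth_mm, table_depth_mm=1000.0, scale=0.001):
    """Convert an overhead depth frame (mm from sensor) into a heightmap.

    Sand piled higher sits closer to the sensor, so height is the table's
    baseline distance minus the measured depth.
    """
    return np.clip((table_depth_mm - depth_mm) * scale, 0.0, None)

def diffuse_water(water, height, iterations=50, rate=0.25):
    """Crude water-flow step: let water level out toward neighboring cells
    whose combined surface (sand height + water) is lower.
    """
    for _ in range(iterations):
        surface = height + water
        for axis, shift in ((0, 1), (0, -1), (1, 1), (1, -1)):
            diff = surface - np.roll(surface, shift, axis=axis)
            flow = np.clip(diff, 0.0, None) * rate / 4.0
            flow = np.minimum(flow, water)       # can't move more water than a cell holds
            water -= flow
            water += np.roll(flow, -shift, axis=axis)  # deposit in the downhill neighbor
            surface = height + water
    return water

# Stand-in for a real depth frame (a Kinect delivers 640x480 depth values in mm).
depth = np.full((480, 640), 1000.0)
depth[200:280, 260:380] = 850.0                  # a mound of sand
height = depth_to_heightmap(depth)

water = np.zeros_like(height)
water[240, 320] += 5.0                           # "dump" water on top of the mound
water = diffuse_water(water, height)
print(f"water spread over {np.count_nonzero(water > 1e-3)} cells")
```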
In other words, this is the sandbox you wish you had as a kid. The visitor uses a hand gesture to dump water into the sandbox. That would be the omnipotent open-palm gesture used almost universally by children to signify shooting lightning/fireballs/missiles/flaming lightning missiles from their hands. It’s one of the first gestures I personally try when confronted with a gesture-recognition system.
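If you want to experiment with that gesture yourself, a naive version is just a depth threshold: anything the sensor sees hovering well above the previously scanned sand surface (a hand, say) turns the rain on over the cells beneath it. Another hedged sketch, reusing the synthetic-frame setup from above — the exhibit’s real gesture handling is surely more refined, and every name and number here is invented for the example.

```python
import numpy as np

def rain_mask(depth_mm, sand_baseline_mm, hover_mm=150.0, rain_amount=0.5):
    """Naive 'open palm makes it rain' detector.

    Anything measured much closer to the camera than the recorded sand
    baseline is treated as a hand, and water is added to the cells under it.
    """
    hovering = depth_mm < (sand_baseline_mm - hover_mm)
    return hovering * rain_amount

# Baseline scan of the bare sand, then a frame with a "hand" held over it.
baseline = np.full((480, 640), 1000.0)
frame = baseline.copy()
frame[100:160, 300:360] = 700.0      # something hovering ~300 mm above the sand

water_added = rain_mask(frame, baseline)
print(f"raining on {np.count_nonzero(water_added)} cells")
```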
An AR sandbox lends itself to stream-table activities, but what else could it do? With a few modifications and a palette swap, it could model volcanoes. Sand castles could become actual castles. Green plastic army men could re-enact historical battles, guided by projected arrows. What else can you think of?