Here’s a great piece by Nina Simon regarding adult participation in interactive experiences.  We carry certain cultural assumptions about what adults do in a museum or science center.  These assumptions influence our behavior even when they don’t reflect our motivations and interests.

“The common museum knowledge on this issue is that adults are timid, that we have lost some of the wonder, impulsiveness, and active creativity of childhood days. But I don’t think that theory holds up. Major research studies by the NEA and others demonstrate that adults well into their 60s are highly motivated to participate actively with cultural experiences. They’re playing instruments, painting pictures, and cooking gourmet meals in record numbers. They’re going to trivia night. They’re playing video games. It’s possible–likely even–that today’s adults are more motivated by interactive experiences than generations past.”

On a somewhat-related note, we spent some time in the Visitor Center this morning to work on the placement of the wave tanks.  Large sheets of butcher paper stood in for the tanks, and I borrowed one of our wheelchairs to get an initial feel for the accessibility of the layout.  We should have more pictures of this process soon.


The world still doesn’t have flying cars or human teleporters, so it’s sometimes hard to remember that we are, in fact, living in the future.  Here’s a reminder:

Innovega is teaming up with DARPA to develop augmented-reality contact lenses for military use, according to this article by Charles Q. Choi at Scientific American.  Innovega plans to release the lenses commercially as early as 2014.

“The new system consists of advanced contact lenses working in conjunction with lightweight eyewear. Normally, the human eye is limited in its ability to focus on objects placed very near it. The contact lenses contain optics that focus images displayed on the eyewear onto the light-sensing retina in the back of the eye, allowing the wearer to see them properly.” [Link in original]

The article quotes Innovega CEO Steve Willey, who claims the resulting display size would be “equivalent to a 240-inch television, viewed at a distance of 10 feet.”
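For scale, here is a quick back-of-the-envelope check of our own (not a calculation from the article): a 240-inch diagonal is 20 feet, so a screen that size viewed from 10 feet away would fill a diagonal visual angle of roughly 90 degrees:

```latex
\theta = 2\arctan\!\left(\frac{d/2}{D}\right)
       = 2\arctan\!\left(\frac{120\ \text{in}}{120\ \text{in}}\right)
       = 2\arctan(1) = 90^{\circ}
```

where d is the screen diagonal (240 inches) and D is the viewing distance (120 inches). In other words, the claimed virtual display would span most of the wearer’s field of view.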

Innovega’s own overview is available here.  The system does require glasses, onto which the actual image is projected.  The contact lens is there to focus the image projected on the glasses so the user’s eye can comfortably focus on the projected image and the outside world simultaneously.  The technology also lends itself to 3D, since each eye receives its own image.

Are we ready to change the way we see the world?  Is this something you see being valuable, either for general use or for specific applications?  Do you see yourself wearing a system like this in the near future?

How do we analyze and study something familiar and taken for granted?  How do we take account of the myriad modes of communication and media that are part of practically everything we do, including learning?  One of the biggest challenges we face in studying learning (especially in a museum) is documenting meaningful aspects of what people say and do while also taking into account the multiple, nested contexts that help make sense of what we have documented.  As a growing number of researchers and theorists worldwide have begun to document, understanding how these multiple modes of communication and representation work to DO everyday (and not so everyday) activities requires a multimodal approach, one that often treats any given social interaction as a nexus (a meeting point) of multiple symbol systems and contexts, some of which are foregrounded (made more active and salient) at any given moment by participants or by researchers.

This requires researchers to have ways of capturing and making sense of how people use language, gesture, body position, posture and objects as part of communicating with one another – and for learning researchers it means understanding how all of these ways of communicating contribute to or get in the way of thinking and learning. One of the most compelling ways of approaching these problems is through what has come to be called a multimodal discourse analysis (MMDA).

MMDA gives us tools and techniques for looking at human interactions that take into account how these multiple modes of communication are employed and deployed in everyday activities.  It also helps us tackle the question of how context drives the meaning of talk and actions, and how talk and actions can in turn invoke and change contexts.  It does this by acknowledging that the meanings of what people say and do are not prima facie evident, but require the researcher to identify and understand the salient contexts within which a particular gesture, phrase, or facial expression makes sense.  We are all fairly fluent at deploying and decoding these cues of communication, and researchers often get quite good at reading them from the outside.  But how does one teach an exhibit to read them accurately?  Which ones need to be recognized and recorded in the database that drives an exhibit or feeds into a researcher’s queries?

Over the next several months, we’ll be working out answers to these questions and others that will undoubtedly arise as we get going on data collection and analysis.  We are fortunate to have some outstanding help in this regard.  Dr. Sigrid Norris, Director of the Multimodal Research Centre at the Auckland University of Technology and Editor of the journal Multimodal Communication, is serving as an advisor for the project.  We’re also planning to attend the 6th International Conference on Multimodality this August in London to share what we are up to and learn from leaders in MMDA from around the world.

Oregon State University’s beloved research vessel, the R/V Wecoma, will be retiring at the end of March.  Her replacement, the R/V Oceanus, is already on her way to Newport from Woods Hole.  As of this posting, the Oceanus is south of Jamaica, headed for the Panama Canal.  You can follow her progress via webcam and location map here.  The image refreshes every 10 minutes.

While you’re on the webcam site, you might want to check out the O.H. Hinsdale Wave Research Laboratory webcam.  This webcam, refreshed every minute, provides a glimpse into the kind of research we’ll be sharing with the public through our Visitor Center wave tank exhibit.

Some of OSU’s other interesting webcams include the Java II coffee kiosk cam (pixelated for privacy), the H.J. Andrews Experimental Forest webcam, the Marys Peak Observatory webcam and, of course, the Hatfield Marine Science Center outdoor webcam.

Ever since I made the switch to Linux, I’ve been rabidly enthusiastic about all things open-source.  I know how I feel if given the choice between Firefox and [insert name of other browser here], but this concept would seem to require much deeper consideration.

“The system is hidebound, expensive and elitist, they say. Peer review can take months, journal subscriptions can be prohibitively costly, and a handful of gatekeepers limit the flow of information. It is an ideal system for sharing knowledge, said the quantum physicist Michael Nielsen, only ‘if you’re stuck with 17th-century technology.'”

What do you think the future holds for “open science”?  If an active “open science” community makes a thorough effort to ensure methodologically sound and reproducible research, how might the results differ from those produced under existing publication standards?

In the coming months, we’ll start discussing learning theory in the blog a lot more than we have. With more projects getting off the ground, we’ll need to talk about their conceptual models and theoretical underpinnings.

Some of the terminology we’ll be using may be unfamiliar to many readers. To help with this issue, we’re working on a “Jargon Board” to define potentially confusing terms as they come up. Suggestions are always encouraged.