How do we analyze and study something familiar and taken for granted? How do we take account of the myriad modes of communication and media that are part of practically everything we do, including learning? One of the biggest challenges we face in studying learning (especially in a museum) is documenting meaningful aspects of what people say and do while also taking into account the multiple, nested contexts that help make sense of what we have documented. As a growing number of researchers and theorists worldwide have begun to document, understanding how these multiple modes of communication and representation work to DO everyday (and not so everyday) activities requires a multimodal approach, one that often sees any given social interaction as a nexus (a meeting point) of multiple symbol systems and contexts, some of which are made more active and salient (foregrounded) at any given moment by participants or by researchers.

This requires researchers to have ways of capturing and making sense of how people use language, gesture, body position, posture, and objects as part of communicating with one another – and for learning researchers, it means understanding how all of these ways of communicating contribute to, or get in the way of, thinking and learning. One of the most compelling approaches to these problems is what has come to be called multimodal discourse analysis (MMDA).

MMDA gives us tools and techniques for looking at human interactions that take into account how these multiple modes of communication are deployed in everyday activities. It also helps us tackle the question of how context drives the meaning of talk and action, and how talk and action can in turn invoke and change contexts. It does this by acknowledging that the meanings of what people say and do are not prima facie evident; the researcher has to identify and understand the salient contexts within which a particular gesture, phrase, or facial expression makes sense. We are all fairly fluent at deploying and decoding these communicative cues, and researchers often get quite good at reading them from the outside. But how does one teach an exhibit to read them accurately? Which ones need to be recognized and recorded in the database that drives an exhibit or feeds into a researcher's queries?
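To make that database question a little more concrete, here is a minimal sketch in Python of what one coded moment of interaction might look like as a record. Everything here is invented for illustration – the Modality categories, the CodedEvent fields, and the sample events are our own hypothetical choices, not a schema the project has settled on.

```python
from dataclasses import dataclass, field
from enum import Enum


class Modality(Enum):
    """Communicative modes an analyst might code for (illustrative, not exhaustive)."""
    SPEECH = "speech"
    GESTURE = "gesture"
    POSTURE = "posture"
    GAZE = "gaze"
    OBJECT_HANDLING = "object_handling"


@dataclass
class CodedEvent:
    """One coded moment of interaction: who did what, in which mode, and
    which contexts the analyst judged salient for interpreting it."""
    timestamp_s: float                  # seconds into the video record
    participant: str                    # anonymized participant ID
    modality: Modality                  # the mode of communication coded
    description: str                    # analyst's gloss of the act
    salient_contexts: list[str] = field(default_factory=list)


# A visitor points at an exhibit component while speaking; the analyst records
# both acts as separate events that share a moment and overlapping contexts.
events = [
    CodedEvent(132.4, "P1", Modality.GESTURE, "points at gear mechanism",
               salient_contexts=["exhibit: gear wall", "group: family"]),
    CodedEvent(132.6, "P1", Modality.SPEECH, "'look, this one turns the big one'",
               salient_contexts=["exhibit: gear wall", "explaining to sibling"]),
]

# The kind of query a researcher (or an exhibit's software) might run against
# such a database: every gesture event coded at the gear wall.
gestures = [e for e in events
            if e.modality is Modality.GESTURE
            and "exhibit: gear wall" in e.salient_contexts]
print(gestures)
```

Even a toy sketch like this makes the analytic stakes visible: deciding what counts as one "event," and which contexts get attached to it, is exactly the interpretive work MMDA asks researchers to do explicitly.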

Over the next several months, we’ll be working out answers to these questions and others that will undoubtedly arise as we get going on data collection and analysis.  We are fortunate to have some outstanding help in this regard.  Dr. Sigrid Norris, Director of the Multimodal Research Centre at the Auckland University of Technology and Editor of the journal Multimodal Communication, is serving as an advisor for the project.  We’re also planning to attend the 6th International Conference on Multimodality this August in London to share what we are up to and learn from leaders in MMDA from around the world.
