This past week has confirmed for me that video coding is an arduous task! Right now I'm continuing to code the video data for my dissertation and working on the criteria for analysis that will allow me to reduce the data and finish answering my research questions. I'm basically looking at the different modes through which docents interact with visitors (speech, gesture, etc.) and suggesting patterns in how they interpret science for the public. I'm cross-referencing the themes that emerge from this video analysis with my interview data to come up with some overarching outcomes.
So far the themes seem fairly clear, which is a nice feeling. Plus, there seems to be a lot of overlap between the patterns in docent interpretation strategies and what the literature deems effective interpretation. What's interesting is that this group of docents has little to no formal interpretive training. So perhaps good communicative practice emerges on its own when you have constant contact with your audience. Food for thought for professional development activities with informal educators…
What's also striking about this process is how well I know my data, yet how tough it is to get it down on paper. I can talk until I'm blue in the face about my outcomes, but writing them up into structured chapters feels like translating an ancient text. Ah, the rite of passage that is the final dissertation.
All this video coding has also got me thinking about our lab's development of an automated video analysis process. What parameters do we set so it can process the vast landscape of data our camera system collects, and thereby help reduce the data from the word go? As a researcher, imagining a data set that arrives already partially reduced puts a smile on my face.
So back to coding. I see coded people….