I have been absent from blog posting as of late due to the whirlwind of grad school, but that also means there is quite a bit to share related to work in the lab and research!  My last post described the experience at ASTC in North Carolina – a great opportunity to see work at other science and technology centers, and to meet professionals in the field who are doing incredible things at these locations.  Since then I have been ramping up my personal research while also balancing coursework.

I am really excited to be enrolled in Oregon State University’s Free Choice Learning (FCL) series this year.  Everything I was learning “in the field” is now gaining context through courses in personal, sociocultural, and physical dimensions of learning.  I have the opportunity to practice evaluation methods through assignments and read papers related to my research on family group interactions in the museum.  I am thankful that I get to take these classes from Dr. John Falk and Dr. Lynn Dierking, two researchers who have studied FCL for many years!

In the visitor center, we are focused on getting our facial recognition cameras consistently working and capturing data.  We have been collecting images, but getting 11 cameras to stream large amounts of data at the same time is challenging, as both the hardware and software have to sync.  This has been a great learning opportunity in trial and error, but also in learning the “language” of a field I am not familiar with.  As I troubleshoot with engineers and software developers, I have been learning vocabulary related not only to the camera system, but also to configuring the cameras and the software.  Beyond the task of setting this up, it is an experience I will reflect on in future projects that require me to learn the language of another industry, embrace trial and error, and exercise patience in the process.

In addition to Cyberlab duties, I am busy coding video of families using the multi-touch table, collected in August 2014.  Over the past twenty years, research on family learning has shown us how exhibits are often used (much of this research was done by Falk, Dierking, Borun, and Ellenbogen, among others).  I am curious about the quality of interactions occurring at the touch table between adults and children.  I developed a rubric based on three dimensions of behavior – responsive engagement, learning strategies and opportunities, and directive engagement – each observed at a low, moderate, or high level.  These categories are modified from the types of behavior outlined by Piscitelli and Weier (2002) in relation to adult-child interactions around art.  In their work, they found that a distribution of behaviors across these categories supports the value of the interactions (Piscitelli & Weier, 2002).  Each category looks at how the adult(s) and child(ren) interact with each other while manipulating the touch table.  I also modeled the rubric on those used in classrooms to assess teacher and student interactions around tasks.

An example of a high level of responsive engagement would be that the adults and children are in close proximity to each other while using the exhibit, their hands are on the touch surface for a majority of the time, the adults are using encouraging words and acknowledging the child’s statements or questions, and there are similar levels of emotional affect expressed between them.  The learning strategies dimension focuses more on verbalization of the exhibit’s content and the integration of information, such as connecting the content to prior knowledge or to experiences outside the exhibit.  Finally, directive engagement looks at whether the adult is providing guidance or facilitating use of the exhibit by directing a task to be performed or showing a child how to accomplish the activity.  From these data, I hope to understand in more depth how the table is used and the ways adults and children interact while using it.  This may give us some idea of how to support software and content design for these forms of digital interactives, which are becoming more popular in the museum environment.
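To make the structure concrete, here is a rough sketch (in Python) of how a single group’s rubric scores could be recorded for later analysis. The dimension and level names are the ones described above; the field names and the example group are hypothetical placeholders, not my actual coding instrument.

```python
# Hypothetical sketch of the rubric as a data structure; dimension and level
# names follow the rubric described above, everything else is made up.
from dataclasses import dataclass

DIMENSIONS = ("responsive_engagement", "learning_strategies", "directive_engagement")
LEVELS = ("low", "moderate", "high")

@dataclass
class RubricScore:
    group_id: str                 # anonymized family group identifier
    responsive_engagement: str    # each dimension scored "low", "moderate", or "high"
    learning_strategies: str
    directive_engagement: str

    def __post_init__(self):
        # Guard against typos when entering scores from the video coding
        for dim in DIMENSIONS:
            level = getattr(self, dim)
            if level not in LEVELS:
                raise ValueError(f"{dim} must be one of {LEVELS}, got {level!r}")

# Example: a group with close proximity, encouraging talk, and shared affect
# might score high on responsive engagement but moderate on the other two.
score = RubricScore("family_07", "high", "moderate", "moderate")
print(score)
```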

My goal is to have videos coded by the end of the month, so back to work I go!

Piscitelli, B., & Weier, K. (2002). Learning With, Through, and About Art: The Role of Social Interactions. In S. G. Paris (Ed.), Perspectives on Object-Centered Learning in Museums (pp. 121–151). Mahwah, NJ: Lawrence Erlbaum Associates.


The challenges of integrating the natural and social sciences are not news to us. Since King, Keohane, and Verba’s (KKV’s) book “Designing Social Inquiry”, the field of qualitative methodology has received considerable attention and development. Their work generated great discussion about qualitative studies, as well as criticism and sometimes the misguided idea that qualitative research benefits from quantitative approaches but not the other way around. Since then, the literature has debated the contrasts between qualitative and quantitative observations, regression approaches versus theoretical work, and new approaches to mixed-methods design. Nevertheless, there are still many research frontiers for qualitative researchers to cross, and significant resistance from conservative views of science that question the validity of qualitative results.

Last week, while participating in the LOICZ (Land-Ocean Interactions in the Coastal Zone) symposium in Rio de Janeiro, Brazil, I was very encouraged by the apparent move towards an integrated approach between the natural and social sciences. There were many important scientists from all over the world and from many different disciplines discussing Earth systems and contributing steps towards sustainability of the world’s coastal zones. Many of the students’ presentations, including mine, had some social research component. I had many positive conversations about the Cyberlab work in progress and how it sits at the edge of building capacity for scientists/researchers, educators, exhibit designers, civil society, etc.

However, even at this meeting, over dinner conversation, I stumbled into the conflicting views that are part of the quantitative vs. qualitative debate – the understanding of the scientific process as “only hypothesis driven”, where numbers and numbers alone offer the absolute “truth”. It is still a challenge for me not to become extremely frustrated when I have to articulate the importance of social science and swim against a current of uneducated opinions about the nature of what we do and disregard for what it ultimately accomplishes. I think it is more than proven in today’s world that understanding the biogeophysics of Earth’s systems is essential, but that alone won’t solve the problems underlying the interaction of the natural and social worlds.  We cannot move towards a “sustainable future” without the work of social scientists, and I wish there were more of a consensus about its place and importance within the natural science community.

So, in the spirit of “hard science”…

If I can’t have a research question, here are the null and alternative hypotheses I can investigate:

H0: “Moving towards a sustainable future is not possible without the integration of the natural and social sciences.”

H1: “Moving towards a sustainable future is possible without the integration of the natural and social sciences.”

Although empirical research can NEVER prove beyond the shadow of a doubt that a claim is true (we settle for 95% or 99% confidence), I think you would agree that, if these hypotheses could be tested, we would fail to reject the null.

With all that being said, I emphasize here today the work Cyberlab is doing and what it will accomplish in the future, sitting at the frontiers of marine science and science education. Exhibits such as the wave laboratory, the climate change exhibit in the works, the research already completed in the lab, the many projects and partnerships, etc., are prime examples of that. Cyberlab is contributing to a collaborative effort to understand and disseminate marine and coastal issues, and to build capacity to create effective steps towards sustainable land-ocean interactions.

I am very happy to be a part of it!


The semester is ending, and as I will be graduating at the end of next week, it’s finally sinking in that my time in grad school is coming to a close. The final copy of my dissertation was handed in at the end of last month, and ever since I have been considering what types of publications I would like to work on while transitioning back into the real world.

Deciding on publications is really trickier than it seems. I’m trying to find opportunities that reflect my approach as both a researcher and an educator. Of course, my choices will be job dependent (a matter I am still diligently working on) due to time and project constraints; however, I have been thinking about writing articles that both highlight the theory I generated around docents in science museum settings and communicate the practical implications for the field. Michelle and I are considering an article together that links our two pieces of work (mine on existing docent practice, hers on training methods), and Susan and I are considering one on interpretation in museums. Both will be equally interesting to pursue. I’d particularly like to write something that is useful to informal science education settings, in terms of docent preparation and interpretive strategies in museums, as I am an advocate for promoting the visibility of free choice learning research to those who develop programming in the field. Just as scientist engagement in education and outreach is an important part of science education, we researchers are also part of a community that should attempt to engage the free choice learning field in educational research. Outreach works both ways.

What’s interesting about this process is trying to work out which journals are most fruitful to pursue. I was encouraged by my committee to attempt to publish in the Journal of Interpretation Research (National Association for Interpretation), but I have also been thinking about Current (National Marine Educators Association), the American Educational Research Journal (American Educational Research Association), and Visitor Studies (Visitor Studies Association), and there are a lot more to consider. It’s a little overwhelming, but also exciting. For me, this is where the rubber meets the road – the avenues where the outcomes of my work can become part of the larger free choice learning community.

In light of the recent posts discussing Positivism vs. Interpretivism, the Grounded Theory approach, and the challenge of thinking about epistemology and ontology, I decided to use this post to continue the debate and share a few things I have been thinking about and doing that I hope will help me make sense of the paradigmatic views and theoretical approaches that may eventually be part of my research.

Research design has been a challenging but nonetheless very meaningful process for me, because I am having the chance to dig deep into who I am and the personal values, beliefs, and goals I carry with me. To start such reflection I referred back to writing exercises, a topic I remember from the first lab meetings I participated in as a member of the group, and one that inspired me to find ways to apply different kinds of exercises to research design. As a result of that, and of the advanced qualitative class I am taking right now, my computer folder entitled “Memos” is growing very quickly as I go through the process of writing my proposal and thinking about my research design.

I am using many forms of memos. I got myself a research “journal” that I am using to register the “brilliant” ideas I come across one way or another during this process – not only ideas for research goals, methods, questions, etc., but also epiphanies about concepts and theories and how I am making sense of them as they apply to my research. I am carrying it with me everywhere I go because, believe me, ideas pop up unexpectedly in very strange situations. The goal is not to lose track of my thought process as it evolves into a conceptual framework for my research. To say it bluntly, I want to be able to say clearly why I chose the approach I chose for my design and how I justify it.

To start this search for clarification about where in the world of qualitative research I sit, which I assumed would inform my methodological choices, I wrote my first memo as a class exercise – a “Researcher Identity Memo”.  It may sound very “elementary” to some of you, but I saw this exercise as opening the doors of my own path towards understanding why I sit where I sit right now, how I came to be here, and where I can potentially go. The memo was a reflective exercise about past experiences in life, upbringing, and the values and beliefs I see connected to the research topic I chose to investigate, and how I would predict them facilitating or challenging my work as a researcher. It turned out to be a six-page document that brought out three personas in me that equally influenced my decisions: the educator, the scientist, and the concerned citizen of this world. The synergies between the values, beliefs, experiences, goals, and interests of each got me to decide on my research topic (family “affordances” for learning at the touch-tank exhibit at HMSC).

This actually made me rethink my research goals to identify the personal, practical, and intellectual interests that combine to answer the “so what?” of my research idea. In fact, “the evolution of my research questions” is another ongoing memo I am working on as my questions emerge, evolve, change, etc. I also have a mini notebook on a key chain attached to my wallet for when those revealing moments happen as I have dialogues with other professionals like yourselves, or when I want to write a quick reference to look at later. I think the practice of writing these memos is helping me untangle bits of theoretical debates that I am slowly making sense of, and is helping me see where I sit.

Now, if you are not much of a fan of writing, if you avoid writing exercises like the plague, Laura suggests using alternative ways of registering these moments. She told me she used her phone to record a voice memo the other day. How you do it is not the key issue, but I think it is important that you find a way that works for you to register the evolution of your thought process. Going through a few conversations with Shawn during our weekly meetings, he articulated the approach he thinks I am sitting on right now for my research. He burst out these big words together that I am still trying to work through, but that emerged smoothly and almost instantly out of his mind. He called it a “Neo-Kantian Post-positivist and Probabilistic Theory of Truth”. I hope he wasn’t tricking me :). Here is the way I see where I stand right now, in my less eloquent philosophical terms:

1. Departing from axiological views, I am interested in explanations and descriptions of real, meaningful events – why and how questions.

2. Therefore, I am moving from “data to theory”, through inductive questioning.

3. As for the nature of reality (ontology), I think I compromise between objectivity and subjectivity – is there a possible inter-objectivity or inter-subjectivity?

4. As for what counts as knowledge (epistemology), I tend to associate with Social-Constructivism.

So, I am using the following schema as a wall decoration in my research room:

Epistemology – Social-Constructivism; Theoretical perspective/approach – Interpretivism; Suited methodology – Grounded Theory.

However, I see myself as open to new topics and ideas. I am adopting a paradigm, but that does not necessarily mean I will completely oppose combining aspects of other paradigms. I read in a piece of literature once that “sometimes we need a little constructivism, and sometimes we need a little realism”. While I am opposed to thinking radically about it, I do think it is important to use existing theories critically, and if you are to be critical you are open to testing (hermeneutics). Here is where I sit in conflict between objectivity and subjectivity, qualitative and quantitative values, and that is why I intend to use mixed methods.

I don’t know if this links perfectly to the definition of the approach Shawn saw me taking, but boy, I am happy to be going through this discovery process right now, and memos are really helping me along the way.

Susan


I have been coding my qualitative interview data in one fell swoop, trying to get everything done for the graduation deadline. It feels almost like a class project that I’ve put off, as usual, longer than I should have. In a conversation with another grad student about timelines, and about how I’ve been sitting on this data (at least a good chunk of it) since, oh, November or so, we speculated about why we don’t tackle it in smaller chunks. One reason for me, I’m sure, is just general fear of failure or whatever drives my general procrastinating and perfectionist tendencies (remember, the best dissertation is a DONE dissertation – we’re not here to save the world with this one project).

However, another reason occurs to me as well: I collected all the data myself, and I wonder if I was too close to it in the process of collecting it. I certainly had to prioritize finishing collecting it, considering the struggles I had getting subjects to participate, delays with the IRB, etc. But I wonder if it’s actually been better to leave it all for a while and come back to it. I guess if I had really done the interview coding before the eye-tracking, I might have shaped the eye-tracking interviews a bit differently, but I think the main adjustments I made based on the interviews were sufficient without coding (i.e., I recognized how much the experts were just seeing that the images were all the same, and I couldn’t come up with difficult enough tasks for them, really). The other reason to have coded the interviews first would have been to separate my interviewees into high- and low-performing groups, if the data proved to be that way, so that I could invite sub-groups for the eye-tracking. But I ended up, again due to recruitment issues, just getting whoever I could from my interview population to come back. And now, I’m not really sure there are any high or low performers among the novices anyway – they each seem to have their strengths and weaknesses at this task.

Other fun with coding: I have a mix of basically closed-ended questions that I am scoring with a rubric for correctness, and then open-ended “how do you know” semi-clinical interview questions. Since I eventually repeated some of these questions for the various versions of the scaffolded images, my subjects started to conflate their answers, and parsing these things apart is truly a pleasure (NOT). And I’m up to some 120 codes, and keeping those all in mind as I go is just nuts. Of course, I have just done the first pass, and since I created codes as I went along, I have to turn around and re-code the earlier transcripts for the codes that didn’t exist yet when I coded them, but I am still stressing about whether I’m finding everything in every transcript, especially the more obscure codes. I have one that I’ve dubbed “Santa” because two of my subjects referred to knowing the poles of Earth are cold because they learned that Santa lives at the North Pole where it’s cold. So I’m now wondering if there were any other evidences of non-science reasoning that I missed. I don’t think this is a huge problem; I am fairly confident my coding is thorough, but I’m also at that stage of crisis where I’m not sure any of this is good enough as I draw closer to my defense!
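As a side note, the re-coding bookkeeping is simple to describe even if tedious to do: any code created partway through the first pass means every transcript coded before that point needs a second look for that code. A toy sketch of that logic, with made-up subject and code names (only the “Santa” code comes from above):

```python
# Hypothetical bookkeeping sketch, not my actual codebook or subject list.

# Order in which transcripts were coded on the first pass
coded_order = ["subject_01", "subject_02", "subject_03", "subject_04"]

# Transcript during whose coding each code was first created
code_created_at = {
    "santa": "subject_03",           # the "Santa" code mentioned above
    "prior_experience": "subject_02",
}

def recode_list(code, created_at=code_created_at, order=coded_order):
    # Transcripts coded before this code existed need a second pass for it
    cutoff = order.index(created_at[code])
    return order[:cutoff]

for code in code_created_at:
    print(code, "->", recode_list(code))
# santa -> ['subject_01', 'subject_02']
# prior_experience -> ['subject_01']
```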

Other fun facts: I also find myself agonizing over what to call codes, when the description is more important. And it’s also a very humbling look at how badly I (feel like I) conducted the interviews. For one thing, I asked all the wrong questions, as it turns out – what I expected people would struggle with, they didn’t really, and I didn’t have good questions ready to probe for what they did struggle with. Sigh. I guess that’s for the next experiment.

The good stuff: I do have a lot of good data about people’s expectations of the images and the topics, especially when there are misunderstandings. This will be important as we design new products for outreach, both the images themselves and the supporting info that must go alongside them. I also sort of thought I knew a lot about this data going into the coding, but the number of new codes with each subject is surprising, and it’s gratifying that maybe I did get some information out of this task after all. Finally, I’m learning that this is an exercise in throwing stuff out, too – I was overly ambitious in my proposal about all the questions I could answer, and I collected a lot more data than I can use at the moment. So, as is typical of the research process, I have to choose what fits the story I need to tell to get the dissertation (or paper, or presentation) done for the moment, and leave the rest aside for now. That’s what all those post-dissertation papers are for, I guess!

What are your adventures with/fears about coding or data analysis? (besides putting it off to the last minute, which I don’t recommend).

Part of my thesis project involves semi-structured phone interviews with COASST citizen science volunteers.  I’m patiently awaiting IRB approval for my project, and in the meantime I’ve completed 4 practice interviews with COASST undergraduate interns.  I ended up using the ZOOM H2 recorder, which has a lead with an earpiece microphone.  It worked great!  If anyone needs to do phone interviews, I recommend this audio recorder.  A friend also told me he used the Olympus digital voice recorder (VN-8100PC) for his interviews, which was sometimes tucked into his shirt pocket around a campfire… and he said he could hear everything perfectly!  Just thought I’d share.

Now that I have 4 transcriptions from my practice interviews, I’m getting more familiar with what the heck I’m supposed to do with my interview data once I actually collect it!  I re-read the book Qualitative Data: An Introduction to Coding and Analysis by Auerbach and Silverstein, and organized the practice transcripts into relevant text, repeating ideas, and themes.  I first did this in a Word document, but it seemed a little clunky.  I learned some people use Excel for this too.  Now I’ve downloaded NVivo and am learning my way around that program.  There’s a little bit of a learning curve for me, but I think I’ll really like it once I get the hang of it.  It’s been fun, and admittedly a little intimidating, to work through the mechanics of coding text for the first time.  Luckily for me, I have some great mentors and am getting great advice.  I’m excited to see what I’m able to make of the interview data, and looking forward to using NVivo for other projects I’m working on too!
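For anyone curious what that organization looks like before (or outside of) NVivo, here is a minimal sketch of the Auerbach and Silverstein hierarchy as a nested structure: relevant text grouped into repeating ideas, and repeating ideas grouped into themes. The theme, idea, and quote text below are invented placeholders, not data from my practice interviews.

```python
# Toy sketch of relevant text -> repeating ideas -> themes; all content invented.
themes = {
    "connection to place": {
        "repeating ideas": {
            "beach walks as routine": [
                "quote from transcript 1 ...",
                "quote from transcript 3 ...",
            ],
            "noticing change over time": [
                "quote from transcript 2 ...",
            ],
        }
    }
}

# Quick tally of how much relevant text sits under each theme
for theme, content in themes.items():
    ideas = content["repeating ideas"]
    n_quotes = sum(len(quotes) for quotes in ideas.values())
    print(f"{theme}: {len(ideas)} repeating ideas, {n_quotes} excerpts")
```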