If you’re a fan of “Project Runway,” you’re no doubt familiar with Tim Gunn’s signature phrase, “make it work.” He deploys it at about the point in each week’s process when the designers have chosen their fabrics and made at least a first attempt at turning their designs into reality. It’s the moment when the designers have to either forge ahead or take their last chance to start over and re-conceptualize.

This week, it feels like that’s where we are with the FCL Lab. We’re about a year and a half into our five years of funding, and about a year behind on technology development. In other words, we’ve got the ideas and the materials, but we haven’t gotten as far as we’d like in actually putting them together.

For us, it’s a bigger problem, too; the development (in this case, the video booth as well as the exhibit itself) is holding up the research. As Shawn put it to me, we’re spending too much time and effort trying to design the perfect task instead of “making it work” with what we have. That is, we’re going to re-conceptualize and do the research we can do with what we have in place, while still going forward with the technology development, of course.

So, for the video booth, that means we’re not going to wait until we can analyze what people reflect on during the experience; we’ll use what we have, namely a bunch of materials, and analyze the interactions that *are* taking place. Nor are we going to wait to perfect the tsunami task before encouraging what we want to see in the video booth. Instead, we’re going to invite several folks with different research lenses to look at the video we get at the tank itself and tell us what types of learning they’re seeing. From there, we can refine what data we want to collect.

It’s an important lesson in grant proposal writing, too: Once you’ve been approved, you don’t have to stick word-for-word to your plan. It can be modified, in ways big and small. In fact, it’s probably better that way.

A reader recently asked about our post from nearly a year ago suggesting we’d start a “jargon board” to define terms we discuss here on the blog. Where is it, the reader wanted to know? Well, like many big ideas, it got dropped in favor of the everyday fires in front of our faces that needed putting out. But astute readers hold us accountable, and for that, we thank you.

So, let’s start that board as a series of posts with the Category: Jargon. With that, let me start with accountability. Often, we hear about “being accountable to stakeholders.” Setting aside stakeholders for the moment, what does it mean to “be held accountable”? It can come in various forms, but most often it seems to mean providing proof of some sort that you did what you said you would do. A few weeks ago, for example, a reader asked for the location of the board we said we would start, and it turned out we couldn’t provide it (until now). Other times, it may mean paying a bill (think of the looming U.S. debt ceiling crisis, in which we are being held accountable for paying our bills), or simply providing something (a “deliverable”) on schedule, as when I have to submit my defended and corrected thesis by a particular date in order to graduate this spring, or when you have to turn in a paper to a professor by a certain time in order to get full credit.

In the research world, we are often asked to provide yearly progress reports to our funders. Those people or groups to whom we are beholden are one form of stakeholder. They could be the ones holding the purse strings, or the ones to whom we’ve committed to deliver an exhibit or evaluation report as a contractor, making our client the stakeholder. This blog, actually, is the outreach we told the National Science Foundation we’d do for other stakeholders (students, and outreach and research professionals), and it also serves as proof of that outreach. In this case, those stakeholders don’t have any financial interest, but they do want to know what we find out and how we find it out, so we are held accountable via this blog on both counts.

All too often, accountability is seen only in terms of the consequences of failing to provide proof.

But I feel like that’s really just scratching the surface of who we’re accountable to, and it gets a lot murkier just how we prove ourselves to those other stakeholders. In fact, even identifying stakeholders thoroughly and completely is a form of proof, one that stakeholders often don’t hold us to unless we make a grievous error. As a research assistant, I have obligations to complete the tasks I’m assigned, making me accountable to the project, which is in turn accountable to the funder, which is in turn accountable to the taxpayers, of which I am one. As part of OSU, we have obligations to perform professionally, and as part of the HMSC Visitor Center, we have obligations to our audience. The network becomes entangled very quickly; maybe it’s more like a cross between a Venn diagram and Russian nesting dolls? In any case, it’s pretty hard to get a handle on. How do you account for your stakeholders in order to hold yourself, or be held, accountable? And what other jargon would you like to see discussed here?

I seem to have gone from walking to speed racing when it comes to projects. Not only do I have the Folklife paper I’m co-authoring for ASEE, but now I’m working on three more projects. Just last week I was tasked with doing new analysis on already-collected data for a paper draft that’s due at the end of the month. So I’ve been slogging through file after file of the data, trying to make sense of it all so that I can get the analysis done by the end of the week. This is the first time I’ve been asked to analyze data that I wasn’t directly connected with collecting. I’ve always been very familiar with the data I was working with, as well as with the project it’s connected to. I have neither of those safety nets on this project, and it is really testing my abilities, which is both exciting and terrifying. There is no backup plan if I am unable to get this done, so the pressure is really on. Personally, I’m not a fan of pressure; I like to have things well laid out in advance, with mini-milestones to keep me on track and keep the task from feeling overwhelming.

I just hope I’m able to rise to the challenge without completely freaking out.

And now it comes to this: thesis data analysis. Mostly, I am doing qualitative analysis of the interviews and quantitative analysis of the eye-tracking. However, I will also quantify some of the interview coding and “qualify” the eye-tracking data, mainly when I analyze the paths and orders in which people view the images.

So now the questions become, what exactly am I looking for, and how do I find evidence of it? I have some hypotheses, but they are pretty general at this point. I know that I’m looking for differences between the experts and the non-experts, and among the levels of scaffolding for the non-experts in particular. For the interviews, that means I expect experts will 1) have more correct answers than the non-experts, 2) have different answers from the non-experts about how they know the answers they give, 3) be able to answer all my questions about the images, and 4) have basically similar meaning-making across all levels of scaffolding. This means I have a general idea of where to start coding, but I imagine my code book will change significantly as I go.
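As a rough illustration of how hypothesis 1 might eventually be checked once the coding settles down, something like a simple cross-tabulation and chi-square test could compare correct answers between the two groups. This is only a sketch; the file name and the “group”/“correct” columns are placeholders, not our actual code book.

```python
# Hypothetical sketch: compare correct vs. incorrect coded answers
# between experts and non-experts with a chi-square test.
# File and column names are placeholders, not the real coding scheme.
import pandas as pd
from scipy.stats import chi2_contingency

coded = pd.read_csv("coded_interviews.csv")   # one row per coded answer

table = pd.crosstab(coded["group"], coded["correct"])
chi2, p, dof, expected = chi2_contingency(table)

print(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```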

With the eye-tracking data, I’ll also be building the model as I go, especially as this analysis is new to our lab. With the help of a former graduate student in the Statistics department, I’ll be starting with the most general differences, again asking whether the number of fixations (as defined by a minimum dwell time within a maximum-diameter area) differs significantly: 1) between experts and non-experts overall, with all topics and all images included, 2) between the supposedly maximally different unscaffolded vs. fully-scaffolded images, but with both populations included, and 3) between experts looking at unscaffolded images and non-experts looking at fully-scaffolded images. At this point, I think there should be significant differences in cases 1 and 2, but I hope that, if case 3 is significant, the size of the difference will at least be smaller, indicating that the non-experts are indeed moving closer to the patterns of experts when given scaffolding. However, this may not reveal itself in the eye-tracking: the populations could make similar meaning, as reflected in the interviews, without having the same patterns of eye movements; that is, it’s possible that the non-experts might be less efficient than experts but still eventually arrive at a better answer with scaffolding than without.
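For a sense of what the first, most general comparison might look like in practice, here is a minimal sketch that pools all images, averages fixation counts per participant, and compares experts to non-experts with a nonparametric test. The file layout and column names are assumptions, and the real analysis will likely be more elaborate (for example, a mixed model that accounts for repeated images per person).

```python
# Sketch of comparison 1: experts vs. non-experts, all topics and images pooled.
# "fixation_counts.csv" and its columns are assumed for illustration only.
import pandas as pd
from scipy.stats import mannwhitneyu

fix = pd.read_csv("fixation_counts.csv")  # columns: participant, group, image, n_fixations

# Average over images so each participant contributes a single value
per_person = fix.groupby(["participant", "group"], as_index=False)["n_fixations"].mean()

experts = per_person.loc[per_person["group"] == "expert", "n_fixations"]
novices = per_person.loc[per_person["group"] == "non-expert", "n_fixations"]

u, p = mannwhitneyu(experts, novices, alternative="two-sided")
print(f"Mann-Whitney U = {u:.1f}, p = {p:.3f}")
```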

As for the parameters of the eye-tracking, the standard minimum dwell time for a fixation in our software is 80 ms, and the maximum diameter is 100 pixels, but again, we have no lab standard for this, so we’ll play around with these values and see whether the results hold up at smaller dwell times, or at least smaller diameters, or whether differences only appear there. My images are only 800×600 pixels, so a maximum diameter of 1/6th to 1/8th of the image seems rather large. Some of this will be mitigated by the use of areas of interest drawn on the image, where the distance between areas could dictate a smaller maximum diameter, but at this point, all of this remains to be seen, and to some extent the analysis will be very exploratory.
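To make the parameter question concrete, below is a minimal dispersion-threshold (I-DT style) fixation detector with the dwell time and diameter exposed as arguments, plus a small sweep over both. This is not our vendor software’s algorithm; the sample format, the 250 Hz rate, the fake data, and the choice to measure dispersion as the larger of the x and y ranges are all assumptions for illustration.

```python
# Minimal I-DT style fixation detection, for exploring how sensitive
# fixation counts are to the minimum dwell time and maximum diameter.
# Not the vendor algorithm; the data below are fake, for illustration only.
import numpy as np

def detect_fixations(t_ms, x, y, min_dwell_ms=80, max_diameter_px=100):
    """Return a list of (start_ms, end_ms) fixations."""
    fixations = []
    i, n = 0, len(t_ms)
    while i < n:
        # Grow a window that spans at least the minimum dwell time
        j = i
        while j < n and t_ms[j] - t_ms[i] < min_dwell_ms:
            j += 1
        if j >= n:
            break
        # Dispersion here = larger of the x range and y range (an assumption)
        dispersion = max(np.ptp(x[i:j + 1]), np.ptp(y[i:j + 1]))
        if dispersion <= max_diameter_px:
            # Extend the window while the gaze stays within the diameter
            while j + 1 < n and max(np.ptp(x[i:j + 2]), np.ptp(y[i:j + 2])) <= max_diameter_px:
                j += 1
            fixations.append((t_ms[i], t_ms[j]))
            i = j + 1
        else:
            i += 1
    return fixations

# Sweep both parameters on fake 250 Hz data to see how the counts shift
rng = np.random.default_rng(0)
t = np.arange(0, 5000, 4.0)
x = rng.normal(400, 30, t.size)
y = rng.normal(300, 30, t.size)
for dwell in (40, 80, 120):
    for diam in (50, 75, 100):
        n_fix = len(detect_fixations(t, x, y, dwell, diam))
        print(f"dwell >= {dwell} ms, diameter <= {diam} px: {n_fix} fixations")
```

Running a sweep like this on the real recordings, rather than fake data, would show whether any expert/non-expert differences hold up as the thresholds shrink.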

That’s the plan at the moment; what are your thoughts, questions, and/or suggestions?

Last week, Dr. Rowe and I visited the Portland Art Museum to assist with a recruitment push for participants in their Conversations About Art evaluation, and I noticed that the education staff involved all had very different styles of recruiting visitors to participate in the project. Styles ranged from the apologetic (e.g., “do you mind if I interrupt you to help us?”) to the incentive-focused (e.g., “get free tickets!”) to the experiential (e.g., “participating will be fun and informative!”).

This got me thinking a lot this week about the significance of people skills and a researcher’s recruitment style in educational studies. How does the style in which you get participants involved influence a) how many participants you actually recruit, and b) the quality of the participation (i.e., do they just go through the motions to get the freebie incentive)? Thinking back to prior studies by FCL alumni here at OSU, I realized that nearly all the researchers I knew had a different approach to recruitment, be it in person, on the phone, or via email, and that it is in fact a learned skill that we don’t often talk much about.

I’ve been grateful for my success at recruiting both docents and visitors for my research on docent-visitor interactions, which is mostly the result of taking the “help a graduate student complete their research” approach, one that I borrowed from prior Marine Resource Management colleagues of mine, Abby Nickels and Alicia Christensen, during their master’s research on marine education activities. Such an approach won’t be much help once I finally get out of grad school, so the question to consider is: what factors make for successful participant recruitment? It seems the common denominator is people skills, and by people skills I mean the ability to engage a potential recruit on a level that removes the skepticism around being commandeered off the street. You have to be not only trustworthy, but also approachable. I’ve definitely noticed with my own work that on off days, when I’m tired and have trouble maintaining a smiley face for long periods at the HMSC entrance, recruitment seems harder. All those younger years spent in customer service jobs, learning how to deal with the public, now seem so much more worthwhile!

So, fellow researchers and evaluators, my question for you is: what are your strategies for recruiting participants? Do you agree that people skills are an important underlying factor? Do you over- or underestimate your own personal influence on participant recruitment?