If you think you get low response rates for research participants at science centers, try recruiting first- and second-year non-science-major undergrads in the summer. Since posting my first flyers in May, I have gotten 42 people to even visit the eligibility survey (either by Quick Response (QR) code or by tinyurl), and a miserable 2 have completed my interview. I only need 18 participants total!

Since we’re a research outfit, here’s the breakdown of the numbers:

Action                        Number   % of those viewing survey
Visit eligibility survey        42       100
Complete eligibility survey     18        43
Schedule interview               5        12
Complete interview               2         5
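
For the curious (or skeptical), here's a quick Python sketch re-deriving those percentages from the raw counts; the step labels are mine, and this is just a back-of-the-envelope check, not part of the study:

```python
# Recompute the recruitment-funnel percentages in the table above.
funnel = [
    ("Visit eligibility survey", 42),
    ("Complete eligibility survey", 18),
    ("Schedule interview", 5),
    ("Complete interview", 2),
]

total = funnel[0][1]  # everyone who at least visited the survey
for step, count in funnel:
    print(f"{step}: {count} ({count / total:.0%} of those viewing the survey)")
```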

Between scheduling and completing, I’ve had 2 no-shows and 1 who was actually an engineering major and didn’t read the survey correctly. I figure that most of the people who visit the survey but don’t complete it discover they aren’t eligible (and didn’t read the criteria on the flyer), which is OK.

What is baffling and problematic is how many people complete the survey but then never respond to schedule an interview: the dropoff from 18 to 5. I can only figure that they aren’t expecting, don’t find, or don’t connect the Doodle poll I send via email with available time slots. It might go to junk mail, or it may not be clear what the poll is about. There’s a section at the end of the eligibility survey to let folks know a Doodle poll is coming, and I’ve sent it twice to most folks who haven’t responded. I’m not sure what else I can do, short of telephoning the people who gave me phone numbers. I think that’s my next move, honestly.

Then there are the no-shows, which is just plain rude. One did email me later and ask to reschedule; that interview did get done. Honestly, this part of “research” is no fun; it’s just frustrating. However, this is the week before school starts in these parts, so I will probably soon set up a table in the Quad with my computer and recruit and schedule people there. That might not solve the no-show problem, but if I can get 100 people scheduled and half of them no-show, I’ll have a different, much better problem: cancelling on everyone else! I’m also asking friends who are instructors to let their classes know about the project.

On a side note to our regular readers, as it’s been almost a year of blogging here, we’re refining the schedule a bit. Starting in October, you should see posts about the general Visitor Center research activities by any number of us on Mondays. Wednesdays and Fridays will most often be about student projects for theses and such. Enjoy, and as always, let us know what you think!


We’re ready for round 2 of camera placement, having met with Lab advisor Sigrid Norris on Monday. We’ll go back to focusing on the wave- and touch-tank areas and getting full coverage of interactions there. Basically, our first test left us spread too thin to really capture what’s going on, and our programmer said face detection and recognition are not yet robust enough to track visitors through the whole center anyway. Of course, now we’re running out of Ethernet ports in the front half of the Visitor Center for those extra cameras.
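
To illustrate the gap our programmer described, here's a generic Python sketch using OpenCV's stock face detector (not his actual pipeline, and the feed filename is hypothetical): per-frame detection only yields anonymous bounding boxes, so there's no identity to carry from one camera to the next, and anyone facing away isn't detected at all.

```python
# Illustrative only: generic per-frame face *detection* with OpenCV's
# bundled Haar cascade, NOT our programmer's actual pipeline.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture("camera_feed.mp4")  # hypothetical recorded feed
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each hit is just an (x, y, w, h) box: no identity is attached, so
    # there is nothing to match against the same visitor on the next
    # camera, and a visitor facing away from the lens isn't detected at all.
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cap.release()
```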

One thing we had been noticing with the cameras was a lot of footage of “backs and butts” as people walk away from one camera or face a different exhibit. Sigrid’s take is that this is actually valuable data, capturing the multimodal communicative modes of posture and foot and body position. This is especially true for peripheral participants, such as group members who are watching more than driving the activity, or other visitors learning how to use an exhibit by watching those who got there first.

We did figure out the network issue that was causing the video stoppage/skipping. The cameras had all been set up on a single server, on the assumption that the system’s two servers would share the load; in fact, the cameras needed to be registered on both servers for load sharing to work. This requires some one-time administrative configuration on the back end, but the client (what researchers using the system see) still displays all camera feeds regardless of which server is driving which camera at any given time. So now it’s all hunky-dory.
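
For the curious, here's a hypothetical Python sketch of the shape of that fix; our camera system has its own admin interface, and every name below is made up for illustration:

```python
# Hypothetical sketch of the configuration fix, not the real admin tool.
SERVERS = ["server_a", "server_b"]
CAMERAS = ["wave_tank_cam", "touch_tank_cam", "entry_cam", "octopus_cam"]

# Before: every camera registered on server_a only, so server_b sat idle
# while server_a choked under the load (hence the stoppage/skipping).
before = {"server_a": list(CAMERAS), "server_b": []}

# After: every camera is registered on *both* servers, so either one can
# drive any feed and the pair can actually share the load.
after = {server: list(CAMERAS) for server in SERVERS}

# The client just shows the union of registered feeds, so researchers see
# every camera no matter which server happens to be driving it.
client_view = sorted({cam for cams in after.values() for cam in cams})
print(client_view)
```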

The wave tanks are also getting some redesigns after all the work and testing over the summer. The shore-tank wave maker (the “actuator”) won’t be made of aluminum (too soft), and it will get hydraulic braking to slow the handle as it reaches the end points. The wave-energy tank buoys are getting finished; then that tank will be sealed and used to show electricity generation in houses and buildings set on top. We’ll also get new tables for all three tanks, which will lack middle legs and should give us a bit more space to work with in the final footprint. And we’ll have the flooring replaced with wet-lab flooring to prevent slip hazards and encourage drainage.

OSU ran three outreach activities at the 46th annual Smithsonian Folklife Festival, and we took the chance to evaluate the Wave Lab’s Mini-Flume wave tank activity, which is related to, but distinct from, the wave tanks in the HMSC Visitor Center.

Three activities were selected by the Smithsonian Folklife committee to best represent the diversity of research conducted at OSU, as well as the University’s commitment to sustainable solutions and family education: Tech Wizards, Surimi School, and the O.H. Hinsdale Wave Lab’s Mini-Flume activity. Tech Wizards was set up in the Family Activities area of Folklife, and Surimi School and the Mini-Flume activity shared a tent in the Sustainable Solutions area.

Given the anticipated number of visitors to the festival, and my presence as the project research assistant, we decided it would be a great opportunity to see how well people thought the activity worked, what they might learn, and what they liked or didn’t: core questions in an evaluation. The activity was led by Alicia Lyman-Holt, EOT director at the O.H. Hinsdale Wave Lab, and I developed and spearheaded the evaluation. To make the activity and evaluation happen, we also brought four undergraduate volunteers from OSU and two from Howard University in D.C., and both the OSU Alumni Association and the festival supplied volunteers on an as-needed basis. We also wanted to try out data collection using iPads and survey software we’re working with in the FCL Lab.

Due to the sheer number of people we expected, as well as everyone’s divided attention, we decided to go with a straightforward survey. Even so, we ended up collecting only a small fraction of the responses we anticipated, thanks to extreme heat, limited personnel, and the divided attention of visitors: after they spent a lot of time with the activity, they weren’t always interested in sticking around even for a short survey.

I’m currently working on data analysis. Stay tuned for more on the evaluation and the process, and to learn how we did on the other side of the continent.

After clinical interviews and eye-tracking with my expert and novice subjects, I’m hoping to run a small pilot test with about 3 of the subjects in a functional magnetic resonance imaging (fMRI) scanner. I’m headed to OSU’s sister/rival school, the University of Oregon, today to talk with my committee member there who is helping with this third prong of my thesis. We don’t have a research scanner here in Corvallis, as we don’t have much of a neuroscience program, traditionally the department that spearheads such research. The University of Oregon does, though, and today was about getting down to the details of conducting my experiment there. I’ve been working with Dr. Amy Lobben, who studies real-world map-based tasks, a nice fit with the global data visualizations that Shawn has been working on for several years and that I came along to continue.

On the agenda was figuring out what they can tell me about IRB requirements, especially the risks section of the protocol. fMRI is actually comparatively harmless; it’s the same technology used to image other soft tissues, like your shoulder or knee. It’s a more recent, less invasive alternative to Positron Emission Tomography (PET) scans, which require injection of a radioactive tracer. fMRI simply measures blood flow by looking at the magnetic properties of oxygen in the blood, which gives an idea of activity levels in different parts of the brain. However, there are extra privacy issues involved, since we’re looking at people’s brains: we have to include some language about how the scan is non-diagnostic, and that we can’t provide medical advice even if we think something looks unusual (not that I know what really qualifies as unusual-looking, which is the point).

Also of interest (always) is how I’m going to fund this. The scans themselves run about $700/hour, and I’ll provide incentives to my participants of maybe $50 each, plus driving reimbursement of another $50. So even for 3 subjects, we’re talking around $2,500. I’ve been applying for a couple of doctoral fellowships, which so far haven’t panned out, and am still waiting to hear on an NSF Doctoral Dissertation Research Improvement Grant. The other possibilities are economizing on the budget for other parts of the project I proposed in the HMSC Holt Marine Education award, which I did get ($6,000 total), or getting some exploratory collaboration funding from U of O and OSU/Oregon Sea Grant, as this is a novel partnership bringing two new departments together.
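
As a sanity check on that figure, here's a quick back-of-the-envelope calculation in Python, assuming roughly an hour of scanner time per subject (my assumption, not a quote from the scanning facility):

```python
# Rough budget for the fMRI pilot, using the numbers above.
SCAN_RATE = 700   # $ per hour of scanner time
INCENTIVE = 50    # $ per participant
MILEAGE = 50      # $ driving reimbursement per participant
SUBJECTS = 3

total = SUBJECTS * (SCAN_RATE + INCENTIVE + MILEAGE)
print(f"${total}")  # $2400, i.e., about $2,500 once anything runs long
```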

But the big thing that came up was experimental design. After much discussion with Dr. Lobben and one of her collaborators, we decided there isn’t really enough time to pull off a truly interesting study if I’m going to graduate in June. Partly it’s an issue of data: designing a good task around my images would require more extensive behavioral testing of my subjects just to create the stimuli, data I don’t have now. And it turns out that the question we figured was narrow enough to ask, namely whether these users use different parts of their brains as a result of their training, would in fact be overwhelming to try to analyze in the time I have.

So, that means probably coming up with a different angle for the eyetracking to flesh out my thesis a bit more. For one, I will run the eyetracking on more of both populations, students and professors, rather than just a subpopulation of students based on performance, or a subpopulation of students vs. professors. For another, we may actually try some eyetracking “in the wild” with these images on the Magic Planet on the exhibit floor.

In the meantime, I’m back from a long conference trip and finishing up my interviews with professors and rounding up students for the same.

Thursday, I had scheduled 3 faculty interviews, including two back-to-back. I do not recommend this approach, not least because the first of the back-to-back pair ran longer than most, and I barely had time to write analytical notes to collect my thoughts before I had to make sure everything was ready for the next subject. I also didn’t take time to stretch my legs and body after concentrating so intensely on what to ask next. If I were really pressed, I probably could have begged a moment for the bathroom once the subject arrived and was reading over the consent form, but all in all, it probably took more out of me than was necessary or maybe even wise.

That said, I did make it through, probably because I’d spent so much time preparing before the first one that I didn’t goof up. I have a checklist of things to do before, during, and after the interview, with all of my questions and even example probe questions for follow-up printed out as well. Instructions for a task I give the subjects are explained on a third sheet, and background questions go on a final sheet. I didn’t realize how much I needed most of these things until during or after my pilot interviews, which is another endorsement for trying out your interview strategy beforehand.

Some of my recommendations:

- Make sure the blinds in the room are working. I discovered one really old set that was stuck halfway open, so I called maintenance. They haven’t been fixed yet, but at least they’re closed, so I don’t get glare on my screen for the images I’m showing.

- Close the window and door, and post a sign saying you’re doing an experiment and what time you’ll be done. This doesn’t eliminate all the noise when a large biology class across the hall has its doors open, but it helps a lot.

- Turn off your cell phone, and remind subjects to do the same. Even though it’s on my list, I still forgot once yesterday and got distracted.

- Check the temperature of the room. Even if you don’t have to worry about sun glare, the blinds can help keep the room cool under intense afternoon sun. Of course, you then have to balance this against the room getting stuffy from being so closed up, another reason it would be nice to have some time between interviews to air things out.

The pace of research often strikes me as wonky. This, I suppose, is true of a lot of fields: some days you make a lot of progress, and some days very little. A series of very small steps eventually (you hope) leads to a conclusion worthy of sharing with your peers and advancing the field. That means a lot of days working in the trees without being able to see the forest.

Conferences, with their presentation application deadlines, have a funny way of driving research. I applied for the International Conference on Science Communication back in March and outlined all this data I figured I’d have for my thesis by the time the conference rolls around in the first week of September. Amazingly, I’m on track to have a fairly good amount of data, despite delays due to subject recruitment and IRB approval that I’ve talked about before.

However, now I have another twist in the process. Usually, you can work on a conference presentation almost up until the very moment you give it, especially if you get to run the slides from your own laptop. This conference, though, requires my final presentation almost 7 weeks before the actual presentation date. I can only assume this is because the conference, to be held in Nancy, France, will run concurrently in both French and English, and the organizers need this time to translate my slides into French (of which I speak not a word).

In any case, this throws a major wrench into my planned schedule! I am doing fine with the pace, and have about half of my needed faculty interviews arranged (with 25% actually completed!). But this week’s deadline presents a strange dilemma: how do I present something interesting, especially the visual data from the eyetracking experiments, when, as far as I can tell, I won’t be able to include the actual results in my slides? I figure I will have some results from my actual subjects by the time of the conference, but I won’t know which subjects to choose for that part until all of the interviews are completed. So my solution will be to run a couple of pilot subjects on just the eyetracking part, without the interview. I’ve recruited one of the folks who works closely with us to serve as more of an “expert” user, and a member of the science and math teacher licensure master’s program to serve as a “novice.” I’m really excited by what the interviews have revealed so far, and I’m hopeful the eyetracking pilots will go as well. Fingers crossed that this will be interesting to the conference attendees, too, with whatever verbal updates I can provide to accompany my slides in September.