About Katie Stofer

Research Assistant Professor, STEM Education and Outreach, University of Florida. PhD, Oregon State University Free-Choice Learning Lab.

The network did turn out to be the cause of most of our camera skipping. We had all 25+ cameras running on MJPEG, which was driving our network usage through the roof at almost 10 MB/sec per camera on a 100 MB pipe. We did have to have the Convergint tech come out to help figure it out and reconfigure a few things, with some small hints.

First, we switched some of the cameras to H.264 where we were OK with a slightly less crisp picture, like our establishing shots that follow how people move from exhibit to exhibit. This drops the network usage to less than 1 MB/sec per camera, though it does drive the CPU usage up a bit. That's a fair tradeoff, because our computers are dedicated to this video processing.

We also set up user accounts on the slave server as well as the master, which allowed us to spread the cameras across the two machines and distribute usage. We're also working with our IT folks to bridge the servers and spread the load among the four that we have, so that even if we are driving nearly the full network usage, we're doing it on four servers instead of one. Finally, we put the live video on a different drive, which also freed up processing power.
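For anyone curious about the arithmetic, here's a minimal sketch (Python, using the round numbers from this post; real bitrates vary with resolution, frame rate, and scene complexity) of why MJPEG was swamping the pipe and how the codec switch and the server split change the picture:

```python
# Rough bandwidth math with the figures from this post; camera count,
# per-camera rates, and pipe size are the round numbers above, not
# measured values.
CAMERAS = 25
PIPE = 100        # our pipe, MB/sec
MJPEG_RATE = 10   # approx. MB/sec per camera on MJPEG
H264_RATE = 1     # approx. MB/sec per camera on H.264
SERVERS = 4

for codec, rate in (("MJPEG", MJPEG_RATE), ("H.264", H264_RATE)):
    total = CAMERAS * rate
    print(f"{codec}: {total} MB/sec total ({total / PIPE:.0%} of the pipe)")

# Splitting the cameras across four servers divides the per-server
# load, even when the total network usage stays the same.
print(f"Per-server load on MJPEG: {CAMERAS * MJPEG_RATE / SERVERS:.1f} MB/sec")
```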

Just a few tips and tweaks seem to have given us much smoother playback. Now to get more IP addresses from campus to spread the network load even further as we think ahead to more cameras.

If you think you get low response rates for research participants at science centers, try recruiting first- and second-year non-science-major undergrads in the summer. So far, since posting my first flyers in May, I have gotten 42 people to even visit the eligibility survey (either by Quick Response/QR code or by tinyurl), and a miserable 2 have completed my interview. I only need 18 participants total!

Since we’re a research outfit, here’s the breakdown of the numbers:

Action                         Number   % of those viewing survey
Visit eligibility survey           42   100
Complete eligibility survey        18    43
Schedule interview                  5    12
Complete interview                  2     5
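
Here's the same funnel as a quick sketch (Python; the counts are straight from the table), with step-to-step conversion added, since that's where the leak shows up:

```python
# Recruitment funnel; counts come from the table above.
funnel = [
    ("Visit eligibility survey", 42),
    ("Complete eligibility survey", 18),
    ("Schedule interview", 5),
    ("Complete interview", 2),
]

visitors = funnel[0][1]
prev = visitors
for step, n in funnel:
    print(f"{step:<28} {n:>3}   of visitors: {n / visitors:4.0%}"
          f"   of previous step: {n / prev:4.0%}")
    prev = n
```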

Between scheduling and completing, I've had 2 no-shows and 1 person who was actually an engineering major and didn't read the survey correctly. I figure that most of the people who visit the survey but don't complete it realize they aren't eligible (and didn't read the criteria on the flyer), which is OK.

What is baffling and problematic is the low percentage who complete the survey but then don't respond to schedule an interview – the drop-off from 18 to 5. I can only figure that they aren't expecting, don't find, or don't connect the Doodle poll I send via email with available time slots. It might go to junk mail, or it may not be clear what the poll is about. There's a section at the end of the eligibility survey to let folks know a Doodle poll is coming, and I've sent it twice to most folks who haven't responded. I'm not sure what else I can do, short of telephoning people who gave me phone numbers. I think that's my next move, honestly.

Then there are the no-shows, which are just plain rude. One did email me later and ask to reschedule; that interview did get done. Honestly, this part of “research” is no fun; it's just frustrating. However, this is the week before school starts in these parts, so I will probably soon set up a table in the Quad with my computer and recruit and schedule people there. It might not solve the no-show problem, but if I can get 100 people scheduled and half of them no-show, I'll have a different, much better problem – cancelling on everyone else! I'm also asking friends who are instructors to let their classes know about the project.

On a side note to our regular readers, as it’s been almost a year of blogging here, we’re refining the schedule a bit. Starting in October, you should see posts about the general Visitor Center research activities by any number of us on Mondays. Wednesdays and Fridays will most often be about student projects for theses and such. Enjoy, and as always, let us know what you think!


We're ready for round 2 of camera placement, having met with Lab advisor Sigrid Norris on Monday. We'll go back to focusing on the wave- and touch-tank areas and getting full coverage of interactions. Basically, our first test left us spread too thin to really capture what's going on, and our programmer said face detection and recognition isn't yet robust enough to track visitors through the whole center anyway. Though now, of course, we're running out of ethernet ports in the front half of the Visitor Center for those extra cameras.

One thing we had been noticing with the cameras was a lot of footage of "backs and butts" as people walk away from one camera or face a different exhibit. Sigrid's take is that this is actually valuable data, capturing the multimodal communication of posture and foot and body position. This is especially true for peripheral participants, such as group members who are watching more than driving the activity, or other visitors learning how to use exhibits by watching those who are there first.

We did figure out the network issue that was causing the video stoppage/skipping. The cameras had all been set up on one server, on the assumption that the system would share the load between its two servers, but they needed to be configured on both servers for the load sharing to work. That requires some one-time administrative configuration on the back end, but the client (what the researchers using the system see) still displays all camera feeds regardless of which server is driving which camera at any given time. So now it's all hunky-dory.
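I don't know the vendor's internals, so this is just a toy illustration of the idea (made-up camera names, and a plain dictionary standing in for the real configuration): each server records its own subset of cameras, and the client simply shows the union.

```python
# Toy illustration only -- not the actual surveillance software API.
# Each recording server is configured with its own subset of cameras;
# the viewing client merges them, so researchers see every feed no
# matter which server is driving it.
assignments = {
    "server-1": ["cam01", "cam02", "cam03"],
    "server-2": ["cam04", "cam05", "cam06"],
}

def client_view(assignments):
    """Return every camera feed across all servers, as the client shows."""
    return sorted(cam for cams in assignments.values() for cam in cams)

print(client_view(assignments))
# ['cam01', 'cam02', 'cam03', 'cam04', 'cam05', 'cam06']
```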

The wave tanks are also getting some redesigns after all the work and testing over the summer. The shore-tank wave maker (the "actuator") will no longer be made of aluminum (too soft), and it will get hydraulic braking to slow the handle as it reaches the end points. The wave-energy tank buoys are getting finished; then that tank will be sealed and used to show electricity generation in houses and buildings set on top. We'll also get new tables for all three tanks, which will lack middle legs and probably give us a bit more space to work with in the final footprint. And we'll get the flooring replaced with wet-lab flooring to prevent slip hazards and encourage drainage.

After clinical interviews and eye-tracking with my expert and novice subjects, I'm hoping to do a small pilot test with about 3 of the subjects in the functional magnetic resonance imaging (fMRI) scanner. I'm headed to OSU's sister/rival school, the University of Oregon, today to talk with my committee member there who is helping with this third prong of my thesis. We don't have a scanner here in Corvallis, as we don't have much of a neuroscience program, and that is traditionally the department that spearheads such research. The University of Oregon does, however, and I was getting down to the details of conducting my experiment there. I've been working with Dr. Amy Lobben, who studies real-world map-based tasks – a nice fit with the global data visualizations that Shawn has been working on for several years and that I came along to continue.

On the agenda was figuring out what they can tell me about IRB requirements, especially the risks part of the protocol. fMRI is actually comparatively harmless; it's the same technology used to image other soft tissues, like your shoulder or knee. It is a more recent, less invasive alternative to Positron Emission Tomography (PET) scans, which require injection of a radioactive tracer. fMRI simply measures blood flow by looking at the magnetic properties of oxygenated versus deoxygenated blood, which gives an idea of activity levels in different parts of the brain. However, there are even more privacy issues involved since we're looking at people's brains, and we have to include language about how the scan is non-diagnostic and how we can't provide medical advice should we even think something looked unusual (not that I know what really qualifies as unusual-looking, which is the point).

Also of interest (always) is how I'm going to fund this. The scans themselves are about $700/hour, and I'll provide incentives to my participants of maybe $50, plus driving reimbursement of another $50. So for even 3 subjects, we're talking around $2,500. I've been applying for a couple of doctoral fellowships, which so far haven't panned out, and am still waiting to hear on an NSF Doctoral Dissertation Research Improvement Grant. The other possibilities are economizing from the budget for other parts of my project that I proposed in the HMSC Holt Marine Education award, which I did get ($6,000 total), or getting some exploratory collaboration funding from U of O and OSU/Oregon Sea Grant, as this is a novel partnership bringing two new departments together.
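The arithmetic, for the record (a sketch; the one-hour-per-subject scan time is my assumption, not a quoted figure):

```python
# Rough per-subject and total cost using the figures in this post.
SUBJECTS = 3
SCAN_PER_HOUR = 700   # scanner rate, assuming one hour per subject
INCENTIVE = 50        # participant incentive
TRAVEL = 50           # driving reimbursement

per_subject = SCAN_PER_HOUR + INCENTIVE + TRAVEL
total = SUBJECTS * per_subject
print(f"${per_subject} per subject, ${total} for {SUBJECTS} subjects")
# $800 per subject, $2400 for 3 subjects -- roughly the $2,500 above
# once you allow a little slack for extra scanner time.
```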

But the big thing that came up was experimental design. After much discussion with Dr. Lobben and one of her collaborators, we decided there wasn't really enough time to pull off a truly interesting study if I'm going to graduate in June. Partly, it was an issue of needing more data on my subjects than I have now: without more extensive behavioral testing, I couldn't design a good task from my images to use as stimuli. And the question we thought wouldn't be too broad to ask – namely, are these users using different parts of their brains because of their training? – turns out to be too overwhelming to try to analyze in the time I have.

So, that means probably coming up with a different angle for the eye-tracking to flesh out my thesis a bit more. For one, I will run the eye-tracking with more of both populations, students and professors, rather than just a subpopulation of students chosen by performance, or a subpopulation of students vs. professors. For another, we may actually try some eye-tracking "in the wild" with these images on the Magic Planet on the exhibit floor.

In the meantime, I’m back from a long conference trip and finishing up my interviews with professors and rounding up students for the same.

Katie Woollven tells us how she's learning more about getting everyone DOING science research, aka Citizen Science or Public Participation in Scientific Research:

“I’ve been interested in Citizen Science research since I began my grad program, so I was really excited to attend the Public Participation in Scientific Research (PPSR) conference Aug 4-5 in Portland. The speakers were great, and it was nice to see how my questions about citizen science fit with the current research in this field.

Although public participation has always been important to science throughout history and is NOT new, the field of research on citizen science IS relatively new, and it is somewhat disjointed. Researchers in this field lack a common language (prime example: should we call it PPSR or citizen science?), which makes it difficult to stay abreast of the latest research. There have been calls for a national PPSR organization, and one of the conference goals was to get feedback from people in the field about what they would want that organization to do.

One of my favorite talks was from Heidi Ballard of UC Davis, who is interested in all the possible individual, programmatic, and especially community-level outcomes of PPSR projects. She asked questions about the degree and quality of participation, such as: Who participates in these projects, and in what parts of the scientific process? Whose interests are being served, and to what end? Who makes the decisions, and who has the power?

Another interesting part of Heidi's talk came when she touched on the relative strengths of the 3 models of PPSR projects. Citizen science projects can be divided into 3 categories (see the 2009 CAISE report): contributory (generally designed by scientists, with participants collecting data), collaborative (also designed by scientists, but participants may be involved in project design, data analysis, or communicating results), and co-created (designed by scientists and participants together, with some participants involved in all steps of the scientific process). I found this part fascinating, because I think learning from the strengths of all 3 models can make any program more successful. And of course, learning about different citizen science projects during the poster sessions was really exciting! Below are a few of my favorites.

PolarTREC – K-12 teachers go on a 2-6 week science research expedition in a polar region and then share the experience with their classroom. I think this is really interesting because of the motivational aspect of kids participating in (and, according to Sarah Crowley, even improving) authentic scientific research.

Port Townsend Marine Science Center Plastics Project – Volunteers sample beaches for micro-plastics around the US Salish Sea. I've heard a lot about this center, and the strength of their volunteer base is amazing.

Nature Research Center, North Carolina Museum of Natural Sciences – I really want to visit this museum! Visitors can engage in the scientific process on the museum floor, in one case by making observations on video feed from a field station.”

Conference talks, poster abstracts, and videos

Katie Woollven is in the Marine Resource Management program, focusing on Marine Education.

ed. note – apologies for the sporadic postings these last few days. Katie Stofer has been out of town, and things weren’t quite as well set up for other lab members to start posting themselves.

Pulling it all together and making sense of things proves to be one of the hardest tasks for Julie:

“I can’t believe this summer is about over.  I only have 3 days left at Hatfield.  Those 3 days will be filled with frantic work getting the rest of my exhibit proposal pulled together as well as my Sea Grant portfolio and presentation done for Friday.  I go home Saturday morning and I haven’t even figured out when I’m going to pack.  Eek.

But back to the point at hand.  Doing social science has been such a fun experience.  I really loved talking to people to get their feedback and opinions on Climate Change and the exhibit.  I’m so excited for this exhibit.  I want it to be fantastic and I’ve been working very hard on it.  I am stoked to visit next summer to see it in the flesh!

One thing that I find really challenging about doing this kind of research, though, is pulling together the data and putting it into a readable format for something like my End of Summer Final Presentation on Friday!  The big survey I did, for instance, was 16 questions, and the data collected is very qualitative and doesn't fit neatly into a table on a PowerPoint slide.  So I have to determine which things to pull out to show and exactly how to do it.  I feel confident that I'll get it down; it's just going to perhaps rob me of some sleep the next couple of days.

Today (Tuesday) I finally got to do something that I should've done long ago.  Mark took me into the "spy room," as some call it, and showed me all the awesome video footage being recorded in the Visitor Center.  It's really incredible!  I was able to download a few videos of myself interpreting at the touch tank, which Mark suggested would be a good addition to my portfolio.  Now I feel like a real member of the Free-Choice Learning crew.

This summer has given me a wealth of experiences that will really benefit my future… I can't wait to see what that future holds.”