That’s the question we’re facing next: what kind of audio system do we need to collect visitor conversations? The built-in mics on the AXIS cameras we’re using just aren’t sensitive enough. Not entirely surprising, given that the cameras are normally used for video surveillance only (recording audio is generally illegal in security settings), but it does leave us to our own devices to figure something else out. Again.

Of course, we have the same issues as before: limited external power, and location – each mic has to be close enough to plug into a camera to be incorporated into the system. On top of that, at least some of the mics now need to be waterproof, which isn’t a common feature of microphones (the cameras are protected by their domes and general housing). We also have to think about directionality. If we come up with something that’s too sensitive, we may get bleed-over across several mics, which our software won’t be able to separate. If they’re not sensitive enough in enough directions, though, we’ll either need a ton of mics (I mean, like 3-4 per camera) or we’ll have a very limited conversation-capture area at each exhibit. And any good museum folk know that people generally don’t stand in one spot and talk!

So we have a couple of options to start with. One is a really cheap, messy mic with a lot of exposed wires, which presents an aesthetic issue at the very least; the others are more expensive models that may or may not be waterproof and more effective. We’re working with collaborators from the Exploratorium on this, but up to now they’ve generally only recorded audio in areas tucked back from the noisiest parts of the exhibit floor, and soundproofed quite a bit besides. They’re looking to expand as they move to their new building in the spring, however, so hopefully by putting our heads together and, as always, testing things boots-on-the-ground, we’ll have some better ideas soon. Especially since we’ve stumped all the more traditional audio specialists we’ve put this problem to so far.

For those of you just joining us, I’m developing a game called Deme for my master’s project. It’s a tactical game that models an ecosystem, and it’s meant primarily for adults. I’m studying how people understand the game’s mechanics in relation to the real world, in an effort to better understand games as learning and meaning-making tools.

I stumbled across Roll20 quite by accident while reading the PA Report. What I like about Roll20 is that your table session can apparently be shared as a link (I haven’t started digging yet, as I only found out about it a few hours ago). Also, each token can be assigned a hit counter, and damage tracking is something of a hassle in Deme’s current incarnation.

I’ll have more to report after I play around with this for a while. Moving the game from one incarnation and environment to another has forced me to think of it as a system rather than a product. I want Deme to be portable, and a robust system can be used with just about any tabletop, real or virtual. For an example of a game system, see Wizards of the Coast’s d20 System. The d20 System happens to be a handy model for quantizing events and behaviors; handy enough, in fact, to inform the data collection framework for our observation systems in the Visitor Center.
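As a rough illustration of what I mean by quantizing: the core d20 mechanic resolves an action as a twenty-sided die roll plus a modifier compared against a difficulty number, and that same kind of discrete, codeable record is roughly how I imagine logging events for the observation framework. Here’s a minimal Python sketch; the names and numbers are mine and purely hypothetical, not anything actually in Deme or the Visitor Center system.

```python
import random
from dataclasses import dataclass

@dataclass
class Check:
    """One quantized event: an actor attempts an action against a difficulty."""
    actor: str
    action: str
    modifier: int      # skill/circumstance bonus, d20-style
    difficulty: int    # target number the roll must meet or beat

    def resolve(self) -> dict:
        roll = random.randint(1, 20)  # the d20 itself
        return {
            "actor": self.actor,
            "action": self.action,
            "roll": roll,
            "total": roll + self.modifier,
            "success": roll + self.modifier >= self.difficulty,
        }

# A made-up ecosystem event in Deme-ish terms; the same structure could just as
# easily log a coded visitor interaction at an exhibit.
print(Check(actor="heron", action="forage", modifier=4, difficulty=15).resolve())
```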

Of course, Deme cannot be run single-player as a tabletop game. That’s a double-edged sword. A tabletop game (even a virtual one) is an immediate social experience. A single-player game is a social experience too, but it’s an asynchronous interaction between the developer(s) and the player. I rather like the tabletop approach because each species has a literal voice. The unearthly torrent of resulting qualitative data may be tough to sort out, but I think that’s a good problem to have so long as I know what I’m looking for.

At this phase, the tabletop version is still officially—as much as I can make something official—just a pilot product. I don’t know if it will become something more, but I feel like it deserves a shot.

I have been shuffling through data from the Exploratorium’s scientist-in-residence (SIR) project, and it got me thinking about what data, and in what forms, can or should be shared on a blog. For now, I am going to share a few word clouds of raw data. These do not show full sentences, nor can you tell which participant said what.

Each of these word clouds was based on a survey question that I wrote and administered.

Visitors to the exhibition space were asked, upon leaving, “What would you tell a friend this space was about?” The word cloud below contains data from the March residency, which focused on severe storm science (with scientists from NOAA’s National Severe Storms Laboratory).

[Word cloud: visitor responses to “What would you tell a friend this space was about?” from the March severe storm residency]

The Exploratorium Explainers were an integral part of this project.  At the end of the second year I asked all of the Explainers, the Lead Explainers, and the Explainer managers to voluntarily complete the online survey.

Here is how Explainer managers responded to “Describe the impacts of this project on the scientists.”

[Word cloud: Explainer manager responses to “Describe the impacts of this project on the scientists.”]

While the Explainer survey was quite long and there is a lot of rich data there, I want to focus on their thoughts about the iPad.  The iPad was incorporated into the exhibition space as a mediating tool (as specified in the grant proposal).  I asked the Explainers “Where and how do you think the iPad was incorporated throughout the project?”  Their response…

[Word cloud: Explainer responses to “Where and how do you think the iPad was incorporated throughout the project?”]

So, what can we gain from word clouds?  It is certainly one way to look at raw data.  Thoughts?
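For anyone who wants to try this with their own raw text: a minimal sketch of the same idea in Python might look like the following, using the open-source wordcloud package (this is not necessarily how the clouds above were made, and the responses list is a made-up placeholder, not actual survey data).

```python
# pip install wordcloud matplotlib
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

# Placeholder responses standing in for raw, de-identified survey answers.
responses = [
    "the space was about storms and weather",
    "scientists talking about tornadoes and radar",
    "hands-on weather science with real researchers",
]

text = " ".join(responses)
cloud = WordCloud(width=800, height=400,
                  background_color="white",
                  stopwords=STOPWORDS).generate(text)

plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```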


If you think you get low response rates for research participants at science centers, try recruiting first- and second-year non-science-major undergrads in the summer. So far, since posting my first flyers in May, I have gotten 42 people to even visit the eligibility survey (either via the Quick Response/QR code or the tinyurl), and a miserable 2 have completed my interview. I only need 18 participants total!
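(As an aside for anyone setting up similar flyers: generating the QR code itself is the easy part. Here’s a minimal sketch using the Python qrcode package, pointing at a stand-in URL rather than my actual survey link.)

```python
# pip install qrcode[pil]
import qrcode

# Stand-in URL; the real flyer pointed at the eligibility survey's tinyurl.
img = qrcode.make("https://tinyurl.com/example-eligibility-survey")
img.save("flyer_qr.png")
```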

Since we’re a research outfit, here’s the breakdown of the numbers:

Action                         Number   % of those viewing survey
Visit eligibility survey           42      100
Complete eligibility survey        18       43
Schedule interview                  5       12
Complete interview                  2        5
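Those percentages are just each count over the 42 people who viewed the survey; a quick sketch of the arithmetic, for the fellow data nerds:

```python
# Recruitment funnel so far: each step as a share of everyone who viewed the survey.
funnel = {
    "Visit eligibility survey": 42,
    "Complete eligibility survey": 18,
    "Schedule interview": 5,
    "Complete interview": 2,
}

viewed = funnel["Visit eligibility survey"]
for step, count in funnel.items():
    print(f"{step}: {count} ({count / viewed:.0%})")
```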

Between scheduling and completing, I’ve had 2 no-shows and 1 person who was actually an engineering major and didn’t read the survey correctly. I figure that of the people who visit the survey but don’t complete it, most realize they are not eligible (having not read the criteria on the flyer), which is OK.

What is baffling and problematic is the low percentage who complete the survey but then don’t respond to schedule an interview – the drop-off from 18 to 5. I can only figure that they aren’t expecting, don’t find, or don’t connect with the Doodle poll of available time slots that I send via email. It might go to junk mail, or it may not be clear what the poll is about. There’s a section at the end of the eligibility survey to let folks know a Doodle poll is coming, and I’ve sent it twice to most folks who haven’t responded. I’m not sure what else I can do, short of telephoning the people who gave me phone numbers. I think that’s my next move, honestly.

Then there are the no-shows, which is just plain rude. One did email me later and ask to reschedule; that interview did get done. Honestly, this part of “research” is no fun; it’s just frustrating. However, this is the week before school starts in these parts, so I will probably soon set up a table in the Quad with my computer and recruit and schedule people there. That might not solve the no-show problem, but if I can get 100 people scheduled, even if half of them no-show, I’ll have a different, much better problem – cancelling on everyone else! I’m also asking friends who are instructors to let their classes know about the project.

On a side note to our regular readers, as it’s been almost a year of blogging here, we’re refining the schedule a bit. Starting in October, you should see posts about the general Visitor Center research activities by any number of us on Mondays. Wednesdays and Fridays will most often be about student projects for theses and such. Enjoy, and as always, let us know what you think!


We’re ready for round 2 of camera placement, having met with Lab advisor Sigrid Norris on Monday. We’ll go back to focusing on the wave- and touch-tank areas and getting full coverage of interactions there. Basically, our first test left us spread too thin to really capture what’s going on, and our programmer said face detection and recognition isn’t yet robust enough to track visitors through the whole center anyway. Of course, now we’re running out of Ethernet ports in the front half of the Visitor Center for those extra cameras.
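For a sense of what that face detection step involves, here’s a generic sketch using OpenCV’s stock frontal-face detector on a single frame; this is not our programmer’s actual pipeline, and the filename is just a placeholder.

```python
# pip install opencv-python
import cv2

# OpenCV ships a pre-trained Haar cascade for frontal faces.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

frame = cv2.imread("camera_frame.jpg")  # a still pulled from one camera (placeholder path)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Detection only finds faces in this one frame; recognizing the *same* visitor
# across cameras (re-identification) is the part that isn't robust yet.
print(f"{len(faces)} face(s) detected")
```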

One thing we had been noticing with the cameras was a lot of footage of “backs and butts” as people walk away from one camera or face a different exhibit. Sigrid’s take is that this is actually valuable data, capturing the multimodal communication carried by posture, foot position, and body orientation. This is especially true for peripheral participants, such as group members who are watching more than driving the activity, or other visitors learning how to use an exhibit by watching those who got there first.

We did figure out the network issue that was causing the video to stop and skip. The cameras had all been set up on the same server on the assumption that the system’s two servers would share the load between them, but they actually needed to be registered on both servers for load sharing to work. That requires some one-time administrative configuration on the back end, but the client (what researchers using the system see) still displays all of the camera feeds regardless of which server is driving which camera at any given time. So now it’s all hunky-dory.

The wave tanks are also getting some redesigns after all the work and testing over the summer. The shore-tank wave maker (the “actuator”) won’t be made of aluminum after all (too soft), and it will get hydraulic braking to slow the handle as it reaches the end points. The wave-energy tank’s buoys are being finished; then that tank will be sealed and used to show electricity generation in houses and buildings set on top. We’ll also get new tables for all three tanks, which will lack middle legs and probably give us a bit more space to work with for the final footprint. And the flooring will be replaced with wet-lab flooring to prevent slip hazards and encourage drainage.

After clinical interviews and eye-tracking with my expert and novice subjects, I’m hoping to do a small pilot test with about 3 of the subjects in a functional magnetic resonance imaging (fMRI) scanner. I’m headed to OSU’s sister/rival school, the University of Oregon, today to talk with my committee member there who is helping with this third prong of my thesis. We don’t have a research scanner here in Corvallis, as we don’t have much of a neuroscience program, and neuroscience is traditionally the department that spearheads such research. The University of Oregon, however, has one, and the trip is about getting down to the details of conducting my experiment there. I’ve been working with Dr. Amy Lobben, who does studies with real-world map-based tasks, a nice fit with the global data visualizations that Shawn has been working on for several years and that I came along to continue.

On the agenda was figuring out what they can tell me about IRB requirements, especially the risks section of the protocol. fMRI is actually comparatively harmless; it’s the same MRI technology used to look at other soft tissues, like your shoulder or knee. It’s also a more recent, less invasive alternative to positron emission tomography (PET) scans, which require injection of a radioactive tracer. fMRI simply measures changes in blood flow and oxygenation in the brain, which gives an idea of activity levels in different regions. However, there are extra privacy issues involved since we’re looking at people’s brains: we have to include language about how the scans are non-diagnostic, and we can’t provide medical advice even if we think something looks unusual (not that I know what really qualifies as unusual-looking, which is the point).

Also of interest (always) is how I’m going to fund this. The scans themselves run about $700/hour, and I’ll provide participant incentives of maybe $50 each, plus driving reimbursement of another $50. So even for 3 subjects, we’re talking around $2,500. I’ve been applying for a couple of doctoral fellowships, which so far haven’t panned out, and I’m still waiting to hear on an NSF Doctoral Dissertation Research Improvement Grant. The other possibilities are economizing on the budget for other parts of my project proposed in the HMSC Holt Marine Education award, which I did get ($6,000 total), or getting some exploratory collaboration funding from U of O and OSU/Oregon Sea Grant, since this is a novel partnership bringing two new departments together.
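For the curious, here’s the back-of-the-envelope version of that figure, assuming roughly an hour of scanner time per subject:

```python
# Rough pilot budget, assuming ~1 hour of scanner time per subject.
scan_cost = 700   # scanner time, $/hour
incentive = 50    # participant incentive, $
mileage = 50      # driving reimbursement, $
subjects = 3

total = subjects * (scan_cost + incentive + mileage)
print(total)      # 2400 -- call it ~$2,500 once a scan runs a bit over an hour
```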

But the big thing that came up was experimental design. After much discussion with Dr. Lobben and one of her collaborators, we decided there isn’t really enough time to pull off a truly interesting study if I’m going to graduate in June. Part of the problem is that designing a good task around my images would require more data on my subjects than I have now, meaning more extensive behavioral testing just to create the stimuli. And the question we had thought was narrow enough to ask, namely whether these users use different parts of their brains as a result of their training, turns out to be too overwhelming to analyze in the time I have.

So that probably means coming up with a different angle for the eye-tracking to flesh out my thesis a bit more. For one, I will run the eye-tracking on more of both populations, students and professors, rather than just a subpopulation of students selected by performance, or a subpopulation of students versus professors. For another, we may actually try some eye-tracking “in the wild” with these images on the Magic Planet on the exhibit floor.

In the meantime, I’m back from a long conference trip and finishing up my interviews with professors and rounding up students for the same.