That’s the question we’re facing next: what kind of audio system we need to collect visitor conversations. The mics built into the AXIS cameras we’re using just aren’t sensitive enough. Not entirely surprising, given that the cameras are normally used for video surveillance only (recording audio is generally illegal in security settings), but it does leave us to our own devices to figure something else out. Again.

Of course, we have the same issues as before: limited external power, and location – each mic has to be close enough to plug into a camera to be incorporated into the system. Plus, now we need at least some of them to be waterproof, which isn’t a common feature of microphones (the cameras are protected by their domes and general housing). We also have to think about directionality: if we come up with something that’s too sensitive, we may get bleed-over across several mics, which our software won’t be able to separate. If they’re not sensitive in enough directions, though, we’ll either need a ton of mics (I mean, like 3–4 per camera) or we’ll have a very limited conversation-capture area at each exhibit. And any good museum folk know that people don’t generally stand in one spot and talk!

So we have a couple of options that we’re starting with. One is a really cheap, messy mic with a lot of exposed wires, which may present an aesthetic issue at the very least; the others are more expensive models that may or may not be waterproof and more effective. We’re working with collaborators from The Exploratorium on this, but up to now they’ve generally only used audio recording in areas tucked back from the noisiest parts of the exhibit floor and heavily soundproofed besides. They’re looking to expand as they move to their new building in the spring, however, so hopefully by putting our heads together and, as always, testing things boots-on-the-ground, we’ll have some better ideas soon. Especially since we’ve stumped all the more traditional audio specialists we’ve put this problem to so far.

The network did turn out to be the cause of most of our camera skipping. We had all 25+ cameras running on MJPEG, which was driving our network usage through the roof at almost 10 MB/sec per camera on a 100 MB pipe. We did have to have the Convergint tech come out to help figure it out and re-configure a few things, with some small hints along the way.

First, we switched some of the cameras to H.264 where we were OK with a slightly less crisp picture, like the establishment shots that follow how people move from exhibit to exhibit. This drops the network usage to less than 1 MB/sec per camera, though it does drive CPU usage up a bit – a fair tradeoff, since our computers are dedicated to this video processing.
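For anyone curious about the arithmetic behind the switch, here’s a rough back-of-envelope sketch. The per-camera rates are the approximate figures above (using the units as quoted), and the split between MJPEG and H.264 cameras is just an illustrative guess, not our actual camera list:

```python
# Rough bandwidth estimate for the camera network (illustrative numbers only).
# Per-camera rates are the approximate figures from the post; the MJPEG/H.264
# split below is a hypothetical example, not our actual configuration.

MJPEG_RATE = 10.0      # ~MB/sec per camera streaming MJPEG
H264_RATE = 1.0        # ~MB/sec per camera streaming H.264
PIPE_CAPACITY = 100.0  # ~MB pipe, as quoted above

def total_bandwidth(num_mjpeg, num_h264):
    """Aggregate streaming load for a given mix of cameras."""
    return num_mjpeg * MJPEG_RATE + num_h264 * H264_RATE

# All 25 cameras on MJPEG: far over the pipe.
print(total_bandwidth(25, 0))   # 250.0 -> saturates the pipe
# Keep 5 close-up cameras on MJPEG, move 20 establishment cameras to H.264.
print(total_bandwidth(5, 20))   # 70.0 -> comfortably under capacity
```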

We also set up user accounts on the slave server as well as the master, which allowed us to spread the cameras across the two machines and distribute the load. We’re now working with our IT folks to bridge the servers so the load is spread amongst all four that we have – even if we’re driving nearly the full network usage, we’ll be doing it on four servers instead of one. Finally, we put the live video on a different drive, which also frees up processing power.
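To give a flavor of what “spreading the cameras across servers” means in practice, here’s a toy sketch of a round-robin assignment. The camera and server names are made up, and the real assignment happens through the recording software’s admin interface, not a script like this:

```python
# Illustrative sketch only: round-robin assignment of camera feeds to
# recording servers, so no single server carries the whole streaming load.
# Camera and server names here are invented for the example.

from itertools import cycle

cameras = [f"camera_{i:02d}" for i in range(1, 29)]    # 28 cameras
servers = ["master", "slave_1", "slave_2", "slave_3"]  # 4 recording servers

assignment = {}
for camera, server in zip(cameras, cycle(servers)):
    assignment.setdefault(server, []).append(camera)

for server, cams in assignment.items():
    print(f"{server}: {len(cams)} cameras -> {cams}")
```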

Just a few tips and tweaks seem to have given us much smoother playback. Now to get more IP addresses from campus to spread the network load even further as we think ahead to more cameras.

We’re ready for round 2 of camera placement, having met with Lab advisor Sigrid Norris on Monday. We’ll go back to focusing on the wave- and touch-tank areas and getting full coverage of interactions. Basically, our first test left us spread too thin to really capture what’s going on, and our programmer says face detection and recognition aren’t yet robust enough to track visitors through the whole center anyway. Though now, of course, we’re running out of Ethernet ports in the front half of the Visitor Center for those extra cameras.

One thing we had been noticing with the cameras was a lot of footage of “backs and butts” as people walk away from one camera or face a different exhibit. Sigrid’s take is that this is actually valuable data, capturing multimodal aspects of communication such as posture, foot position, and body orientation. This is especially true for peripheral participants, such as group members who are watching more than driving the activity, or other visitors learning how to use exhibits by watching those who got there first.

We did figure out the network issue that was causing the video stoppage and skipping. All the cameras had been set up on a single server on the assumption that the system would share the load between its two servers, but they actually needed to be set up on both servers for load sharing to work. That requires some one-time administrative configuration on the back end, but the client (what researchers using the system see) still displays every camera feed regardless of which server is driving which camera at any given time. So now it’s all hunky-dory.
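The nice part is that the load sharing is invisible to the researchers. As a toy illustration of the idea (the names and data structures here are invented, not how the actual software works internally), the client just merges each server’s feed list into one unified view:

```python
# Toy sketch of how a viewing client can present one unified camera list
# even though recording duties are split across servers. All data here is
# invented for illustration; the real system handles this internally.

server_feeds = {
    "master": {"octopus_tank_1": "online", "wave_tank_overhead": "online"},
    "slave":  {"touch_tank_close_1": "online", "touch_tank_close_2": "offline"},
}

def merged_view(feeds_by_server):
    """Flatten per-server feed lists into one camera -> (server, status) map."""
    merged = {}
    for server, feeds in feeds_by_server.items():
        for camera, status in feeds.items():
            merged[camera] = (server, status)
    return merged

for camera, (server, status) in sorted(merged_view(server_feeds).items()):
    print(f"{camera:22s} {status:8s} (recorded on {server})")
```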

The wave tanks are also getting some redesigns after all the work and testing over the summer. The shore-tank wave maker (the “actuator”) won’t be made of aluminum (too soft), and it will have hydraulic braking to slow the handle as it reaches the end points. The wave-energy tank’s buoys are getting finished; then that tank will be sealed and used to show electricity generation in houses and buildings set on top. We’ll also get new tables for all three tanks, without middle legs, which should give us a bit more space to work with in the final footprint. And we’ll get the flooring replaced with wet-lab flooring to prevent slip hazards and improve drainage.

After clinical interviews and eye-tracking with my expert and novice subjects, I’m hoping to do a small pilot test with about 3 of the subjects in a functional magnetic resonance imaging (fMRI) scanner. I’m headed to OSU’s sister/rival school, the University of Oregon, today to talk with my committee member there who is helping with this third prong of my thesis. We don’t have an fMRI facility here in Corvallis, as we don’t have much of a neuroscience program, and that is traditionally the department that spearheads such research. The University of Oregon does, however, so today is about getting down to the details of conducting my experiment there. I’ve been working with Dr. Amy Lobben, who does studies with real-world map-based tasks – a nice fit with the global data visualizations that Shawn has been working on for several years and that I came along to continue.

On the agenda was figuring out what they can tell me about IRB requirements, especially the risks part of the protocol. fMRI is actually comparatively harmless; it’s the same technology used to image other soft tissues, like your shoulder or knee. It’s also a more recent, less invasive alternative to Positron Emission Tomography (PET) scans, which require injection of a radioactive tracer. fMRI simply measures blood flow by looking at the magnetic properties of oxygenated versus deoxygenated blood, which gives an idea of activity levels in different parts of the brain. However, there are extra privacy issues involved since we’re looking at people’s brains: we have to include language about how the scan is non-diagnostic, and we can’t provide medical advice even if we thought something looked unusual (not that I know what really qualifies as unusual-looking, which is the point).

Also of interest (always) is how I’m going to fund this. The scans themselves are about $700/hour, and I’ll provide incentives to my participants of maybe $50, plus driving reimbursement of another $50. So for even 3 subjects, we’re talking roughly $2,500. I’ve been applying for a couple of doctoral fellowships, which so far haven’t panned out, and am still waiting to hear on an NSF Doctoral Dissertation Research Improvement Grant. The other possibilities are economizing on other parts of the project I proposed for the HMSC Holt Marine Education award, which I did get ($6,000 total), or getting some exploratory collaboration funding from U of O and OSU/Oregon Sea Grant, as this is a novel partnership bringing two new departments together.
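For the record, the rough math behind that figure (the one-hour scan time per subject is my assumption; the other numbers are from above):

```python
# Back-of-envelope budget for the fMRI pilot. The hour-per-subject scan time
# is an assumption; the other figures are the estimates quoted above.
scan_rate = 700         # $ per hour of scanner time
incentive = 50          # $ participant incentive
mileage = 50            # $ driving reimbursement
subjects = 3
hours_per_subject = 1

total = subjects * (scan_rate * hours_per_subject + incentive + mileage)
print(total)  # 2400 -> call it ~$2,500 with a little slack in scan time
```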

But the big thing that came up was experimental design. After much discussion with Dr. Lobben and one of her collaborators, we decided there isn’t really enough time to pull off a truly interesting study if I’m going to graduate in June. Partly, it’s an issue of needing more data on my subjects now in order to build a good task from my images – we’d need more extensive behavioral testing just to create the stimuli. And the question we had thought was narrow enough to ask – are these users using different parts of their brains because of their training? – turns out to be too overwhelming to analyze in the time I have.

So, that means probably coming up with a different angle for the eye-tracking to flesh out my thesis a bit more. For one, I will run the eye-tracking on more of both populations, students and professors, rather than just a subpopulation of students chosen by performance, or a subpopulation of students vs. professors. For another, we may actually try some eye-tracking “in the wild” with these images on the Magic Planet on the exhibit floor.

In the meantime, I’m back from a long conference trip and finishing up my interviews with professors and rounding up students for the same.

Well, not literal ghosts, but blank spots. It seems we may be facing our first serious bandwidth issues with 28 cameras installed and plenty of summer visitors. Whatever the reason, we’re getting hiccups in our getalongs – cameras are randomly freezing for a few seconds to several minutes each, losing connection with the system, and generally not behaving correctly.

Today, for example, we were collecting images of ourselves from both the video cameras and a still digital camera to compare performance for facial recognition. As Harrison, Mark, and Diana moved from right to left along our touch tanks, only one of the three close-up “interaction” cameras they stopped at actually picked them up. It’s not a case of them actually being elsewhere, because we can see them on the overhead “establishment” cameras. And it’s not a case of the cameras failing to record due to motion-sensing issues (we think), because in one of the two missing shots, a family had been interacting with the touch tank for a few minutes before the staff trio came up behind them.

This morning I also discovered a lot of footage missing from today’s feeds, from cameras I swear I saw on earlier. I’ve been sitting at the monitoring station pulling clips for upcoming presentations and for the facial-recognition testing, and the latest footage from some of the octopus-tank cameras shows up as dimly lit 5 a.m. footage. It’s not a synchronization problem, either (I think): the bar image on the viewer, a simple map of recording times across the cameras, shows blanks for times when I was watching families on those same cameras earlier today. However, when I look again now, hours later, the gaps don’t seem nearly as big as they did this morning. My guess is that viewing while recording this morning delayed playback of some of the recent-but-not-quite-live footage, and the system cached it and caught up later.
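To sanity-check whether footage is truly missing rather than just lagging, something like the sketch below would do it: take each camera’s recorded time spans for the day and flag any gap longer than a few minutes. The camera names, timestamps, and threshold here are made up for illustration – the real viewer already draws this as the bar map I mentioned.

```python
# Illustrative sketch: flag gaps in each camera's recorded time spans.
# Timestamps, camera names, and the gap threshold are invented for the example.

from datetime import datetime, timedelta

recordings = {
    "octopus_cam_1": [
        (datetime(2012, 8, 7, 5, 0), datetime(2012, 8, 7, 5, 10)),
        (datetime(2012, 8, 7, 10, 30), datetime(2012, 8, 7, 12, 0)),
    ],
}

GAP_THRESHOLD = timedelta(minutes=5)

def find_gaps(spans, threshold=GAP_THRESHOLD):
    """Return (start, end) pairs where recording stopped for longer than threshold."""
    gaps = []
    for (_, prev_end), (next_start, _) in zip(spans, spans[1:]):
        if next_start - prev_end > threshold:
            gaps.append((prev_end, next_start))
    return gaps

for camera, spans in recordings.items():
    for start, end in find_gaps(sorted(spans)):
        print(f"{camera}: no footage from {start:%H:%M} to {end:%H:%M}")
```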

Is it because some of the cameras are network-powered and some are plugged in? Is it because the motion detection is light-sensitive, so cameras getting too much light have a harder time sensing motion, or because it’s based on depth of field and the action we want is too far afield? Maybe it’s a combination of viewing footage while it’s being recorded, bandwidth issues, and motion-sensitivity issues, but it ain’t pretty.

Summer Sea Grant Scholar Julie catches us up on her prototyping for the climate change exhibit:

“Would you like to take a survey?”  Yes, I have said that very phrase or a variation of it many times this week.  I have talked to more than 50 people and received some good feedback for my exhibit.  I also began working on my exhibit proposal and visuals to go along with it.  This is so fun!  I love that I get to create this, and my proposal will be used to pitch the plan to whatever company they get to make the exhibit program.  How sweet is that?

So, the plan is to have a big multi-touch table – here is what it looks like, from the Ideum website:

[Image: the Ideum multi-touch table, from the company’s website]

You can’t see it very well from that picture, but people can grab photos, videos, and other digital objects, resize them, move them around, and place them wherever they want using swipe, pinch, and other gestures, just like on tablets and multi-touch smartphones. It also lets multiple users surround the table and work together or independently. This is a video showing the table being tested here at Hatfield – it has a lot of narration about Free-Choice Learning, and you can see the table in action a little bit.

People will be able to learn about climate change and then create their own “story” about what they think is important about climate change or global warming.  My concept of the interface for this has gone through a metamorphosis.  Here are the various transformations the interface has gone through:

Stage 1: My initial messy drawing to get my thoughts on paper and make sure I was on the same page with the exhibit team.  At this point I thought we would just have a simple touch screen kiosk.

[Image: Stage 1 – initial sketch of the interface]

Stage 2: Mock-up made by Allison the graphic designer, using stage 1 as a guide.  I showed this to people as I interviewed them so they’d have an idea of what the heck I was talking about.

[Image: Stage 2 – Allison’s mock-up of the interface]

Stage 3: My own digital version I’m currently working on, now more in sync with the touch table.  The final version will go into my exhibit proposal.

[Image: Stage 3 – digital version of the interface in progress]

Here’s what it looks like with a folder opened – upon touching a file, an animation would show the file opening and spilling its contents onto the workspace, to end up kind of like this:

[Image: the interface with a folder opened and its contents spread across the workspace]

This is a very exciting project to work on, and I’m glad to get to use and hone my skills in creativity, organization, and attention to detail. This exhibit proposal will certainly need a lot of all three of those things. It’s also very interesting to interview people – I find my preconceptions dashed often, which is very refreshing. And it’s great to be able to tailor the exhibit to several different audiences, in hopes that the message will be well received by all, no matter where they currently stand in relation to the issue of climate change/global warming. Talking with folks helps me know for sure what kind of material each group wants, so I can maximize the success of the exhibit with that group. I can’t wait to see this thing in the flesh – I have already decided I will have to take a vacation out here next summer just to check it out!