A new partnership with the Philomath (pronounced fill-OH-muth for you out-of-town readers) High Robotics Engineering Division (PHRED) helped the HMSC Free-Choice Learning Lab overcome a major information design hurdle. An ongoing challenge for our observation system is recording usage of small, non-electronic, movable exhibit components – think bones, shells, levers, and spinning wheels.

PHRED mentors Tom Health and Tom Thompson will work with students to develop tiny, wireless microprocessor sensors that can be attached to any physical moving exhibit component and report its use to our database. The team will be using the popular Arduino development platform that has been the technological heart of the Maker movement.
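To make the idea concrete, here’s a minimal Arduino-style C++ sketch of the sort of sensor the students might build. The tilt-switch wiring, pin number, and serial reporting are all placeholders for whatever hardware and wireless link the PHRED team actually chooses:

```cpp
// Illustrative exhibit-use sensor (a sketch, not the team's actual design).
// Assumes a tilt or vibration switch between pin 2 and ground; Serial stands
// in for whatever wireless link ends up reporting to the database.
const int SENSOR_PIN = 2;
const unsigned long DEBOUNCE_MS = 500;  // ignore rattles within half a second

int lastState = HIGH;
unsigned long lastEvent = 0;
unsigned long useCount = 0;

void setup() {
  pinMode(SENSOR_PIN, INPUT_PULLUP);  // pin reads HIGH until the switch closes
  Serial.begin(9600);
}

void loop() {
  int state = digitalRead(SENSOR_PIN);
  // A HIGH-to-LOW transition means the component just moved.
  if (state == LOW && lastState == HIGH && millis() - lastEvent > DEBOUNCE_MS) {
    lastEvent = millis();
    useCount++;
    Serial.print("component-moved,");  // one CSV-style event per interaction
    Serial.println(useCount);
  }
  lastState = state;
}
```

In a real deployment, the serial prints would be replaced by whatever wireless module the team settles on, with the receiving end writing each event to the observation database.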

This is a great partnership – the PHRED team has all the skills, enthusiasm, and creativity to tackle the project and build successful tools – not to mention gaining the recognition that comes from working on an NSF-funded project. Oregon Sea Grant gains more experience integrating after-school science clubs into funded research projects, while meeting the ever-challenging objective of engaging underserved communities.
Thanks to Mark for this update. 

In the last couple of weeks, Katie and I have been testing some options for capturing better-quality visitor conversation for the camera system using external mics.

As Katie mentioned last month, each camera’s built-in microphones are proving inadequate for capturing the good-quality audio needed by the eventual voice recognition system in “hot-spot” areas such as the touch tanks and front desk. As a result, we purchased some pre-amplified omnidirectional microphones and set about testing their placement and audio quality in these areas. This has been no easy process: the temporary wiring we put in place to hook the mics to the cameras is not as aesthetically pleasing in a public setting as one might hope, and we discovered that the fake touch-tank rocks are duct tape’s arch-enemy. The mics have also been put through their paces by various visitor kicks, bumps, and water splashes.

Besides the issue of keeping the mics in place, testing has also meant a steep learning curve in mic level adjustment. When we initially wired them up, I adjusted each mic (via a mixer) one by one to reduce “crackly” noises and distortion during loud conversations. However, I later realized that this overlooked necessary changes to the cameras’ audio settings and gain, which affect just how close a visitor has to be to one of the mics for us to actually hear them, particularly over the constant noise of running water around the tanks.

So today I am embarking on a technical adventure. Wearing wireless headphones and brandishing a flathead screwdriver, I am going to reset all the relevant cameras’ audio settings to zero gain, adjust the mic levels for balance across mics (there are multiple mics per camera) rather than for crackle, and then bring the gain up until the sample audio I pull from the camera system comes out cleaner. I’m not expecting to output audio with the clarity of a seastar squeak, but I will attempt to get output that captures clear conversation in our focal areas, even from the quietest of visitors. Avast me hearties, I be a sound buccaneer!
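For the curious, “cleaner” doesn’t have to be judged by ear alone. Here’s a rough C++ sketch of the kind of check one could run on sample audio pulled from the camera system; it assumes the sample has been exported as raw 16-bit little-endian mono PCM (a hypothetical export format – ours varies) and simply reports peak level, RMS level, and clipped-sample count:

```cpp
// Rough level check for an exported audio sample (illustrative only).
// Assumes raw 16-bit little-endian mono PCM in the file given on the
// command line.
#include <cmath>
#include <cstdint>
#include <cstdio>
#include <vector>

int main(int argc, char** argv) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s sample.raw\n", argv[0]);
        return 1;
    }
    FILE* f = std::fopen(argv[1], "rb");
    if (!f) {
        std::perror("open");
        return 1;
    }

    std::vector<int16_t> buf(4096);
    double sumSquares = 0.0;
    long total = 0, clipped = 0;
    int peak = 0;
    size_t got;
    while ((got = std::fread(buf.data(), sizeof(int16_t), buf.size(), f)) > 0) {
        for (size_t i = 0; i < got; ++i) {
            int v = buf[i];
            if (v == 32767 || v == -32768) ++clipped;  // sample hit full scale
            if (v < 0) v = -v;
            if (v > peak) peak = v;
            sumSquares += (double)buf[i] * buf[i];
            ++total;
        }
    }
    std::fclose(f);

    double rms = total ? std::sqrt(sumSquares / total) : 0.0;
    std::printf("samples=%ld peak=%d rms=%.1f clipped=%ld\n",
                total, peak, rms, clipped);
    return 0;
}
```

A peak near full scale with lots of clipped samples says to back the gain off; a very low RMS says the quietest visitors will be lost under the tank noise.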

That’s the question we’re facing next: what kind of audio system do we need to collect visitor conversations? The built-in mics on the AXIS cameras we’re using are just not sensitive enough. Not entirely surprising, given that the cameras are normally used for video-only surveillance (recording audio is generally illegal in security situations), but it does leave us to our own devices to figure something else out. Again.

Of course, we have the same issues as before: limited external power, and location – each mic has to be near enough to plug into a camera to be incorporated into the system. Plus, now we need at least some of them to be waterproof, which isn’t a common feature of microphones (the cameras are protected by their domes and general housing). We also have to think about directionality: if we come up with something that’s too sensitive, we may get bleed-over across several mics, which our software won’t be able to separate. If they’re not sensitive in enough directions, though, we’ll either need a ton of mics (I mean, like 3-4 per camera) or we’ll have a very limited conversation-capture area at each exhibit. And any good museum folk know that people generally don’t stand in one spot and talk!

So we have a couple of options that we’re starting with. One is a really cheap mic with a lot of exposed wires, which may present an aesthetic issue at the very least; the others are more expensive models that may or may not be waterproof and more effective. We’re working with collaborators from The Exploratorium on this, but up to now they’ve generally only recorded audio in areas tucked back from the noisiest parts of the exhibit floor, with quite a bit of soundproofing besides. They’re looking to expand as they move to their new building in the spring, however, so hopefully by putting our heads together and, as always, testing things boots-on-the-ground, we’ll have some better ideas soon. Especially since we’ve stumped all the more traditional audio specialists we’ve put this problem to so far.

Summer Sea Grant Scholar Julie catches us up on her prototyping for the climate change exhibit:

“Would you like to take a survey?”  Yes, I have said that very phrase or a variation of it many times this week.  I have talked to more than 50 people and received some good feedback for my exhibit.  I also began working on my exhibit proposal and visuals to go along with it.  This is so fun!  I love that I get to create this, and my proposal will be used to pitch the plan to whatever company they get to make the exhibit program.  How sweet is that?

So, the plan is to have a big multi-touch table – here is what it looks like, from the Ideum website:

[Image: Ideum multi-touch table]

You can’t see it very well in that picture, but people can grab photos, videos, and other digital objects, then resize, move, and place them wherever they want using swipe, pinch, and other gestures, just as on tablets and multi-touch smartphones. It also allows multiple users to surround the table and work together or independently. This video shows the table being tested here at Hatfield – it has a lot of narration about Free-Choice Learning, and you can see the table in action a little bit.

People will be able to learn about climate change and then create their own “story” about what they think is important about climate change or global warming.  My concept of the interface has gone through a metamorphosis.  Here are its various transformations:

Stage 1: My initial messy drawing to get my thoughts on paper and make sure I was on the same page as the exhibit team.  At this point I thought we would just have a simple touch-screen kiosk.

[Image: Stage 1 sketch]

Stage 2: Mock-up made by Allison, the graphic designer, using Stage 1 as a guide.  I showed this to people as I interviewed them so they’d have an idea of what the heck I was talking about.

[Image: Stage 2 mock-up]

Stage 3: My own digital version I’m currently working on, now more in sync with the touch table.  The final version will go into my exhibit proposal.

[Image: Stage 3 digital version]

Here’s what it looks like with a folder opened – upon touching a file, an animation would show it opening and spilling its contents onto the workspace, ending up kind of like this:

[Image: interface with an opened folder]

This is a very exciting project to work on, and I’m glad to get to use and hone my skills in creativity, organization, and attention to detail.  This exhibit proposal will certainly need a lot of all three of those things.  It’s also very interesting to interview people – I often find my preconceptions dashed, which is very refreshing.  And it’s great to be able to tailor the exhibit to several different audiences, in hopes that the message will be well received by all, no matter where they currently stand on the issue of climate change/global warming.  Talking with folks helps me know for sure what kind of material each group wants, so I can maximize the exhibit’s success with that group.  I can’t wait to see this thing in the flesh – I have already decided I will have to take a vacation out here next summer just to check it out!

We started the day with a couple of near-disasters but managed to make some good progress despite them. We lost control of a hose while filling the tsunami wave tank and doused one of the controlling computers. Luckily, it was off at the time – but it also shouldn’t have had its case open, and we should have been more aware of the hose! Ah, live and learn. No visitors were harmed, either.

It did help us identify that our network is not quite up to snuff for the camera system; we’re supposed to have four gigabit Ethernet connections but right now only have one (and with close to 30 cameras streaming video to the server, a single gigabit link doesn’t leave much headroom). We went to review the footage to see what happened with the tanks, but the camera with the right angle blanked out completely during just the time of the accident! Several of the other cameras are losing their connection to the server intermittently as well. We’re not at the point of collecting real data, though, so again, it’s just part of the learning process.
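Since intermittent dropouts are hard to catch by staring at monitors, a simple poller can log exactly when each camera disappears. Here’s a minimal C++ sketch (POSIX sockets; the IP addresses are hypothetical stand-ins for our real cameras) that tries a TCP connection to each camera’s RTSP port once a minute and prints its status:

```cpp
// Minimal reachability check for the camera network (a diagnostic sketch,
// not part of the Milestone system). Assumes each AXIS camera answers TCP
// on its RTSP port (554). Camera IPs below are hypothetical.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>

// Returns true if a TCP connection to ip:port succeeds.
static bool camera_reachable(const char* ip, int port) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return false;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    bool ok = connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) == 0;
    close(fd);
    return ok;
}

int main() {
    // Hypothetical addresses; substitute the real camera IPs.
    const char* cameras[] = {"192.168.1.101", "192.168.1.102", "192.168.1.103"};
    while (true) {  // run until interrupted (Ctrl-C)
        for (const char* ip : cameras)
            std::printf("%s %s\n", ip, camera_reachable(ip, 554) ? "up" : "DOWN");
        sleep(60);  // poll once a minute
    }
}
```

Matching the “DOWN” timestamps against the recording server’s logs should show whether the dropouts line up with network load.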

We also got more cameras installed, so we’re up to almost 30 in operation now. Not all are in their final places, but we’re getting closer and closer as we live with them for a while and see how people interact. We’ve also got the iPad interface set up so we can look at the cameras remotely using the Milestone XProtect app:

[Image: Milestone XProtect app on the iPad]

This will allow us to access the video footage from almost anywhere. It runs amazingly smoothly even on OSU’s finicky wireless network, and it seems to have slightly better image quality than the monitors (or maybe just better than my old laptop).

It’s a pretty powerful app, too, allowing us to choose the time we want to jump to, show picture in picture of the live feed, speed up or slow down playback, and capture snapshots we can email or save to the iPad Photo Library. Laura will install the full remote-viewing software on her laptop, too, to test that part of the operation out. That’s the one downside so far; most of our lab runs on Macs, while the Milestone system and the eyetracker are both on PCs, so we’ll have to buy a couple more laptops. Where’s that credit card?


So last week I posted about the evaluation project underway at the Portland Art Museum (PAM), and I wanted to give a few more details about how we are using the Looxcie cameras.

[Image: a Looxcie wearable camera]

Looxcies are basically Bluetooth headsets, just like the ones regularly seen used with cell phones, but with a built-in camera. I am currently using them as part of my research on docent-visitor interactions, and I chose them as a data collection tool for their ability to generate a good-quality “visitor-eye view” of the museum experience. I personally feel their potential as a research/evaluation tool in informal settings is endless, and I recently had some wonderful conversations with other education professionals at the National Marine Educators Association conference in Anchorage, AK, about where some other possibilities could lie – including as part of professional development practice for educators and in exhibit development.

At PAM, the Looxcies will be used to capture that view as visitors interact with exhibition pieces, specifically those related to the Museum Stories and Conversation About Art video-based programs. Fitting visitors with Looxcies will enable us to capture the interactions and conversations visitors have about the art on display as they move through the museum. The video data can then be analyzed for recurring themes in what and how visitors talk about art in the museum setting.

During our meeting with Jess Park and Ally Schultz at PAM, we created some test footage to help train other museum staff in the evaluation procedures. In the clip below, Jess and Ally are looking at and discussing some sculpture pieces; both were wearing Looxcies to get a sense of how they feel to the user. This particular clip is from Ally’s perspective, and you’ll notice even Shawn and I have a go at butting in and talking about art with them!

What’s exciting about working with the Looxcies, and with video observations in general, is how much detail you can capture about the visitor experience – down to what visitors are specifically looking at, how long they look at it, and even whether they nod in agreement with the person they are conversing with. Multimodal discourse, eat your heart out!