With Mark’s guidance over the phone, I spent a few hours today testing camera placement with a small Axis camera and its built-in microphone. One of my favorite security features of this camera is its built-in speaker, which can be used to make the camera shout “intruder,” whisper “pssst,” or bark like a dog.  None of these have any conceivable utility whatsoever for what we’re doing, but it’s always nice to know we have options.

So, I put it in the entryway.  I put it over and next to the octopus tank.  I put it over the front desk. I put it by the touch pool, which triggered a barrage of eyeball-seeking dust particles that had been guarding the overhead ethernet ports for untold eons.

Each vantage point we tested offered a decent view and adequate lighting.  The model I used won’t be installed in every position, but it provides a great baseline.  We also received a new Axis dome camera with a microphone, which we can use up close at individual exhibits.

To record a few audio tests, I routed the system output of one of our MacBooks into Audacity using Soundflower. Having recently spent several late nights playing with open-source audio software, I improvised this solution more easily than I had anticipated. I never expected that my private dubstep habit would prove to be a reservoir of generalizable workplace skills, but it goes to show that free-choice learning happens all the time.

Mark and I did some scale-model wave tank testing this afternoon.  An initial test presented some hurdles (waves splashing over the far end of the tank, waves rebounding and creating mid-tank chaos, etc.).  Mark introduced a novel scale-model component (a scouring pad at the end of the tank) to disperse the wave energy and prevent the waves from bouncing back.

With this humble addition, the model tank performed admirably, providing practical reassurance that the proposed measurements for the final design will demonstrate the relevant concepts without soaking the floors.

Any handle, button, lever, knob or switch in an exhibit space must be built to accommodate a range of perceivable affordances.  If pulling the lever triggers an interesting result, pulling it ever harder and faster might produce even more interesting results.

This can sometimes put wear and tear on exhibit components, but it’s part of what makes hands-on exhibits fun for learners (and learning researchers, too).


Michelle will be posting this week from the Exploratorium.  She’s currently working with NOAA scientists and some of our iPad apps.   Stay tuned.

In the meantime, here’s something to keep you occupied.  An AI called “Angelina,” developed as part of Michael Cook’s Ph.D. project at Imperial College, generates (almost) entire games procedurally.  From the New Scientist piece:

“Angelina can’t yet build an entire game by itself as Cook must add in the graphics and sound effects, but even so the games can easily match the quality of some Facebook or smartphone games, with little human input. ‘In theory there is nothing to stop an artist sitting down with Angelina, creating a game every 12 hours and feeding that into the Apple App Store,’ says Cook.”

The capacity of games to teach is a research interest of mine, and I think the most interesting thing about Angelina is its ability to play through its own creations to determine (presumably using human-defined parameters) how engaging they are.  This shows in the New Scientist-commissioned “Space Station Invaders” demo game, a retro platformer with some nice simple jumping challenges.  The player character’s immortality is a welcome inclusion, as the aggressive procedurally generated enemy behaviors give new meaning to that classic gamer complaint: “The computer cheats.”


One of the key techniques in museum and free-choice learning evaluation and research is observation of visitors by staff. Ideally, we observe visitors in a “natural” state to learn how they really behave; we call this observation unobtrusive. The reality is that we are rarely so discreet. How do you convince visitors that the uniformed staff member scribbling on a clipboard, standing close enough to eavesdrop, is not actually stalking them? You don’t, that is, until you turn to technological solutions, as our new lab will be doing.

We’ve spent a lot of hours dreaming up the way this new system is going to work and trying to keep the observations of speech, actions, and more esoteric things like interests and goals hidden from the visitor’s eye. Whether we succeed or not will be the subject of many new sets of evaluations with our three new exhibits and more. Laura’s Looxcie-based research will be one of these.

Over the years we’ve gathered lots of data about what people tend to do when they either truly don’t know they’re being watched or don’t care. We’ve also gathered a sense of how visitors react to the idea of participating in our studies, from flat-out asking us what we’re trying to figure out, to giving us the hairy eyeball and perhaps skipping the exhibit we’re working on. Many of these reactions become frustrations for staff, since we must throw out that subject or restart our randomization count. So as we go through the design process, I’m going to share some of my observations of myself gathering visitor data through observations and surveys. These are the two collection tools we hope to automate, to the benefit of both the visitors who feel uncomfortable under obvious scrutiny and the researchers who suffer the pangs of rejection.

Mark and Katie identified a useful model for data collection using the face-recognition system. That model is Dungeons & Dragons. Visitors come with goals in mind, often in groups, and they take on a variety of roles within those groups. D&D and similar role-playing games provide a ready set of rules and procedures for modeling this kind of situation.

In practice, this means the Visitor Center exhibit space will be divided into a grid, with the system recording data based on proximity, direction and attributes of agents (visitors, volunteers and staff) and the grid squares themselves.
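As a rough illustration, the grid-and-agents scheme might be modeled like this. Everything here is my own assumption for the sketch (the class names, the one-meter cell size, the coordinate origin), not the actual system design:

```python
"""Hypothetical sketch of the grid-based observation model: agents have
continuous floor positions that map onto discrete grid squares, and
interactions can be detected from proximity."""

from dataclasses import dataclass

CELL_SIZE_M = 1.0  # assumed grid resolution in meters


@dataclass
class Agent:
    agent_id: str
    role: str          # "visitor", "volunteer", or "staff"
    x: float = 0.0     # position in meters from a fixed origin
    y: float = 0.0

    @property
    def grid_square(self):
        """Map a continuous floor position to a discrete grid square."""
        return (int(self.x // CELL_SIZE_M), int(self.y // CELL_SIZE_M))


def proximity(a, b):
    """Euclidean distance between two agents, for interaction detection."""
    return ((a.x - b.x) ** 2 + (a.y - b.y) ** 2) ** 0.5


# Example: a visitor near an exhibit and a volunteer half a meter away.
visitor = Agent("v001", "visitor", x=3.4, y=7.2)
volunteer = Agent("s014", "volunteer", x=3.9, y=6.8)
print(visitor.grid_square)                       # (3, 7)
print(round(proximity(visitor, volunteer), 2))   # 0.64
```

The real system would presumably update positions from the face-recognition feed many times per second, but the same position-to-square mapping applies.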

For example, the cabinet of whale sculptures inside the front door would occupy a row of “artifact” squares on the grid. Visitor interactions would be recorded accordingly. Interactions with the exhibit would update each visitor’s individual profile to reflect engagement and potential learning. To use only-slightly-more D&D terms, spending time at the whale exhibit would add modifiers to, say, the visitor’s “Biology” and “Ocean Literacy” attributes. The same goes for volunteers and staff members, taking into account their known familiarity with certain material.
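To make the modifier idea concrete, here is a minimal sketch of dwell time at an exhibit square adding to a visitor’s profile attributes. The exhibit-to-attribute mapping and the rate constant are hypothetical placeholders, not the lab’s real scoring scheme:

```python
"""Illustrative sketch: time spent at an exhibit square adds modifiers
to the associated attributes in a visitor's profile."""

from collections import defaultdict

# Hypothetical mapping from exhibit squares to the attributes they touch.
EXHIBIT_ATTRIBUTES = {
    "whale_cabinet": ["Biology", "Ocean Literacy"],
}


def apply_dwell(profile, exhibit, dwell_seconds, rate=1.0):
    """Add a modifier proportional to dwell time for each attribute
    tied to the exhibit. `rate` is an assumed tuning constant."""
    for attr in EXHIBIT_ATTRIBUTES.get(exhibit, []):
        profile[attr] += rate * dwell_seconds / 60.0  # minutes of engagement
    return profile


profile = defaultdict(float)
apply_dwell(profile, "whale_cabinet", dwell_seconds=90)
print(dict(profile))  # {'Biology': 1.5, 'Ocean Literacy': 1.5}
```

Staff and volunteer profiles could start with higher baseline values in their areas of known expertise, which is where the character sheets come in.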

Mark and Katie have drafted what is essentially a dungeon map of the VC, complete with actual D&D miniatures. Staff members will even have character sheets, of a sort, to represent their knowledge in specific areas—this being a factor in their interactions with visitors.

In a visitor group scenario Mark walked me through today, the part of the touch pool volunteer was played by what I believe was a cleric of some sort. Mark has happily taken to the role of Dungeon Master.

This all forms a basic framework, but the amount and specificity of data we can collect and analyze in this way is staggering. We can also refer back to the original video to check for specific social interactions and other factors the face-recognition software might have missed.

In other news, Ursula will be released Friday morning. Here’s the word from Jordan:

“Wednesday afternoon the HMSC Husbandry team decided that now is the best time to release Ursula back into the wild. Our octopus from British Columbia is as feisty as ever and we feel that she is in good health. Because of this, we will be preparing Ursula for her journey back to the ocean Friday, December 23, 2011. We invite you to attend starting at 8:30 a.m. in the Visitor Center. We will then proceed to release her off Yaquina’s South Jetty about 9 a.m.”

The design process for our climate change gallery is now underway. In addition to presenting current science, we’re designing the gallery to address the values and cultural beliefs that inform the discourse on this topic. One of the main concepts we’ll be drawing on is the “Six Americas.”

We want the climate change gallery to be as participatory as possible, allowing visitors to provide feedback and personal reflections on the content. Most of our exhibits deal primarily or exclusively with knowledge. This gallery will focus on personal beliefs, and how these influence the ways people learn. It should be an interesting project.

In the spirit of Thanksgiving, here’s a little piece from the Science and Entertainment Exchange about the science of cooking a turkey. I’m thankful for it.

Have a happy Thanksgiving, everyone!