We spent this morning doing renovations on the NOAA tank. We deep cleaned, rearranged rocks and inserted a crab pot to prepare for the introduction of some tagged Dungeness crabs. NOAA used to be a deep-water display tank with sablefish and other offshore benthic and epibenthic species, but it has lost some of its thematic cohesion recently. Live animal exhibits bring unique interpretive complications.

All in-tank elements must meet the needs and observable preferences of the animals. This is an area where we cannot compromise, so preparations can take more time and effort than one might expect. For example, our display crab pot had to be sealed to prevent corrosion of the chicken wire. This would not be an issue in the open ocean, but we have to consider the potential effects of the metal on the invertebrates in our system.

Likewise, animals that may share an ecosystem in the ocean might seem like natural tankmates, but often they are not. One species may prey on the other, or the size and design of the tank may bring the animals into conflict. For example, we have a kelp greenling in our Bird’s Eye tank who “owns” the lower 36 inches of the tank. If the tank were not deep enough, she would not be able to comfortably coexist with other fish.

We’re returning the NOAA tank to a deep-water theme based on species and some simple design elements. An illusion of depth can be accomplished by hiding the water’s surface and using minimal lighting. The Japanese spider crab exhibit next door at Oregon Coast Aquarium also makes good use of these principles. When this is done right, visitors can get an intuitive sense of the animals’ natural depth range—regardless of the actual depth of the tank—before they even read the interpretive text.

We’re also enlisting a new resident to help us clean up. The resident in question is a Velcro star (Stylasterias sp.) that was donated a couple of months back. It is only about eight inches across, but the species can grow quite large. Velcro stars are extremely aggressive and will even attack snails and the fearsome sunflower stars (Pycnopodia helianthoides) that visitors know from our octopus tank. Our Velcro star will, we hope, cull the population of tiny marine snails that have taken over the NOAA tank’s front window in recent months.

Colleen has been very proactive in taking on major exhibit projects like this, and she has recruited a small army of husbandry volunteers—to whom I’ll refer hereafter as Newberg’s Fusiliers—to see them through. Big things are happening on all fronts, and with uncommon speed.

Here’s a roundup of some of our technology testing and progress lately.

First, reflections from our partners Dr. Jim Kisiel and Tamara Galvan at California State University, Long Beach. Tamara recently tested the iPad with QuestionPro/SurveyPocket, Looxcie cameras, and a few other apps to conduct surveys at the Long Beach Aquarium, which doesn’t have wifi in the exhibit areas. Here is Jim’s report on their usefulness:

“[We] found the iPad to be very useful.  Tamara used it as a way to track, simply drawing on a pdf and indicating times and patterns, using the app Notability.  We simply imported a pdf of the floorplan, and then duplicated it each time for each track.  Noting much more than times, however, might prove difficult, due to the precision of a stylus.  One thing that would make this even better would be having a clock right on the screen.  Notability does allow for recording, and a timer that goes into play when the recording is started.  This actually might be a nice complement, as it does allow for data collector notes during the session. Tamara was unable to use this feature, though, due to the fact that the iPad could only run one recording device at a time–and she had the looxcie hooked up during all of this. 

Regarding the looxcie.  Tamara had mixed results with this.  While it was handy to record remotely, she found that there were many signal drop-outs where the mic lost contact with the iPad.  We aren’t sure whether this was a limitation of the bluetooth and distance, or whether there was just too much interference in the exhibit halls.  While looxcie would have been ideal for turning on/off the device, the tendency to drop communication between devices sometimes made it difficult to activate the looxcie to turn on.  As such, she often just turned on the looxcie at the start of the encounter.  It is also worth noting that Tamara used the looxcie as an audio device only, and sound quality was fine.
 
Tamara had mixed experiences with Survey Pocket.  Aside from some of the formatting limitations, we weren’t sure how effective it was for open-ended questions.  I was hoping that there was a program that would allow for an audio recording of such responses.  She did manage to create a list of key words that she checked off during the open-ended questions, in addition to jotting down what the interviewee said.  This seemed to work OK.  She also had some issues syncing her data–at one point, it looked like much of her data had been lost, due in part to … [problems transferring] her data from the iPad/cloud back to her computer.  However, staff was helpful and eventually recovered the data.
 
Other things:  The iPad holder (Handstand) was very handy and people seemed OK with using it to complete a few demographic questions. Having the tracking info on the pad made it easier to juggle papers, although she still needed to bring her IRB consent forms with her for distribution. In the future, I think we’ll look to incorporate the IRB into the survey in some way.”
Interestingly, I just discovered that a new version of SurveyPocket *does* allow audio input for open-ended questions. However, OSU has recently purchased university-wide licenses from a different survey company, Qualtrics, which does not yet offer an offline app mode for tablet-based data collection. It seems to be in development, though, so we may change our minds about which company we go with when the QuestionPro/SurveyPocket license comes up for renewal next year. It’s amazing that the research I did on these apps just last year is already almost out of date.
Along the same lines of software updates messing up well-laid plans, we’re purchasing a couple of laptops to do more data analysis away from the video camera system’s desktop computer and away from the eyetracker. That suddenly confronted us with the Windows 8 vs. Windows 7 dilemma: the software for both the cameras and the eyetracker is Windows 7-based, and now that Windows 8 is out, the university had to decide whether or not to upgrade. Luckily for us, we’re skipping Windows 8 for the moment, so the new laptops will run Windows 7 and we can actually use the software on them; the camera and eyetracker programs likely won’t be Windows 8-ready until sometime in the new year.
Lastly, we’re still bulking up our capacity for data storage and sharing, as well as the internet bandwidth needed for video data collection. I recently put in another new server dedicated to handling data sharing, with the two older servers as slaves and the cameras spread out between them. In addition, we installed a NAS storage system with five 3TB hard drives. Mark assures me we’re getting to the point of having this “initial installation” of equipment finalized …

As the lab considers how to encourage STEM reflection around the tsunami tank, this recent post from Nina Simon at Museum 2.0 reminds us what a difference the choice of a single word can make in visitor reflection:

“While the lists look the same on the surface (and bear in mind that the one on the left has been on display for 3 weeks longer than the one on the right), the content is subtly different. Both these lists are interesting, but the “we” list invites spectators into the experience a bit more than the “I” list.”

So as we go forward, not only the physical booth setup (i.e., private or open to spectators) but also the specific wording can influence whether and how visitors choose to focus on the task we’re trying to investigate, and how broad or specific/personal their reflections might be. Hopefully we’ll be able to test several supposedly equivalent prompts, as Simon suggests in an earlier post, as well as do more “traditional” iterative prototyping.

And I don’t just mean Thanksgiving! Lately, I’ve run across an exhibit, a discussion, and now an article on things wearing down and breaking, so I figured that meant it was time for a blog post.

It started with my visit to the Exploratorium, whose staff find that stuff breaks, sometimes unexpectedly. Master tinkerers and builders that they are, they turned this into an exhibit of worn, bent, or flat-out broken parts of their exhibits. It may take hundreds or even hundreds of thousands of uses for a part to fail, but when your visitorship is near a million per year, it doesn’t take many days for micro-changes to suddenly become visible as macro-changes.


Then Laura suggested that we keep track of all the equipment we’ve been buying in case of, you guessed it, breakage (or other loss). So we’ve started an inventory that will not only serve as a nice record of all the bits and bobs we’ve had to buy for the project (so far, over 300 feet of speaker wire for just 10 cameras), but also help us replace things more easily should something go wrong. Which we know it will, eventually; and if we keep our records well, we’ll have a sense of how quickly. In our water-laden environment of touch pools and wave tanks, that will very likely be sooner than we hope.

Finally, John Baek’s Open and Online Lifelong Learning newspaper linked to this story from Wired magazine about the people who are deliberately trying to break things, to make the unexpected expected.

So, have a great Thanksgiving break (in the U.S.), and try not to break anything in the process.

A new partnership with the Philomath (pronounced fill-OH-muth for you out-of-town readers) High Robotics Engineering Division (PHRED) has helped the HMSC Free-Choice Learning Lab overcome a major information design hurdle. An ongoing challenge for our observation system is recording the usage of small, non-electronic, moveable exhibit components – think bones, shells, levers, and spinning wheels.

PHRED mentors Tom Health and Tom Thompson will work with students to develop tiny wireless microprocessor sensors that can be attached to any physically moving exhibit component and report its use to our database. The team will be using the popular Arduino platform that has become the technological heart of the Maker movement.
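
To give a flavor of how simple such a sensor can be, here’s a minimal Arduino-style sketch. The setup is a hypothetical one of my own, not the PHRED team’s actual design: a magnetic reed switch on digital pin 2 that closes whenever the component moves, with the usage count reported over the serial port as a stand-in for whatever wireless link and database logging the students ultimately build.

```cpp
// Hypothetical sketch: count interactions with a moveable exhibit component.
// Assumes a reed switch (or microswitch) wired between digital pin 2 and ground,
// using the board's internal pull-up resistor.

const int SENSOR_PIN = 2;               // switch input pin
const unsigned long DEBOUNCE_MS = 200;  // ignore contact bounce and rapid jiggling

int lastState = HIGH;                   // HIGH = switch open with INPUT_PULLUP
unsigned long lastChange = 0;
unsigned long useCount = 0;

void setup() {
  pinMode(SENSOR_PIN, INPUT_PULLUP);    // closed switch pulls the pin LOW
  Serial.begin(9600);
}

void loop() {
  int state = digitalRead(SENSOR_PIN);
  unsigned long now = millis();

  // Register one "use" on each debounced open-to-closed transition.
  if (state != lastState && (now - lastChange) > DEBOUNCE_MS) {
    lastChange = now;
    lastState = state;
    if (state == LOW) {
      useCount++;
      Serial.print("exhibit_component_use,");
      Serial.println(useCount);         // a logger at the other end would write this to the database
    }
  }
}
```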

This is a great partnership – the PHRED team has all the skills, enthusiasm, and creativity to tackle the project and build successful tools – not to mention the recognition that comes from working on an NSF-funded project. Oregon Sea Grant gains more experience integrating after-school science clubs into funded research projects, while meeting the ever-challenging objective of engaging underserved communities.
Thanks to Mark for this update. 

Despite our fancy technology, there are some pieces of data we have to gather the old-fashioned way: by asking visitors. One thing we’d like to know is why visitors chose to visit on this particular occasion. We’re building off of John Falk’s museum visitor motivation and identity work, which began with a survey that asks visitors to rate a series of statements on Likert (1-5) scales according to how applicable each is for them that day, and which reveals a rather small set of motives driving the majority of visits. We have also used this framework in a study of three of our local informal science education venues, finding that an abbreviated version works equally well for determining which (if any) of these motivations drives a visit. The latest version, tried at the Indianapolis Museum of Art, uses photos along with the abbreviated set of statements for visitors to identify their visit motivations.
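
For concreteness, here is a rough sketch of how an abbreviated instrument like this might be scored. The statement-to-category mapping, the example ratings, and the “highest mean rating wins” rule are my own illustrative assumptions, not the actual survey items or Falk’s published scoring procedure.

```cpp
// Hypothetical scoring of an abbreviated motivation survey: each statement maps
// to one of Falk's identity categories, visitors rate it 1-5, and the category
// with the highest mean rating is taken as the dominant motivation for the visit.
#include <iostream>
#include <map>
#include <string>
#include <utility>
#include <vector>

int main() {
  // Each pair: (motivation category for a statement, visitor's 1-5 rating of it).
  std::vector<std::pair<std::string, int>> responses = {
      {"Explorer", 5}, {"Explorer", 4},
      {"Facilitator", 3}, {"Facilitator", 2},
      {"Experience Seeker", 2}, {"Professional/Hobbyist", 1},
      {"Recharger", 3},
  };

  std::map<std::string, std::pair<int, int>> totals;  // category -> (sum, count)
  for (const auto& r : responses) {
    totals[r.first].first += r.second;
    totals[r.first].second += 1;
  }

  std::string dominant;
  double bestMean = 0.0;
  for (const auto& t : totals) {
    double mean = static_cast<double>(t.second.first) / t.second.second;
    std::cout << t.first << ": mean rating " << mean << "\n";
    if (mean > bestMean) { bestMean = mean; dominant = t.first; }
  }
  std::cout << "Dominant motivation for this visit: " << dominant << "\n";
  return 0;
}
```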

We’re implementing a version on an iPad kiosk in the VC for a couple of reasons. First, we genuinely want to know why folks are visiting, and we want to be able to correlate identity motivations with the automated behavior, timing, and tracking data we collect from the cameras. Second, we hope people will stop long enough for us to get a good reference photo for the facial recognition system. Sneaky, perhaps, but it’s not the only place we’re trying to position cameras for good reference shots. And if all goes well with our signage, visitors will be more aware than ever that we’re doing research, and that it is ultimately aimed at improving their experience. Hopefully that awareness will allay whatever fears remain about the embedded research tools, which we hope will be minimal to start with.