After a day-long cascade of productivity, Laura completed her Ph.D. proposal this afternoon. I called her and Shawn aside for a photo (above) to celebrate the milestone. Way to go, Laura!

It’s time to do annual reporting for Oregon Sea Grant. This gives us an opportunity to hold up our impacts and say “we did this.” In other ways, it’s about as much fun as it sounds.

Speaking of fun, here's a really quick and easy science activity from Make: Projects. With a mason jar, some rubbing alcohol, a flashlight, a disc of dry ice and a towel, you can make a cloud chamber to observe cosmic rays in real time. You can also make a bigger, fancier one with a basketball case. Then you let the universe do its thing, for the most part. You can trust the universe. It's been facilitating free-choice learning activities for a while.


Marine Science Day was a huge hit.  Attendance far exceeded any event I’ve personally witnessed at HMSC.  Researchers and educators did a fantastic job of communicating what goes on at our strange and wonderful workplace.

A few highlights:

-Bill’s public sea turtle necropsy (with power tools!)

-A six-person life raft doubling as a bouncy castle in the Barry Fisher building

-Kids trying on a microphone-equipped full-face SCUBA mask at the Oregon Coast Aquarium's dive program kiosk

On a somewhat-unrelated note, if you missed last Monday’s xkcd graphic, you should probably check it out.


We were all happy to see that our April Fool’s video had received more than 5,000 hits on YouTube as of this morning.  Thanks for all the great comments, and thanks to PZ Myers for giving us a nod.  I was amazed by the relative scarcity of comments from those whom Dave Barry calls the “humor impaired.”  As of this posting, one insightful commenter has managed to identify the video as “totally fake,” perhaps aided by the title card at the end stating as much.  We would’ve gotten away with it too, if it weren’t for you meddling kids!

Shawn, Mark, Laura and I met this morning to discuss camera placement.  The Visitor Center D&D campaign map came into play, with pennies representing camera mounts.  Now that we’ve figured out the field of view and other pertinent characteristics for our cameras, it’s a matter of fine-tuning our coverage and figuring out which camera works best in which location.

Associating video and audio is another issue.  One approach would be to automatically associate the audio feed from each microphone directly with the camera(s) that cover(s) the same cell(s) in our Visitor Center grid.  Another would be to present each audio and video feed separately, allowing researchers to easily review any audio feed in conjunction with any video feed.  What qualifies as “intuitive” can be highly variable.
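As a rough illustration, the first approach amounts to a lookup table from grid cells to feeds. Here's a minimal sketch of how that mapping might work; the camera names, microphone names, and cell labels below are invented for the example:

```python
# Hypothetical sketch: associate each microphone with the camera(s) that
# cover the same cell of the Visitor Center grid. All names and cell
# labels are invented for illustration.

# Each camera covers one or more grid cells.
camera_coverage = {
    "cam_wave_tank": {"A1", "A2"},
    "cam_touch_pool": {"B3"},
    "cam_lobby": {"A2", "B2", "B3"},
}

# Each microphone sits in a single grid cell.
mic_locations = {
    "mic_wave_tank": "A1",
    "mic_touch_pool": "B3",
}

def mics_for_camera(camera):
    """Return microphones whose cell falls within a camera's coverage."""
    cells = camera_coverage[camera]
    return sorted(m for m, cell in mic_locations.items() if cell in cells)

def cameras_for_mic(mic):
    """Return cameras whose coverage includes a microphone's cell."""
    cell = mic_locations[mic]
    return sorted(c for c, cells in camera_coverage.items() if cell in cells)
```

Under a scheme like this, a researcher pulling up one camera's footage would automatically get the audio from every microphone in its field of view; the second approach would skip the mapping entirely and let researchers pair any audio feed with any video feed by hand.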

Hypothetically, my initial response to a fresh mound of audio/video data would be to visually scan audio tracks for activity, then flip through the videos to see what was going on at those times.  In any case, our software should be versatile enough to accommodate a range of approaches.


Alan Alda and the Center for Communicating Science have a challenge for scientists: explain a flame to an 11-year-old.  Brilliant.  You can read more about this (and submit your entry) here.

“As a curious 11-year-old, Alan Alda asked his teacher, “What is a flame?” She replied: “It’s oxidation.” Alda went on to win fame as an actor and writer, became an advocate for clear communication of science, and helped found the Center for Communicating Science at Stony Brook University. He never stopped being curious, and he never forgot how disappointing that non-answer answer was.”

Alda’s guest editorial for Science, wherein he issued his challenge, is also well worth reading.  This can also be found at the Flame Challenge site.

Do it for yourself.  Do it for the kids.  Do it for Hawkeye.


Michelle will be posting this week from the Exploratorium.  She's currently working with NOAA scientists and some of our iPad apps.  Stay tuned.

In the meantime, here's something to keep you occupied.  An AI called "Angelina," developed as part of Michael Cook's Ph.D. project at Imperial College, generates (almost) entire games procedurally.  From the New Scientist piece:

“Angelina can’t yet build an entire game by itself as Cook must add in the graphics and sound effects, but even so the games can easily match the quality of some Facebook or smartphone games, with little human input. ‘In theory there is nothing to stop an artist sitting down with Angelina, creating a game every 12 hours and feeding that into the Apple App Store,’ says Cook.”

The capacity of games to teach is a research interest of mine, and I think the most interesting thing about Angelina is its ability to run through its own creations to determine (presumably using human-defined parameters) how engaging they are.  It shows in the New Scientist-commissioned “Space Station Invaders” demo game, which is a retro platformer with some nice simple jumping challenges.  The player character’s immortality is a welcome inclusion, as the aggressive procedurally-generated enemy behaviors give new meaning to that classic gamer complaint: “The computer cheats.”


Harrison used an interesting choice of phrase in his last post: "time-tested." I was just thinking as I watched the video they produced, including Bill's dissection, that I don't know what we've done to rigorously evaluate our live programming at Hatfield. But it is just this sort of "time-tested" program that our research initiatives are trying to sort out and put to the test. Time has proven its popularity; data is necessary to prove its worth as a learning tool. A very quick survey of the research literature doesn't turn up much, though some science theater programming was the subject of older studies. Live tours are another related program that could be ripe for investigation.

We all know, as humans who recognize emotions in others, how much visitors enjoy these sorts of programs and science shows of all types. However, we don't always apply standards to our observations, such as measuring specific variables to answer specific questions. We have a general sense of "positive affect" in our visitors, but we don't have data, such as visitor quotes or interview responses, to back up our impressions. Yet.

A good example of another need for this came up in a recent dissertation defense here at OSU. Nancy Staus' research looked at learning from a live program: she interviewed visitors after they watched a program at a science center. She found, however, that the presenter had a strong influence on learning simply through presentation style: visitors recalled more topics, and more facts about each topic, when the presentation was interactive rather than scripted. She wasn't initially interested in differences of this sort, but because she had collected data on the presentations, she was able to identify a probable cause for a discrepancy she noticed. So while this wasn't the focus of her research (she was actually interested in the role of emotion in mediating learning), it points to the need for data not only to back up claims, but also to explain surprising results and open areas for further study.

That’s what we’re working for: that rigorously examining these and all sorts of other learning opportunities becomes an integral part of the “time-honored tradition.”