OFFICIAL SESSION NAME: (6pm)
“Mining Student Interactions for Evidence of Collaborative Problem Solving.”
Ellen Strain-Seymour, Bob Dolan. Pearson Assessments
SUMMARY:
A delightful dissection of some serious statistics gathering, followed by fun audience participation in a communication experiment.
ECAMPUS TAKEAWAY:
There were several ponderous examples of how to gather feedback statistics. There was a fun example of distance-learning engagement in action. It is also interesting to ponder the specific terms and concepts they thought were worth tracking during the student experience.
RAW NOTES:
…Mentions teachers bringing Gamestar Mechanic into the classroom. And kids at home using PBS Kids. [I sense a recurring trend: educational learning working out just fine – OUTSIDE of the classroom.]
…
“Need to expand psychometric methods for mining multiple-choice questions” [OK, now I’m paying attention. It’s like psychophysiology and biometrics had a baby?]
Shows example of capturing “the collaboration” (moment) for analysis… They tried to avoid giving “avid gamers” any sort of advantage during the experience… The experience they created/studied involves two people, working remotely, talking through some puzzle solving. They did some pilot studies with students separated across the country. They recorded both the students’ utterances and their click streams (via video capture, processed later by grunts).
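[To make that capture concrete for myself: a merged log of utterances and clicks might look roughly like the Python sketch below. This is purely my own guess at a format; the field names and sample events are hypothetical, not anything they showed.]

    # Hypothetical reconstruction of a merged capture log: one timestamped
    # row per event, whether it came from the chat transcript or the click
    # stream. Field names and sample events are my invention, not Pearson's.
    from dataclasses import dataclass

    @dataclass
    class CaptureEvent:
        t_sec: float    # seconds since session start
        student: str    # which of the two remote students ("A" or "B")
        channel: str    # "utterance" (typed chat) or "click" (UI action)
        payload: str    # the chat text, or an identifier for what was clicked

    log = [
        CaptureEvent(12.4, "A", "utterance", "i see a safe covered in symbols"),
        CaptureEvent(15.1, "B", "utterance", "no safe here, i'm on a boat"),
        CaptureEvent(21.8, "A", "click", "safe_dial"),
    ]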
The first students/subjects took a long time before they realized they weren’t actually seeing the same thing. Then they found that one had “descriptions of actions” and the other had “codes for triggering actions.” Soon they also found they were looking at wildly different screens (one is on a boat, while the other is underground, next to a safe). For analysis they are looking for “markers of collaboration.” [Is this a standard term? Seems like a lot of assessment houses are just making up their own crazy terms… yeah? no?]
“Construct-irrelevant variance” is the term he used to describe his sister “not knowing basic game conventions.”
PISA (Programme for International Student Assessment) is an international test, which is looking to extend itself to cover collaboration. (Someone mentions that UCLA has the CRESST framework.)
…
They lay out their framework of vague words of importance… before settling on a coding protocol. Things like: type of behavior, time it occurred, description, the student who produced it, and evidentiary alignment (based on the CRESST framework).
They also kept track of (with separate codes for each): dialogue, information sharing, and directions given (because they were studying collaboration).
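[Writing their coding protocol down as a record type helps me see what gets captured per event. The fields follow the list above; the names and enum values are my paraphrase, not their actual schema.]

    # A guess at how one coded event might be represented. The five fields
    # mirror their stated protocol; the behavior codes are the three they
    # mentioned tracking separately. Names are mine, not theirs.
    from dataclasses import dataclass
    from enum import Enum

    class BehaviorCode(Enum):
        DIALOGUE = "dialogue"                 # general back-and-forth
        INFORMATION_SHARING = "info_sharing"  # describing what you see
        DIRECTION_GIVING = "directions"       # telling the partner what to do

    @dataclass
    class CodedEvent:
        behavior: BehaviorCode  # type of behavior
        t_sec: float            # time the behavior occurred
        description: str        # coder's free-text description
        student: str            # student who produced the behavior
        alignment: str          # evidentiary alignment (per the CRESST framework)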
…Lots of discussion of details ensues… Students (middle school) usually didn’t introduce themselves. … Some dude suggests using a palette of word buttons to communicate, which would help with data parsing, but they mention a design urge to make the experience feel like txting (because they’re studying modern kids’ communication). … They considered using Skype, but realized the kids would likely cheat (share their screens). Noted that all of the symbols were chosen to be sure they could be easily described in words.
Going through the results metrics… they point out tons of communication taking place, both as a proportion of all events and as a proportion of collaborative events.
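[Those two proportions are easy to pin down with toy numbers. The counts below are made up, and I'm assuming communication events are a subset of collaborative events, which the talk didn't spell out; only the arithmetic is the point.]

    # Toy version of their two metrics: communication as a share of all
    # coded events, and as a share of collaborative events. Counts invented.
    n_all_events = 200       # every coded event in a session
    n_collaborative = 120    # events coded as collaborative
    n_communication = 90     # communication events (assumed subset of the above)

    share_of_all = n_communication / n_all_events        # 0.45
    share_of_collab = n_communication / n_collaborative  # 0.75
    print(f"{share_of_all:.0%} of all events, {share_of_collab:.0%} of collaborative events")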
Looking at this from the PISA perspective, they point out: exploring and understanding, representing and formulating, planning and executing, monitoring and reflecting. [I’m just noting these as interesting things to keep track of.]
…
He admits a huge granularity problem here, because they don’t consider the physical events and the specifics of timing. (When she says something and he clicks, his click could be considered a form of collaboration – but their system doesn’t code for that.)
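[A crude patch for that gap occurs to me: pair each click with any utterance from the OTHER student that landed just before it, and flag the pair as a candidate collaborative act. The five-second window and the pairing rule below are entirely my invention, just to illustrate.]

    # Hypothetical pass at the granularity problem: a click counts as
    # collaborative when the partner said something within the preceding
    # few seconds. Events are (t_sec, student, channel) tuples here.
    WINDOW_SEC = 5.0

    events = [
        (12.4, "A", "utterance"),
        (15.1, "B", "utterance"),
        (17.0, "A", "click"),  # follows B's utterance within the window
    ]

    def collaborative_clicks(events, window=WINDOW_SEC):
        clicks = [e for e in events if e[2] == "click"]
        utterances = [e for e in events if e[2] == "utterance"]
        # Flag a click if the OTHER student spoke shortly before it.
        return [c for c in clicks
                if any(u[1] != c[1] and 0 <= c[0] - u[0] <= window
                       for u in utterances)]

    print(collaborative_clicks(events))  # -> [(17.0, 'A', 'click')]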
…
A dude in the audience mentions psychological factors (like stress management, emotional IQ, etc.) that he felt should be tracked.
Now they look forward to developing new frameworks to move this forward. They’re working with some old guy in the audience, whose name is “Art” (who?)…
COME BACK LATER FOR:
Interesting terms here, and interesting choices for their coding focus (what things they specifically thought were good to suss out).