OFFICIAL SESSION NAME: (3:30pm)
“Reporting Real-Time Engagement and Learning Data Using Sensor Suites” – Robert Christopherson, Arizona State University / Sempre Learning
SUMMARY:
Thrilling overview of current state of biometrics/psychophysiology.
ECAMPUS TAKEAWAY:
Gathering massive amounts of data and processing it to divine trends (and deeper truths than you can get from simple surveys) is the future of evaluation. I’m deeply passionate about incorporating biometrics into the evaluation of our learning exercises.
RAW NOTES:
Robert is not a programmer, more a high level organizer. Starts off by asking if we see fear as much in learning as in gaming [huzzawha? did i miss the point here?].
Talks about education usually being a cycle between frustration and flow/engagement.
Claims he has found that in education, engagement is learning (self-improvement), while in games, engagement is fun (losing track of time). He hopes to apply hooks from each side to the other. [I was reminded of this idea while reading a summary of a talk at MIGS about redefining the role of challenge in games.]
…Mentions verbalization as a measurable.
Q: What does he mean by verbalization and behavior as “externalities”?
A: Talk aloud sessions, or recording things they say by accident.
Talks about what they go through to decide what to develop…
[SIDE NOTE: I need to make an iPhone app, call it “Retention” maybe, based on that old memory assistant program mentioned in Wired a few years back (link?). It was designed to bring back items at points in time, testing your retention, and ultimately trying to find your sweet spot for being reminded about things. Maybe better to think of this as a Profiler? Personal Assistant?]
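[The side note above describes what is essentially spaced repetition: re-surface an item right around when you’d forget it. A minimal sketch of the scheduling idea in Python — the function name and the “double on success, reset on miss” rule are my own assumptions, not anything from the program in that Wired piece:]

```python
def next_review(interval_days: int, remembered: bool) -> int:
    """Hypothetical 'Retention' scheduling rule: widen the gap after each
    successful recall, shrink back to one day after a miss. Real spaced
    repetition systems tune this curve per item and per person."""
    return max(1, interval_days * 2) if remembered else 1

# Example: a fact recalled correctly three reviews in a row
interval = 1
for _ in range(3):
    interval = next_review(interval, remembered=True)
print(interval)  # 8 -> next review scheduled 8 days out
```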
EmBand24 shown, which is an EEG from EmSense. They will evaluate your software for you and offer a robust report.
Points out NeuroSky is different, focused more on concentration (more common in the single-sensor headbands he’s seen).
Lady in audience mentions using ADM (link?).
He uses Emotiv (the EPOC, I assume). Talks about data coming from brainwaves more than EEG location. Not sure about sensor data specifics. (He links to more info.)
He digs SenseWear for Galvanic Skin Response (GSR). (he links to more info)
(has accelerometers)
Wild Divine is $280 for hardware, $5k for the SDK. Skin conductivity and pulse. A guy from the company is here at the conference (this would be Corwin Bell with Wild Divine), and over lunch an hour earlier he noted that most GSR devices use velcro straps. But everyone has their own tension for straps, and this variation affects results. (…makes me wonder if this is part of why Valve saw such variance in GSR data, which Mike Ambinder mentioned in his GDC talk, Biofeedback in Gameplay: How Valve Measures Physiology to Enhance Gameplay Experience.)
Notes Tobii eye tracking, and the importance of pupil tracking specifically. (More info.) The same intense lady in the audience notes “Face Labs” has something competing (she must mean Seeing Machines’ product).
He claims you need a time frame to calibrate, like 15 minutes.
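[The point of a ~15-minute calibration window, as I understand it, is to establish a per-subject baseline so later readings are measured against *that person’s* resting levels. He didn’t describe his actual pipeline; this is just a minimal z-score sketch of the idea:]

```python
import statistics

def calibrate(baseline_samples):
    """Compute a subject's baseline mean/stdev from a calibration window
    (e.g. ~15 minutes of resting readings)."""
    return statistics.fmean(baseline_samples), statistics.stdev(baseline_samples)

def normalize(sample, mean, stdev):
    """Express a live reading as deviation from that subject's own baseline."""
    return (sample - mean) / stdev

# Hypothetical resting GSR readings collected during calibration
mean, stdev = calibrate([2.0, 2.2, 1.8, 2.1, 1.9])
print(round(normalize(2.5, mean, stdev), 2))  # 3.16 -> well above baseline
```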
Q:What is diff between biometrics, psychophysiology etc? Are standards emerging? How will they?
Q: How do you deal with the high percentage of Emotiv users who don’t see accurate measurements?
Q: How do they generate their results, if not statistical analysis? Peaks and valleys… gut feeling?
…
Talks about the problem of mapping eye-tracking data back to what was on screen. Classically, people pay grad students to review both. He would prefer it analyzed automatically.
They are experimenting with 3d heat maps within game, to see what people were following. Scientopolis is example game. In unity3d. They have a unity plugin you can plop in. Hoping to make their data available to game developers and researchers in next couple months.
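[Their Unity plugin isn’t out yet, but the core idea of a 3D heat map — binning gaze-hit positions into voxels and counting how often each gets looked at — can be sketched independently of any engine. All names here are my own; this is not their plugin’s API:]

```python
from collections import Counter

def heatmap_3d(gaze_hits, voxel_size=1.0):
    """Bin 3D gaze-hit positions (world coordinates) into voxels.
    Higher counts approximate where players' attention lingered."""
    counts = Counter()
    for x, y, z in gaze_hits:
        voxel = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        counts[voxel] += 1
    return counts

# Hypothetical gaze-ray hit points logged during a play session
hits = [(0.2, 1.1, 0.3), (0.4, 1.6, 0.1), (5.0, 0.0, 2.2)]
print(heatmap_3d(hits).most_common(1))  # [((0, 1, 0), 2)]
```

In-engine, the voxel counts would then be rendered as a translucent overlay so you can literally see what people were following.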
Talks about the way classic measures of Flow tend to disrupt it (by asking, observing).
Educational goals are all about measuring performance, retention, attitude towards learning, and staying on task.
Learning Sciences Research Lab – lsrl.lab.asu.edu
Questions:
Joy Martinez with the University of Central Florida was the lady in the audience. She talks about working with the “CRESST” program at another college (UCLA’s National Center for Research on Evaluation, Standards, and Student Testing), more focused on stressful environments. (How does GMU get away with their own CREST program? This confused my googling for some time.)
He talks about another group at his university compressing 500 data points per second, down to 3.
Rosalind Picard’s group also does GSR well.
(end)
I stayed and talked briefly. Got his card. I asked about terminology, and he said psychophysiology is broad, and usually doesn’t involve one thing (GSR?), while biometrics usually doesn’t involve EEG. (sigh).
I asked about just doing some crappy eye tracking with a simple webcam – you wouldn’t catch saccades at 30 or 60 FPS, but couldn’t it still be useful? He pointed out a recent Apple patent on gaze tracking to help with page turns (when you reach the end of a page in an eBook).
COME BACK LATER FOR:
… probably no need to check back on this one actually