We recently held Professional Development Days for the Division of Outreach and Engagement. This is an annual opportunity for faculty and staff in the Division to build capacity in a variety of topics. The question this training posed was evaluative:
How do we provide meaningful feedback?
Evaluating a conference or a multi-day, multi-session training is no easy task. Gathering meaningful data is a challenge. What can you do? Before you hold the conference (I’m using the word conference to mean any multi-day, multi-session training), decide on the following:
- Are you going to evaluate the conference?
- What is the focus of the evaluation?
- How are you going to use the results?
The answer to the first question is easy: YES. If the conference is an annual (or otherwise regular) event, you will want participants' feedback on their experience, so, yes, you will evaluate the conference. Look at Penn State Tip Sheet 16 for some suggestions. (If this is a one-time event, you may decide not to; though as an evaluator, I wouldn't recommend skipping evaluation.)
The second question is more critical. I’ve mentioned in previous blogs the need to prioritize your evaluation. Evaluating a conference can be all-consuming and result in useless data UNLESS the evaluation is FOCUSED. Sit down with the planners and ask them what they expect to happen as a result of the conference. Ask them if there is one particular aspect of the conference that is new this year. Ask them if feedback in previous years has given them any ideas about what is important to evaluate this year.
This year, the planners wanted to provide specific feedback to the instructors, who had asked for it in previous years. That is problematic unless evaluative activities for the individual sessions are planned before the conference. Nancy Ellen Kiernan, a colleague at Penn State, suggests a qualitative approach called a Listening Post, which elicits feedback from participants during the conference itself. The method relies on volunteers who attended the sessions and may take more people than a survey. To use a Listening Post, you must plan ahead of time to gather these data; otherwise, you will need to survey participants after the conference is over, which raises other problems.
The third question is also very important. If the results are simply handed to the supervisor, the likelihood of individuals using them to improve their sessions, or of organizers using them for overall change, is slim. Making the data usable for instructors means summarizing the data in a meaningful way, often visually. There are several ways to present survey data visually, including graphs, tables, and charts. More on that another time. Words often get lost, especially when they dominate the report.
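For instance, if the post-conference survey asked participants to rate each session, a quick visual summary can show instructors the pattern at a glance. The snippet below is only a minimal sketch: the session names and ratings are hypothetical, and matplotlib is just one of many charting options.

```python
# Minimal sketch: summarize hypothetical session ratings as a bar chart.
# Session names and scores are invented for illustration only.
import matplotlib.pyplot as plt

sessions = ["Grant Writing", "Program Design", "Data Visualization", "Facilitation"]
mean_ratings = [4.2, 3.8, 4.6, 3.5]  # average rating on a 1-5 scale

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(sessions, mean_ratings, color="steelblue")
ax.set_xlim(1, 5)
ax.set_xlabel("Mean participant rating (1 = poor, 5 = excellent)")
ax.set_title("Session feedback at a glance")
plt.tight_layout()
plt.savefig("session_feedback.png", dpi=150)  # hand the chart to each instructor
```

A one-page chart like this, given to each instructor along with a few representative comments, is far more likely to be read and acted on than a long narrative report.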
There is a lot of information in the training and development literature that might also be helpful. Kirkpatrick has done a lot of work in this area; I’ve mentioned that work in previous blogs.
There is no one best way to gather feedback from conference participants. My advice: KISS (keep it simple and straightforward).