Sure, you want to know the outcomes resulting from your program. Sure, you want to know if your program is effective. Perhaps you will even attempt to answer the question, “So what?” when your program is effective on some previously identified outcome. All of that is important.
My topic today is something that is often overlooked when developing an evaluation–the participant and program characteristics.
Knowing these characteristics may seem unimportant at the outset of implementation. By the end, though, questions will arise–How many females? How many Asians? How many over 60?
Demographic questions are typically asked as part of the data collection.
Those questions often include the following categories:
- Marital status
- Household income
- Educational level
Some of those may not be relevant to your program, and you may want to include other general characteristic questions instead. For example, in a long-term evaluation of a forestry program where the target audience was individuals with woodlots, asking how many acres participants owned was important, while marital status did not seem relevant.
Some questions may seem intrusive–household income or age, for example. For all demographic questions, giving the participant the option not to respond is appropriate. When these data are reported, report the number of participants who chose not to respond.
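As a minimal sketch of this reporting practice, the tally below counts every answer category for one demographic question, treating “Prefer not to respond” as a category in its own right rather than dropping it. The question wording and response values are hypothetical, not from any particular survey instrument.

```python
from collections import Counter

# Hypothetical responses to one demographic question (household income).
# "Prefer not to respond" is offered as an explicit option.
responses = [
    "Under $50,000", "Over $50,000", "Prefer not to respond",
    "Under $50,000", "Prefer not to respond", "Over $50,000",
]

counts = Counter(responses)

# Report every category, including the number who chose not to respond.
for category, n in counts.items():
    print(f"{category}: {n}")
```

Keeping the non-response count in the report lets readers judge how complete the demographic picture really is.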
When characterizing your program, it is sometimes important to know characteristics of the geographic area where the program is being implemented–rural, suburban, or urban? This is especially true when the program is a multisite program. Locale introduces an unanticipated variable that is often not recognized or remembered.
Any variation in the implementation should also be documented–the number of contact hours, for example, or the number of training modules. The type of intervention is important as well–was the program delivered as a group intervention or individually? The time of year that the program is implemented may also be important to document, because it may inadvertently introduce a history bias into the study–what is happening in September is different from what is happening in December.
Documenting these characteristics, and then defining them when reporting the findings, helps readers understand the circumstances surrounding the program implementation. If the target audience is large, these characteristics can also provide comparison groups–did males do something differently than females? Did participants over 50 do something differently than participants 49 or under?
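The age comparison above can be sketched in a few lines: split participants into the two age groups and compare a summary of some outcome measure. The participant records and outcome scores here are made up for illustration; they stand in for whatever outcome your evaluation actually measures.

```python
# Hypothetical participant records: (age, outcome score).
participants = [
    (34, 72), (55, 81), (61, 78), (45, 69), (50, 75), (28, 70),
]

# Split into the two comparison groups: 50 and over vs. 49 and under.
over_50 = [score for age, score in participants if age >= 50]
under_50 = [score for age, score in participants if age < 50]

def mean(values):
    """Average of a list of scores; NaN if the group is empty."""
    return sum(values) / len(values) if values else float("nan")

print(f"50 and over (n={len(over_50)}): mean outcome {mean(over_50):.1f}")
print(f"49 and under (n={len(under_50)}): mean outcome {mean(under_50):.1f}")
```

Reporting the group sizes alongside the summary matters: a difference between groups means little if one group holds only a handful of participants.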
Keep in mind when collecting participant and program characteristic data that these data help you, and the audience to whom you disseminate the findings, understand your outcomes and the effect of your program.