Scriven says that the evaluative use of “bias” means the “…same as ‘prejudice’,” with its antonyms being objectivity, fairness, and impartiality. Bias causes systematic errors to which humans are prone, often stemming from the tendency to prejudge issues because of previous experience or perspective.
Why is bias a problem for evaluators and evaluations?
- It leads to invalidity.
- It results in lack of reliability.
- It reduces credibility.
- It leads to spurious outcomes.
What types of bias are there that can affect evaluations?
- Shared bias
- Design bias
- Selectivity bias
- Item bias
Shared bias: Agreement among experts may be due to a common error; it is often seen as a conflict of interest and can also arise from individual relationships. For example, suppose an external expert is asked to provide content validation for a nutrition education program that was developed by the evaluator’s sister-in-law. The likelihood that the two share the same opinion of the program is high.
Design bias: Designing an evaluation to favor (or disfavor) a certain target group in order to support the program being evaluated. For example, selecting a sample of students enrolled in school on a day when absenteeism is high will produce a design bias against students from lower socio-economic groups, because absenteeism is usually higher among those groups.
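The sampling effect described above can be illustrated with a small simulation. All of the numbers here (the share of lower socio-economic students and the absence rates) are invented for illustration only:

```python
import random

random.seed(1)

# Hypothetical school population: 40% of students are from lower
# socio-economic (SES) groups, who are assumed (for illustration only)
# to be absent more often on a high-absenteeism day.
students = []
for _ in range(10_000):
    lower_ses = random.random() < 0.40
    absent_prob = 0.30 if lower_ses else 0.10   # assumed absence rates
    present = random.random() >= absent_prob
    students.append((lower_ses, present))

# Sampling only the students present that day under-represents
# lower-SES students relative to the full population.
present_sample = [s for s in students if s[1]]
share_lower_all = sum(s[0] for s in students) / len(students)
share_lower_sampled = sum(s[0] for s in present_sample) / len(present_sample)

print(f"Lower-SES share in population: {share_lower_all:.2%}")
print(f"Lower-SES share in sample:     {share_lower_sampled:.2%}")
```

Under these assumed rates, the present-that-day sample systematically under-counts lower-SES students, which is exactly the skew the definition warns about.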
Selectivity bias: When the composition of a sample is inadvertently connected to the desired outcomes, the evaluation results will be skewed. It is similar to the design bias described above, except that the skew is unintentional.
Item bias: The construction of an individual item on an evaluation scale which adversely affects some subset of the target audience. For example, a Southwest Native American child who has never seen a brook is presented with an item asking the child to identify a synonym for “brook” from a list of words; the item measures the child’s exposure to the word rather than the ability the scale is intended to assess. Such an item raises questions about the objectivity of the scale as a whole.
Other types of bias that evaluators will encounter include desired response bias and response shift bias.
Desired response bias occurs when the participant provides an answer that s/he thinks the evaluator wants to hear; the responses the evaluator solicits are slanted toward what the participant thinks the evaluator wants to know. It is often found with general positive bias, that is, the tendency to report positive findings when the program doesn’t merit them. General positive bias is often seen in grade inflation: an average student is awarded a B when the student has actually earned a C.
Response shift bias occurs when the participant’s frame of reference changes between the beginning and the end of the program, so that the experience reported at the end is rated lower than it would have been at the beginning. This results in an underestimate of program effectiveness.
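The mechanics of response shift can be seen in a small numerical sketch. The ratings below are entirely hypothetical: participants rate themselves highly before the program, and afterwards, with a broadened frame of reference, they re-rate their starting skill lower. A retrospective (“then”) pretest, collected at the end of the program, is one commonly used way to reduce this bias:

```python
# Hypothetical self-ratings on a 1-10 scale for five participants.
pre_ratings = [7, 7, 8, 6, 7]    # taken before the program
post_ratings = [8, 8, 8, 7, 8]   # taken after the program
retro_pre = [4, 5, 5, 3, 4]      # "then" ratings: participants re-rate their
                                 # starting skill once they know what
                                 # competence actually looks like

def mean(xs):
    return sum(xs) / len(xs)

# The naive pre/post comparison understates the gain because the two
# ratings were made against different frames of reference.
naive_gain = mean(post_ratings) - mean(pre_ratings)
adjusted_gain = mean(post_ratings) - mean(retro_pre)

print(f"Naive pre/post gain:    {naive_gain:.2f}")
print(f"Retrospective-pre gain: {adjusted_gain:.2f}")
```

With these invented numbers the naive comparison shows a much smaller gain than the retrospective-pretest comparison, which is how response shift bias makes a program look less effective than participants actually experienced it to be.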