“How far you go in life depends on your being tender with the young, compassionate with the aged, sympathetic with the striving, and tolerant of the weak and the strong — because someday you will have been all of these.”

— George Washington Carver


There is power in this comment by Carver. Are you wondering what this has to do with evaluation? Considering diversity when one conducts an evaluation is critical. The AEA has built that into its foundation: its Guiding Principles include "Respect for People." It is clearly defined in the AEA by-laws. It is addressed in AEA's Statement on Cultural Competence. One of the Program Evaluation Standards (Propriety) addresses Human Rights and Respect (P3).

Yet diversity goes beyond the topics covered in these documents.

Daryl G. Smith, Professor at Claremont Graduate University, has developed an informed framework that provides a practical and valuable catalyst for considering diversity in the context of individual institutions. I think it has implications for evaluation whether you are at a university or a not-for-profit. I believe it has particular relevance for those of us who work in Extension.

Her model looks like this:

This model was found in the document titled "Campus Diversity Initiative: Current Status, Anticipating the Future." (The fine print identifies the book from which it is taken; if you want to read the book, copy and paste the title into your search engine.)


I’ve used this model a lot to help me see diversity in ways other than gender and race/ethnicity, the usual ways diversity is identified at a university. For example: urban vs. rural; new to something vs. been at that something for a while; engaged vs. outreached; doing with vs. doing to. A wealth of evaluation questions can be generated when diversity is reconsidered.

Some examples are:

1. How accessible is the program to county officials?

2. What other measures of success could have been used?

3. How have the local economic conditions affected vitality? Would those conditions affect viability as well?

4. What characteristics were missed by not collecting educational level?

5. How could scholarship be redefined to be relevant to this program?

6. How welcoming and inclusive is this program?

7. How does background and county origin affect participation?

8. What difference does appointed as opposed to elected status make?

9. How accessible is the program to faculty across the Western Region?

10. What measures of success could be used?

11. How have the local economic conditions affected vitality? Would those conditions affect viability as well? (A question not specifically addressed.)

12. How welcoming and inclusive is this program?

13. How does background and program area affect participation?

Keep in mind that these questions were program specific and were not part of the stated agenda for evaluating program effectiveness. My question is: should they have been? Probably. At the very least, they needed to be considered in the planning stages.



The topic of survey development seems to be popping up everywhere — AEA365, Kirkpatrick Partners, and the eXtension Evaluation Community of Practice, among others. Because survey development is so important to Extension faculty, I’m providing links and summaries.


AEA365 says:

“… it is critical that you pre-test it with a small sample first.”  Real-time testing helps eliminate confusion, improves clarity, and ensures that you are asking a question that will answer what you want to know. This is especially important today, when many surveys are electronic.

It is also important to “Train your data collection staff… Data collection staff are the front line in the research process.”  Since they are the people who will be collecting the data, they need to understand the protocols, the rationales, and the purposes of the survey.

Kirkpatrick Partners say:

“Survey questions are frequently impossible to answer accurately because they actually ask more than one question.”  This is the biggest problem in constructing survey questions. They provide some examples of questions that ask more than one thing.


Michael W. Duttweiler, Assistant Director for Program Development and Accountability at Cornell Cooperative Extension stresses the four phases of survey construction:

  1. Developing a Precise Evaluation Purpose Statement and Evaluation Questions
  2. Identifying and Refining Survey Questions
  3. Applying Golden Rules for Instrument Design
  4. Testing, Monitoring and Revising

He then indicates that the next three blog posts will cover points 2, 3, and 4.

Probably my favorite recent post on surveys is one that Jane Davidson wrote back in August 2012 about survey response scales. Her “boxers or briefs” example captures so many issues related to survey development.

Writing survey questions that give you usable data answering your questions about your program is a challenge, but it is not impossible. Dillman wrote the book on surveys; it should be on your desk.

Here is the Dillman citation:
Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009). Internet, mail, and mixed-mode surveys: The tailored design method. Hoboken, NJ: John Wiley & Sons, Inc.