I want to talk about learning. Real learning. This week I am borrowing a blog post from another writer intact. I have never done this. True, I have taken parts of blogs and quoted them. This post, from the blog “adapting to perpetual beta” by Harold Jarche, is reproduced here in its entirety because I think the topic is important. I have added the visuals except for the Rodin, which was in the original post.

Yes, it relates to evaluation. We learn (those who value evaluation) throughout our careers. The various forms of learning are engaged (see Edgar Dale, who designed the cone of learning, though not with the percentages that are usually attributed to the styles; this particular version was developed by Bruce Hyland based on Dale’s work). When you read the post below, think about how you learn. Engaged? Reflective?

real learning is not abstract

Posted 2016-06-20

Are we entering an era that heralds ‘The End of Reflection’, as this NY Times article suggests?


About two years ago, I conducted a 17-month hybrid evaluation preparation program for the Western Region Extension Service faculty. There were over 30 individuals involved. I was the evaluation expert; Jim Lindstrom (who was at WSU at the time) was the cheerleader, the encourager, the professional development person. I really couldn’t have done it without him. (Thank you, Jim.) Now, to maximize this program and make it available to others who were not able to participate, I’ve been asked to explore an option for creating an on-line version of the WECT (say west) program. It would be loaded through the OSU professional and continuing education (PACE) venue. To that end, I am calling on those of you who participated in the original program (and any other readers) to provide me with feedback on the following:

  1. What was useful?
  2. What needed to be added?
  3. What could be more in depth?
  4. What could be deleted?
  5. Other comments?

Please be as specific as possible.

I can go to the competency literature (of which there is a lot) and redevelop WECT from those guidelines.  (For more information on competencies see: King, J. A., Stevahn, L., Ghere, G., & Minnema, J. (2001). Toward a taxonomy of essential evaluator competencies. American Journal of Evaluation, 22(2), 229-247.) Or I could use the Canadian system as a foundation. (For more information see this link.)

I doubt if I can develop an on-line version that would cover (or do justice to) all those competencies.

So I turn to you, my readers. Let me know what you think.

my two cents.

molly.

“In reality, winning begins with accountability. You cannot sustain success without accountability. It is an absolute requirement!” (from walkthetalk.com.)

I’m quoting here.  I wish I had thought of this before I read it.  It is important in everyone’s life, and especially when evaluating.

 

Webster’s defines accountability as "…the quality or state of being accountable; an obligation (emphasis added) or willingness to accept responsibility for one’s actions."  The business dictionary goes a little further and defines accountability as "…the obligation of an individual (or organization) (parentheses added) to account for its activities, accept responsibility for them, and to disclose the results in a transparent manner."

It’s that last part to which evaluators need to pay special attention: the "disclose results in a transparent manner" part.  There is no one looking over your shoulder to make sure you do "the right thing"; that you read the appropriate document; that you report the findings you found, not what you know the client wants to hear.  If you maintain accountability, you are successful; you will win.

AEA has adopted a set of Guiding Principles for the organization and its members.  The principles are 1) Systematic inquiry; 2) Competence; 3) Integrity/Honesty; 4) Respect for people; and 5) Responsibilities for the General and Public Welfare.  I can see where accountability lies within each principle.  Can you?

AEA has also endorsed the Program Evaluation Standards, of which there are five as well.  They are:  1) Utility, 2) Feasibility, 3) Propriety, 4) Accuracy, and 5) Evaluation accountability.  Here, the developers were very specific and made accountability a specific category.  The Standard specifically states, "The evaluation accountability standards encourage adequate documentation of evaluations and a metaevaluative perspective focused on improvement and accountability for evaluation processes and products."

You may be wondering about the impetus for this discussion of accountability (or not…).  I have been reminded recently that only the individual can be accountable.  No outside person can do it for him or her.  If there is an assignment, it is the individual’s responsibility to complete the assignment in the time required.  If there is a task to be completed, it is the individual’s responsibility (and Webster’s would say obligation) to meet that responsibility.  It is the evaluator’s responsibility to report the results in a transparent manner–even if it is not what was expected or wanted.  As evaluators, we are adults (yes, some evaluation is completed by youth; they are still accountable) and, therefore, responsible, obligated, accountable.  We are each responsible–not the leader, the organizer, the boss.  Each of us.  Individually.  When you are in doubt about your responsibility, it is your RESPONSIBILITY to clarify that responsibility in whatever way works best for you.  (My rule to live by number 2:  Ask.  If you don’t ask, you won’t get; if you do, you might not get.)

Remember, only you are accountable for your behavior–No. One. Else.  Even in an evaluation; especially in an evaluation.


Last week, I spoke about how-to questions and applying them to program planning, evaluation design, evaluation implementation, data gathering, data analysis, report writing, and dissemination.  I only covered the first four of those topics.  This week, I’ll give you my favorite resources for data analysis.

This list is more difficult to assemble.  This is typically where the knowledge links break down and interest is lost.  The thinking goes something like this:  I’ve conducted my program, I’ve implemented the evaluation, now what do I do?  I know my program is a good program, so why do I need to do anything else?

YOU need to understand your findings.  YOU need to be able to look at the data and be able to rigorously defend your program to stakeholders.  Stakeholders need to get the story of your success in short, clear messages.  And YOU need to be able to use the findings in ways that will benefit your program in the long run.

Remember the list from last week?  The RESOURCES for EVALUATION list?  The one that says:

1.  Contact your evaluation specialist.

2.  Listen to stakeholders–that means including them in the planning.

3.  Read.

Good.  This list still applies, especially the read part.  Here are the readings for data analysis.

First, it is important to know that there are two kinds of data–qualitative (words) and quantitative (numbers).  (As an aside, many folks think words that describe are quantitative data–they are still words even if you give them numbers for coding purposes, so treat them like words, not numbers).
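To make the "treat them like words" point concrete, here is a minimal sketch in Python (the responses and the coding scheme are made up for illustration) showing why averaging category codes misleads, while counting categories does not.

    # Coded qualitative responses are labels, not measurements,
    # so count them rather than averaging them.
    from collections import Counter

    responses = ["very helpful", "somewhat helpful", "very helpful", "not helpful"]
    codes = {"not helpful": 1, "somewhat helpful": 2, "very helpful": 3}
    coded = [codes[r] for r in responses]

    # Misleading: treats the category labels as if they were quantities.
    mean_of_codes = sum(coded) / len(coded)   # 2.25 -- a number with no real meaning here

    # Appropriate: report how often each category occurred.
    frequencies = Counter(responses)
    print(frequencies)   # e.g. Counter({'very helpful': 2, 'somewhat helpful': 1, 'not helpful': 1})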

  • Qualitative data analysis. When I needed to learn about what to do with qualitative data, I was given Miles and Huberman’s book.  (Sadly, both authors are deceased so there will not be a forthcoming revision of their 2nd edition, although the book is still available.)

Citation: Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage Publications.

Fortunately, there are newer options, which may be as good.  I will confess, I haven’t read them cover to cover at this point (although they are on my to-be-read pile).

Citation:  Saldana, J.  (2009). The coding manual for qualitative researchers. Los Angeles, CA: Sage.

Bernard, H. R. & Ryan, G. W. (2010).  Analyzing qualitative data. Los Angeles, CA: Sage.

If you don’t feel like tackling one of these resources, Ellen Taylor-Powell has written a short piece  (12 pages in PDF format) on qualitative data analysis.

There are software programs for qualitative data analysis that may be helpful (Ethnograph, NUD*IST, others).  Most people I know prefer to code manually; even if you use a software program, you will need to do a lot of coding manually first.

  • Quantitative data analysis. Quantitative data analysis is just as complicated as qualitative data analysis.  There are numerous statistical books which explain what analyses need to be conducted.  My current favorite is a book by Neil Salkind.

Citation: Salkind, N. J. (2004).  Statistics for people who (think they) hate statistics. (2nd ed. ). Thousand Oaks, CA: Sage Publications.

NOTE:  There is a 4th ed. with a 2011 copyright available.  He also has a version of this text that features Excel 2007.  I like Chapter 20 (The Ten Commandments of Data Collection) a lot.  He doesn’t talk about the methodology; he talks about logistics.  Considering the logistics of data collection is really important.

Also, you need to become familiar with a quantitative data analysis software program–like SPSS, SAS, or even Excel.  One copy goes a long way–you can share the cost and share the program–as long as only one person is using it at a time.  Excel is a program that comes with Microsoft Office.  Each of these has tutorials to help you.
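If you want to see what the most basic descriptive output looks like before opening one of those packages, here is a minimal sketch in Python (a free alternative I am adding for illustration only; the scores are hypothetical) of the kind of summary you would otherwise produce in Excel, SPSS, or SAS.

    # Descriptive summary of a small, made-up set of post-test scores.
    import statistics

    scores = [72, 85, 90, 66, 78, 88, 95, 70]   # hypothetical quantitative data

    print("n      =", len(scores))
    print("mean   =", round(statistics.mean(scores), 1))
    print("median =", statistics.median(scores))
    print("stdev  =", round(statistics.stdev(scores), 1))   # sample standard deviation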

A part of my position is to build evaluation capacity.  This has many facets–individual, team, institutional.

One way I’ve always seen as building capacity is knowing where to find the answers to the how-to questions.  Those how-to questions apply to program planning, evaluation design, evaluation implementation, data gathering, data analysis, report writing, and dissemination.  Today I want to give you resources to build your tool box.  These resources build capacity only if you use them.

RESOURCES for EVALUATION

1.  Contact your evaluation specialist.

2.  Listen to stakeholders–that means including them in the planning.

3.  Read.

If you don’t know what to read to give you information about a particular part of your evaluation, see resource Number 1 above.  For those of you who do not have the luxury of an evaluation specialist, I’m providing some reading resources below (some of which I’ve mentioned in previous blogs).

1.  For program planning (aka program development):  Ellen Taylor-Powell’s web site at the University of Wisconsin Extension.  Her web site is rich with information about program planning, program development, and logic models.

2.  For evaluation design and implementation:  Jody Fitzpatrick’s book.

Citation:  Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches and practical guidelines.  (3rd ed.).  Boston: Pearson Education, Inc.

3.  For evaluation methods:  the right resource depends on the method you want to use for data gathering; these books don’t cover evaluation design, though.

  • For needs assessment, the books by Altschuld and Witkin (there are two).

(Yes, needs assessment is an evaluation activity).

Citation:  Witkin, B. R. & Altschuld, J. W. (1995).  Planning and conducting needs assessments: A practical guide. Thousand Oaks, CA:  Sage Publications.

Citation:  Altschuld, J. W. & Witkin B. R. (2000).  From needs assessment to action: Transforming needs into solution strategies. Thousand Oaks, CA:  Sage Publications, Inc.

  • For survey design:  Don Dillman’s book.

Citation:  Dillman, D. A., Smyth, J. D., & Christian, L. M. (2009).  Internet, mail, and mixed-mode surveys:  The tailored design method.  (3rd. ed.).  Hoboken, NJ: John Wiley & Sons, Inc.

  • For focus groups:  Dick Krueger’s book.

Citation:  Krueger, R. A. & Casey, M. A. (2000).  Focus groups:  A practical guide for applied research. (3rd. ed.).  Thousand Oaks, CA: Sage Publications, Inc.

  • For case study:  Robert Yin’s classic OR

Bob Brinkerhoff’s book. 

Citation:  Yin, R. K. (2009). Case study research: Design and methods. (4th ed.). Thousand Oaks, CA: Sage, Inc.

Citation:  Brinkerhoff, R. O. (2003).  The success case method:  Find out quickly what’s working and what’s not. San Francisco:  Berrett-Koehler Publishers, Inc.

  • For multiple case studies:  Bob Stake’s book.

Citation:  Stake, R. E. (2006).  Multiple case study analysis. New York: The Guilford Press.

Since this post is about capacity building, a resource for evaluation capacity building:

Hallie Preskill and Darlene Russ-Eft’s book.

Citation:  Preskill, H. & Russ-Eft, D. (2005).  Building evaluation capacity: 72 activities for teaching and training. Thousand Oaks, CA: Sage Publications.

I’ll cover reading resources for data analysis, report writing, and dissemination another time.