What do I know that they don’t know?
What do they know that I don’t know?
What do all of us need to know that few of us know?

These three questions have buzzed around my head for a while, in various forms.

When I attend a conference, I wonder.

When I conduct a program, I wonder, again.

When I explore something new, I am reminded that perhaps someone else has been here and wonder, yet again.

Thinking about these questions, I had these ideas:

  • I see the first question relating to capacity building;
  • the second question relating to engagement; and
  • the third question (tying the first two together) relating to cultural competence.

After all, don’t both capacity building and engagement relate to a “foreign country” and a different culture?

How does all this relate to evaluation?  Read on…

Premise:  Evaluation is an everyday activity.  You evaluate every day, all the time; you call it making decisions.  Every time you make a decision, you are building capacity in your ability to evaluate.  Sure, some of those decisions may need to be revised.  Sure, some of those decisions may just yield “negative” results.  Even so, you are building capacity.  AND you share that knowledge–with your children (if you have them), with your friends, with your colleagues, with the random shopper in the (grocery) store.  That is building capacity.  Building capacity can be systematic, organized, sequential.  Sometimes formal, scheduled, deliberate.  It is sharing “What do I know that they don’t know?” (in the hope that they too will know it and use it).

Premise:  Everyone knows something.  In knowing something, evaluation happens–because people made decisions about what is important and what is not.  To really engage (not just outreach, which is much of what Extension does), one needs to “do as” the group that is being engaged.  To do anything else (“doing to” or “doing with”) is simply outreach, and little or no knowledge is exchanged.  That doesn’t mean that knowledge isn’t distributed; Extension has been doing that for years.  It just means that the assumption (and you know what assumptions do) is that only the expert can distribute knowledge.  Who is to say that the group (target audience, participants) isn’t expert in at least part of what is being communicated?  It probably is.  It is the idea that … they know something that I don’t know (and I would benefit from knowing).

Premise:  Everything, everyone is connected.  Being prepared is the best way to learn something.  Being prepared by understanding culture (I’m not talking only about the intersection of race and gender; I’m talking about all the stereotypes you carry with you all the time) reinforces connections.  Learning about other cultures (something everyone can do) helps dispel stereotypes and mitigate stereotype threats.  And that is an evaluative task.  Think about it.  I think it captures the “What do all of us need to know that few of us know?” question.


The US elections are over; the analysis is mostly done; the issues are still issues.  Welcome, the next four years.  As Dickens wrote, “It was the best of times, it was the worst of times.”  Which, you ask?  It all depends–and that is the evaluative question of the day.

So what do you need to know now?  You need to help someone answer the question, Is it effective?  OR (maybe) Did it make a difference?

The Canadian Evaluation Society, the Canadian counterpart to the American Evaluation Association, has put together a series (six so far) of pamphlets for new evaluators.  This week, I’ve decided to go back to the beginning and promote evaluation as a profession.

Gene Shackman (no picture could be found) originally organized these brief pieces and is willing to share them.  Gene is an applied sociologist and director of the Global Social Change Research Project.  His first contribution was in December 2010; the most current, November 2012.

Hope these help.

Although this was CES’s fourth post (in July 2011), I believe it is something that evaluators (and those who woke up and found out they were evaluators) need before any of the other booklets.  Even though there will probably be strange and unfamiliar words in the booklet, it provides a foundation.  Every evaluator will know some of these words; some will be new; some will be context specific.  Every evaluator needs to have a comprehensive glossary of terminology.  The glossary was compiled originally by the International Development Evaluation Association.  It is 65 pages long and is available for download in English, French, and Arabic.

CES is also posting a series (five as of this post) that Gene Shackman put together.  The first booklet, posted by CES in December 2010, is called “What is program evaluation?” and is a 17-page booklet introducing program evaluation.  Shackman tells us that “this guide is available as a set of smaller pamphlets…” here.

In January 2011, CES published the second of these booklets.  “Evaluation questions” addresses the key questions about program evaluation and is three pages long.

CES posted the third booklet in April 2011.  It is called “What methods to use” and can be found here.  Shackman briefly discusses the benefits and limitations of qualitative and quantitative methods, the two main approaches to answering evaluation questions.  A third approach that has gained credibility is mixed methods.

The next booklet, posted by CES in October 2012, is on surveys.  It “…explains what they are, what they are usually used for, and what typical questions are asked… as well as the pros and cons of different sampling methods.”

The most recent booklet, just posted (November 2012), is about qualitative methods such as focus groups and interviews.

One characteristic of these five booklets is the additional resources that Shackman lists for each topic.  I have my favorites (and I’ve mentioned them from time to time); those new to the field need to develop their own favorite sources.

What is important is that you embrace the options…this is  only one way to look at evaluation.


I spent much of the last week thinking about what I would write on November 7, 2012.

Would I know anything before I went to bed?  Would I like what I knew?  Would I breathe a sigh of relief?

Yes, yes, and yes, thankfully.  We are one nation and one people and the results of yesterday demonstrate that we are also evaluators.

Yesterday is a good example that every day we evaluate.  (What is the root of the word evaluation?)  We review a program (in this case the candidates); we determine the value (what they say they believe); we develop a rubric (criteria); we support those values and those criteria; and we apply those criteria (vote).  Yesterday over 117 million people did just that.  Being a good evaluator, I can’t just talk about the respondents without talking about the total population–the total number of possible respondents.  One estimate puts registered voters at 169 million: 86 million Democrats, 55 million Republicans, and 28 million others.  The total response rate for this evaluation was 69.2%.  Very impressive–especially given the long lines.  (Something the President said needed fixing [I guess he is an evaluator, too].)
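For readers who like to see the arithmetic, here is a minimal sketch of that response-rate calculation, using the rough figures quoted above (estimates, not official counts):

```python
# A minimal sketch of the response-rate arithmetic; the figures are the
# rough estimates quoted in this post, not official counts.
respondents = 117_000_000   # approximate number of people who voted
registered = 169_000_000    # one estimate of registered voters

response_rate = respondents / registered
print(f"Response rate: {response_rate:.1%}")   # prints "Response rate: 69.2%"
```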

I am reminded that Senators and Representatives are elected to represent the voice of the people.  Their job is to represent you.  If they do not fulfill that responsibility, it is our responsibility to do something about it.  If you don’t hold them accountable, you can’t complain about the outcome.  Another evaluative activity.  (Did I ever tell you that evaluation is a political activity…?)  Our job as evaluators doesn’t stop when we cast our ballot; our job continues throughout the life of the program (in this case, the term in office).  Our job is to use those evaluation results to make things better.  Often, use is ignored.  Often, the follow-through is missing.  As evaluators, we need to come full circle.

Evaluation is an everyday activity.


What is the difference between need to know and nice to know?  How does this affect evaluation?  I got a post this week on a blog I follow (Kirkpatrick) that asks how much data a trainer really needs.  (Remember that Don Kirkpatrick developed and established an evaluation model for professional training back in 1954 that still holds today.)

Most Extension faculty don’t do training programs per se, although there are training elements in Extension programs.  Extension faculty are typically looking for program impacts in their program evaluations.  Program improvement evaluations, although necessary, are not sufficient.  Yes, they provide important information to the program planner; they don’t necessarily give you information about how effective your program has been (i.e., outcome information). (You will note that I will use the term “impacts” interchangeably with “outcomes” because most Extension faculty parrot the language of reporting impacts.)

OK.  So how much data do you really need?  How do you determine what is nice to have and what is necessary (need) to have?  How do you know?

  1. Look at your logic model.  Do you have questions that reflect what you expect to have happen as a result of your program?
  2. Review your goals.  Review your stated goals, not the goals you think will happen because you “know you have a good program”.
  3. Ask yourself, How will I USE these data?  If the data will not be used to defend your program, you don’t need them.
  4. Does the question describe your target audience?  Although not demonstrating impact, knowing what your target audience looks like is important.  Journal articles and professional presentations want to know this.
  5. Finally, ask yourself, Do I really need to know the answer to this question, or will it burden the participant?  If it is a burden, your participants will tend not to answer, and then you will have a low response rate; not something you want.

Kirkpatrick also advises avoiding redundant questions.  That means questions asked in a number of ways that give you the same answer; questions written in positive and negative forms.  The other question that I always include, because it gives me a way to determine how my program is making a difference, is a question on intention that includes a time frame.  For example, “In the next six months, do you intend to try any of the skills you learned today?  If so, which one?”  Mazmanian has identified stated intention to change as the best predictor of behavior change (a measure of making a difference).  Telling someone else makes the participant accountable.  That seems to make the difference.

 

Reference:

Mazmanian, P. E., Daffron, S. R., Johnson, R. E., Davis, D. A., & Kantrowitz, M. P. (1998).  Information about barriers to planned change: A randomized controlled trial involving continuing medical education lectures and commitment to change.  Academic Medicine, 73(8).

 

P.S.  No blog next week; away on business.


The topic of complexity has appeared several times over the last few weeks.  Brian Pittman wrote about it in an AEA365 post; Charles Gasper used it as the topic of his most recent blog.  Much food for thought, especially as it relates to the work evaluators do.

Simultaneously, Harold Jarche talks about connections.  To me, connections and complexity are two sides of the same coin.  Something which is complex typically has multiple parts.  Something which has multiple parts is connected to the other parts.  Certainly the work done by evaluators has multiple parts; certainly those parts are connected to each other.  The challenge we face is logically defending those connections and, in doing so, making the parts explicit.  Sound easy?  It’s not.

 

That’s why I stress modeling your project before you implement it.  If the project is modeled, the model often leads you to discover that what you thought would happen because of what you do, won’t.  You have time to fix the model, fix the program, and fix the evaluation protocol.  If your model is defensible and logical, you still may find out that the program doesn’t get you where you want to go.  Jonny Morell writes about this in his book, Evaluation in the face of uncertainty.  There are worse things than having to fix the program or fix the evaluation protocol before implementation.  Keep in mind that connections are key; complexity is everywhere.  Perhaps you’ll have an Aha! moment.

 

I’ll be on holiday and there will not be a post next week.  Last week was an odd week–an example of complexity and connections leading to unanticipated outcomes.

 

Bright ideas are often the result of  “Aha” moments.  Aha moments  are “The sudden understanding or grasp of a concept…an event that is typically rewarding and pleasurable.  Usually, the insights remain in our memory as lasting impressions.” — Senior News Editor for Psych Central.

How often have you had an “A-ha” moment when you are evaluating?  A colleague had one, maybe several, that made an impression on her.  Talk about building capacity–this did.  She has agreed to share that experience, soon (the bright idea).

Not only did it make an impression on her, her telling me made an impression on me.  I am once again reminded of how much I take evaluation for granted.  Because evaluation is an everyday activity, I often assume that people know what I’m talking about.  We all know what happens when we assume something….  I am also reminded how many people don’t know what I consider basic  evaluation information, like constructing a survey item (Got  Dillman on your shelf, yet?).

 

What is this symbol (√) called?  No, it is not the square root sign–although that is its function.  “It’s called a radical…because it gets at the root…the definition of radical is: of or going to the root or origin.” – Guy McPherson

How radical are you?  How does that relate to evaluation, you wonder?  Telling truth to power is a radical concept (the definition here is a departure from the usual or traditional); one to which evaluators who hold integrity sacrosanct adhere.  (It is the third AEA guiding principle.)  Evaluators often, if they are doing their job right, have to speak truth to power–because the program wasn’t effective, or it resulted in something different than what was planned, or it cost too much to replicate, or it just didn’t work out.  Funders, supervisors, and program leaders need to know the truth as you found it.


“Those who seek to isolate will become isolated themselves.” – Diederick Stoel.  This sage piece of advice is the lead for Jim Kirkpatrick’s quick tip for evaluating training activities.  He says, “Attempting to isolate the impact of the formal training class at the start of the initiative is basically discounting and disrespecting the contributions of other factors…Instead of seeking to isolate the impact of your training, gather data on all of the factors that contributed to the success of the initiative, and give credit where credit is due. This way, your role is not simply to deliver training, but to create and orchestrate organizational success. This makes you a strategic business partner who contributes to your organization’s competitive advantage and is therefore indispensable.”  Extension faculty conduct a lot of trainings and want to take credit for the training’s effectiveness.  It is important to recognize that there may be other factors at work–mitigating factors, intermediate factors, even confounding factors.  As much as Extension faculty want to isolate (i.e., take credit), it is important to share the credit.

I started this post back in April.  I had an idea that needed to be remembered…it had to do with the unit of analysis, a question which often occurs in evaluation.  To increase sample size and, therefore, power, evaluators often choose to run analyses on the larger number when the aggregate (i.e., the smaller number) is probably the “true” unit of analysis.  Let me give you an example.

A program is randomly assigned to fifth grade classrooms in three different schools.  School A has three classrooms; school B has two classrooms; and school C has one classroom.  Altogether, there are approximately 180 students, six classrooms, and three schools.  What is the appropriate unit of analysis?  Many people use students, because of the sample size issue.  Some people will use classrooms because each one got a different treatment.  Occasionally, some evaluators will use schools because that is the unit of randomization.  This issue elicits much discussion.  Some folks say that because students are in the school, they are really the unit of analysis because they are embedded in the randomization unit.  Some folks say that students are the best unit of analysis because there are more of them.  That certainly is the convention.  What you need to do is decide what the unit is and be able to defend that choice.  Even though I would lose power, I think I would go with the unit of randomization.  Which leads me to my next point–truth.
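To make that trade-off concrete, here is a minimal sketch (hypothetical data and invented column names, not any actual study) of moving the analysis up to the unit of assignment by aggregating student scores to classroom means before comparing groups:

```python
# A minimal sketch, with hypothetical data and invented column names, of
# analyzing at the classroom level rather than the student level.
import pandas as pd
from scipy import stats

# Assume students.csv holds one row per student with columns:
# school, classroom, condition ("program" or "control"), score
students = pd.read_csv("students.csv")

# Collapse ~180 student scores into one mean score per classroom (six rows).
classrooms = (
    students
    .groupby(["school", "classroom", "condition"], as_index=False)
    .agg(mean_score=("score", "mean"))
)

program = classrooms.loc[classrooms["condition"] == "program", "mean_score"]
control = classrooms.loc[classrooms["condition"] == "control", "mean_score"]

# Compare group means at the classroom level; far fewer degrees of freedom
# (hence less power), but the test matches how the treatment was assigned.
t_stat, p_value = stats.ttest_ind(program, control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

With only six classroom means the test has very few degrees of freedom, which is exactly the loss of power mentioned above; a multilevel (mixed) model is another common way to respect the nesting without discarding the student-level data.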

At the end of the first paragraph, I put the word “true” in quotation marks.  The Kirkpatricks, in their most recent blog, opened with a quote from the US CIA headquarters in Langley, Virginia: “And ye shall know the truth, and the truth shall make you free.”  (We won’t talk about the fiction in the official discourse today…)  (Don Kirkpatrick developed the four levels of evaluation specifically for the training and development field.)  Jim Kirkpatrick, Don’s son, posits that, “Applied to training evaluation, this statement means that the focus should be on discovering and uncovering the truth along the four levels path.”  I will argue that the truth is how you (the principal investigator, program director, etc.) see the answer to the question.  Is that truth with an upper case “T” or is that truth with a lower case “t”?  What do you want it to mean?

Like history (history is what is written, usually by the winners, not what happened), truth becomes what you want the answer to mean.  Jim Kirkpatrick offers an addendum (also from the CIA), that of “actionable intelligence.”  He goes on to say that, “Asking the right questions will provide data that gives (sic) us information we need (intelligent) upon which we can make good decisions (actionable).”  I agree that asking the right question is important–probably the foundation on which an evaluation is based.  Making “good decisions” is in the eye of the beholder–what do you want it to mean?

“Resilience = Not having all of your eggs in one basket.

Abundance = having enough eggs.”

Borrowed from and appearing in Harold Jarche’s blog post, Models, flows, and exposure, posted April 28, 2012.

 

In January, John Hagel blogged in  Edge Perspectives:  “If we are not enhancing flow, we will be marginalized, both in our personal and professional life. If we want to remain successful and reap the enormous rewards that can be generated from flows, we must continually seek to refine the designs of the systems that we spend time in to ensure that they are ever more effective in sustaining and amplifying flows.”

That is a powerful message.  Just how do we keep from being marginalized, especially when there is a shifting paradigm?  How does that relate to evaluation?  What exactly do we need to do to keep evaluation skills from being lost in the shift and becoming marginalized?  Good questions.

The priest at the church I attend is retiring after 30 years of service.  This is a significant and unprecedented change (at least in my tenure there).  Before he left for summer school in Minnesota, he gave the governing board a pep talk that has relevance to evaluation.  He posited that what we needed to do was not focus on what we need, but rather focus on the strengths and assets we currently have and build on them.  No easy task, to be sure.  And not the usual approach for an interim.  The usual approach is: what do we want; what do we need for this interim.  See the shifting paradigm?  I hope so.

Needs assessment often takes the same approach–what do you want; what do you need.  (Notice the use of the word “you” in this sentence; more on that later in another post.)  A well-intentioned evaluator recognizes that something is missing or lacking and conducts a needs assessment documenting that need/lack/deficit.  What would happen, do you think, if the evaluator documented what assets existed and developed a program to build that capacity?  Youth leadership development has been building programs around assets for many years (see citations below).  The approach taken by youth development professionals is that there are certain skills, or assets, which, if strengthened, build resilience.  By building resilience, needs are mitigated; problems are solved or avoided; goals are met.

So what would happen if, when conducting a “needs” assessment, an evaluator actually conducted an asset assessment and developed programs to benefit the community by building capacity which strengthened assets and built resiliency?  Have you ever tried that approach?

By focusing on strengths and assets instead of weaknesses and liabilities, programs could be built that would benefit more than a vocal minority.  The greater whole could benefit.  Wouldn’t that be novel?  Wouldn’t that be great!

Citations:

1.  Benson, P. L. (1997).  All Kids Are Our Kids.  San Francisco: Jossey-Bass Publishers.

2.  Silbereisen, R. K. & Lerner, R. M. (2007).  Approaches to Positive Youth Development. Los Angeles: Sage Publications.

 

An important question that evaluators ask is, “What difference is this program making?”  Followed quickly with, “How do you know?”

Recently, I happened on a blog called {grow}, and the author, Mark Schaefer, had a post called “Did this blog make a difference?”  Since this is a question I, as an evaluator, am always asking, I jumped to the page.  Mr. Schaefer is in marketing, and as a marketing expert he says the following: “You’re in marketing for one reason: Grow. Grow your company, reputation, customers, impact, profits. Grow yourself. This is a community that will help. It will stretch your mind, connect you to fascinating people, and provide some fun along the way.”  So I wondered how relevant this blog would be to me and other evaluators, whether they blog or not.

Mr. Schaefer is taking stock of his blog–a good thing to do for a blog that has been around for a while.  So although he lists four innovations, he asks the reader to “…be the judge if it made a difference in your life, your outlook, and your business.”  The four innovations are:

  1. Paid contributing columnists.  He actually paid the folks who contributed to his blog; not something those of us in Extension can do.
  2. {growtoons}. Cartoons designed specifically for the blog that “…adds an element of fun and unique social media commentary.”  Hmmm…
  3. New perspectives. He showcased fresh deserving voices; some that he agreed with and some that he did not.  A possibility.
  4. Video. He did many video blogs and that gave him the opportunity to “…shine the light on some incredible people…”  He interviews folks and posts the short video.  Yet another possibility.

His approach seems really different from what I do.  Maybe it is the content; maybe it is the cohort; maybe it is something else.  Maybe there is something to be learned from what he does.  Maybe this blog is making a difference.  Only I don’t know.  So, I take a cue from Mr. Schaefer and ask you to judge whether it has made a difference in what you do–then let me know.  I’ve embedded a link to a quick survey that will NOT link to you nor in any way identify you.  I will only be using the findings for program improvement.  Please let me know.  Click here to link to the survey.

 

Oh, and I won’t be posting next week–spring break and I’ll be gone.

 

Ellen Taylor-Powell, UWEX Evaluation Specialist Emeritus, presented via webinar from Rome to the WECT (say “west”) cohorts today.  She talked about program planning and logic modeling.  The logic model format that Ellen developed was picked up by USDA (now NIFA) and disseminated across Extension.  That dissemination had an amazing effect on Extension, so much so that most Extension faculty know the format and can use it for their programs.
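For readers who have not seen the format, here is a minimal sketch of the logic model chain (situation, inputs, activities, outputs, and short-, medium-, and long-term outcomes).  The entries are invented placeholders for illustration, not Ellen’s materials or any actual Extension program:

```python
# A minimal, hypothetical sketch of the logic model chain; the entries are
# invented placeholders, not a real Extension program.
logic_model = {
    "situation":  "community members lack food-preservation skills",
    "inputs":     ["faculty time", "curriculum", "funding"],
    "activities": ["hands-on food-preservation workshops"],
    "outputs":    ["4 workshops delivered", "60 participants reached"],
    "outcomes": {
        "short term":  "participants know safe preservation practices",
        "medium term": "participants preserve food safely at home",
        "long term":   "fewer home food-safety incidents",
    },
}

for component, value in logic_model.items():
    print(f"{component}: {value}")
```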

 

Ellen went further today than the resources located through hyperlinks on the UWEX website.  She cited the work of Sue Funnell and Patricia J. Rogers, Purposeful program theory: Effective use of theories of change and logic models, published in March 2011.  Here is what the publisher (Jossey-Bass, an imprint of Wiley) says:

Between good intentions and great results lies a program theory—not just a list of tasks but a vision of what needs to happen, and how. Now widely used in government and not-for-profit organizations, program theory provides a coherent picture of how change occurs and how to improve performance. Purposeful Program Theory shows how to develop, represent, and use program theory thoughtfully and strategically to suit your particular situation, drawing on the fifty-year history of program theory and the authors’ experiences over more than twenty-five years.

Two reviewers whom I have mentioned before, Michael Quinn Patton and E. Jane Davidson, say the following:

“From needs assessment to intervention design, from implementation to outcomes evaluation, from policy formulation to policy execution and evaluation, program theory is paramount. But until now no book has examined these multiple uses of program theory in a comprehensive, understandable, and integrated way. This promises to be a breakthrough book, valuable to practitioners, program designers, evaluators, policy analysts, funders, and scholars who care about understanding why an intervention works or doesn’t work.” —Michael Quinn Patton, author, Utilization-Focused Evaluation

“Finally, the definitive guide to evaluation using program theory! Far from the narrow ‘one true way’ approaches to program theory, this book provides numerous practical options for applying program theory to fulfill different purposes and constraints, and guides the reader through the sound critical thinking required to select from among the options. The tour de force of the history and use of program theory is a truly global view, with examples from around the world and across the full range of content domains. A must-have for any serious evaluator.” —E. Jane Davidson, PhD, Real Evaluation Ltd.

Jane is the author of the book Evaluation Methodology Basics: The nuts and bolts of sound evaluation, published by Sage.  This book “…provides a step-by-step guide for doing a real evaluation.  It focuses on the main kinds of “big picture” questions that evaluators usually need to answer, and how the nature of such questions is linked to evaluation methodology choices.”  And although Ellen didn’t specifically mention this book, it is a worthwhile resource for nascent evaluators.

Two other resources were mentioned today.  One was Jonny Morell’s book, Evaluation in the face of uncertainty: Anticipating surprise and responding to the inevitable, published by Guilford Press.  Ellen also mentioned John Mayne and his work on contribution analysis.  A quick web search provided this reference: Mayne, J. (2008). Contribution analysis: An approach to exploring cause and effect. ILAC Brief No. 16. Rome, Italy: Institutional Learning and Change (ILAC) Initiative.  I’ll talk more about contribution analysis next week in TIMELY TOPICS.

 

If those of you who listened to Ellen remember other sources that she mentioned, let me know and I’ll put them here next week.