Part VI
B. Questions About Evaluation

QUESTION 1:

How can designing and carrying out an evaluation benefit my organization and me?

Many people are skeptical about investing time, money, and energy into designing and carrying out an evaluation of their program, and perhaps with good reason. After all, their purpose is to get things done, not to do studies. They may also have had some experience with evaluations in the past that were not very useful. Perhaps they carried out the evaluation only because someone (e.g., the funder) told them it had to be done.

Keep in mind that, while you may have had negative prior experiences with evaluation, an evaluation that is well done can be very helpful.

How an Effective Evaluation Can Help You and Your Organization

• It can reaffirm what you and your organization are doing well and, in so doing, give you further confidence to move ahead.

• It can help you to learn from past experience in such a way that you can make mid-course adjustments to further improve your program.

• It can help you and your organization make some strategic decisions about the future of your program (e.g., Should it be continued or not? Should you try some new strategies?).

• It can reveal benefits of your program that you may not have detected and which, in turn, may open up new opportunities or help you in designing future programs.

• It can provide you and your organization with something to offer others: potential funders, to justify future funding; potential clients, to promote demand for your services; others working in similar areas, to share your experiences.

However, there are also pitfalls to a poorly designed and executed evaluation:

• It can be a waste of time and money and just another bureaucratic hoop to jump through.

• It can hurt the image of your program.

• It can be personally disempowering and ultimately a threat to you and your staff.

 

Sample Case: A Positive Evaluation Experience

In late 1997 a retired official from the United States Agency for International Development (USAID) with extensive evaluation experience approached the Peruvian Institute for Education in Human Rights and Peace (IPEDEHP), a well-established Peruvian nongovernmental organization (NGO). She offered to conduct a case study on IPEDEHP's program to train community leaders in human rights, democracy, and citizen participation. The purpose of the study was to assess the impact of IPEDEHP's program on the community leaders themselves, their families, and those in their communities with whom they shared what they had learned from the training.

The outcome, a combined evaluation and case study, entitled Weaving Ties of Friendship, Trust and Commitment to Build Democracy and Human Rights in Peru, was published in early 1999. It has subsequently been disseminated widely throughout the world to human rights groups and specifically to organizations working in or interested in human rights education.

The study brought many benefits to IPEDEHP, among them:

• Valuable feedback for IPEDEHP regarding what the leaders do with what they learn and the impact on their lives, for some a very pleasant surprise given the magnitude of the impact. This information helped IPEDEHP to confirm that they were on the right track.

• Useful information on what was working in the program and what needed to be fine-tuned. IPEDEHP was able to incorporate this information into the design of a proposal that they subsequently submitted to an outside donor for further funding for the community leaders program.

• Broad visibility and recognition from around the world for what IPEDEHP was doing, which, among other things, earned them the 1999 Distinguished Award for Building Cultures of Peace from Psychologists for Social Responsibility, a U.S.-based organization of some 3,000 psychologists.

• Increasing demand within and outside of Peru for their training materials, which also brought sales income.

 


QUESTION 2:

When and under what circumstances should I carry out an evaluation of my program?

People do not often ask this very important question when they plan an evaluation. Too frequently an evaluation is carried out because someone says "it's time" (e.g., "We've been operating for two years and we need to do an evaluation"; "It's the end of our program and we should evaluate it"; "Our funder says we must evaluate our program").

In fact, an evaluation should be carried out only when the following conditions are met:

• One or more key decisions must be made, and the evaluation will yield important input to help make those decisions.

• You can do something with the results you get (i.e., you are in a position to make changes based on the recommendations coming out of the evaluation).

• Principal audiences, stakeholders, and decision-makers support the evaluation and want to receive and use the results.

Under these circumstances, by all means take the time and effort at the beginning to design a good evaluation that will meet your needs.

By contrast, these are the circumstances under which you do not want to invest time and effort in doing an evaluation:

• You and your staff have little interest in evaluating your program. You know it is going well and don't believe an evaluation can tell you anything you don't already know.

• There is neither pressure nor a burning need to do an evaluation (e.g., to obtain further funding).

• Even if you get some valuable evaluation results, the circumstances don't permit you to act on them (e.g., your board has already made up its mind to take a course of action and evaluation results will not influence its decision).

• Principal audiences, stakeholders, and decision-makers are not interested in the idea of an evaluation or committed to using its results.

Under these circumstances, undertaking an evaluation runs the risk of being an expensive, time-consuming exercise in futility.

 

Sample Case: When To Do An Evaluation

The board of directors of a community-based human rights organization in the Midwest decided that it was time to evaluate the after-school human rights program for youth that it had been supporting for years. The program seemed to be doing fairly well, but it lacked vitality. Parents continued enrolling their children in the program, but after going up steadily for a number of years, enrollments had begun to level off and then decline. The board needed to figure out whether and how the program could be revitalized in order to determine whether it should be continued.

After serious reflection, the board brought in a person with extensive evaluation experience to guide them in getting started on the evaluation. This person helped them fine-tune the questions they were asking and design the evaluation with the funds they had available. As it turned out, they didn't need to pay anyone to do the evaluation, since a member of the board with prior evaluation experience volunteered to do it under the guidance of the outside expert, who also offered to work on a pro bono basis.

The board member and her advisor kept in close contact with the board throughout the process to make sure they were comfortable with the methodology. When they presented the results to the board, the recommendations made so much sense that the board immediately adopted them.

Two years later, the after-school human rights program is thriving, thanks to some very perceptive observations and recommendations made by the evaluation team. See Part VI, "Three Evaluation Scenarios," p. 150, for more detail on this case.

 

Sample Case: When Not To Do An Evaluation

A local school system had been implementing a human rights program in the upper elementary grades for several years. A special budget line from the local state senate funded the program. However, people in the school system didn't really like the design of the program and wanted to change it. They thought that by doing an external evaluation they could persuade the state senate to agree to a revision of the program. With great enthusiasm they designed a comprehensive evaluation, found an excellent evaluator, and raised the funds needed to do the evaluation.

Once the evaluation was completed, it showed that the program indeed had some serious design flaws. However, the individuals in the state senate were not interested in hearing the results. They had made up their minds that it was an important program, and no evaluation data were going to stop them from continuing it. Instead of reading the evaluation with interest, they rejected it out of hand, leaving those in the school system who had designed and carried out the evaluation very frustrated.

 


QUESTION 3:

How do I get started?

If you have satisfactorily answered the first two questions (i.e., you have identified a real need for evaluation and you are certain that the results will be used for decision-making purposes in your organization), then you are ready to get started.

The next step is to invest the necessary time and effort to answer the following questions thoughtfully and thoroughly:

A. Who is the audience for the evaluation?

In other words, who will be interested in reading the evaluation when it is done and who will implement its recommendations? Depending on who you are, what your organization is doing, and what decisions need to be made, you can have one or multiple audiences, among them:

• People designing and implementing programs;

• The Executive Director and/or Board of your organization;

• The agency funding your program;

• Clients and other groups you are trying to reach.

Once you have identified these different audiences, it is critical to take the time at the very beginning of the process to get everyone involved to identify what they want from the results of the evaluation. This step is often difficult, as these are busy people involved in many activities.

For this first step it is incumbent upon whoever is responsible for your evaluation to spend time with your audience(s), either individually or in small groups, to help them think through (1) what questions they might have that the evaluation can help to answer and (2) how, once they obtain the answers to these questions, they will use this information in their decision making. If you fail to take this time at the beginning to ensure their involvement and commitment, you run the risk of producing an evaluation that your key audience(s) will ignore when they come to make the very decisions this evaluation was intended to facilitate.

B. When will your audience(s) need the information from the evaluation?

It is very important to determine if the evaluation is to be used to make a specific decision. For example, if your audience is going to make a major decision in four months, then they need the evaluation report in time to use it in making this decision. Designing and carrying out an evaluation that reaches decision-makers one month after their decision has been made is pointless. The decision will already be made, and the evaluation (including all the time, effort, and resources that went into carrying it out) will be "history."

C. In what format should the evaluation be delivered?

You want to present the evaluation results in a way that is useful to your audience(s). Too often evaluations are prepared with very nice binders and lots of impressive charts and graphs, but readers can't easily find the crucial information they are looking for; even when they search, they often still cannot find the answers to their questions. Such evaluations often end up unread, gathering dust on a bookshelf or simply thrown away, along with all the time and money spent to create them.

To avoid this waste and frustration, make sure you package the results in a way that is easy to read, attractive, and responsive to the needs of the audience. Often an introductory three-to-five-page summary, primarily written with bullets, is enough. Sometimes charts and graphs are helpful.

If you want your audience to use the outcomes of the evaluation, you must take the time to determine how best to deliver these outcomes to them in a form they will read and use. Use the personal interactions described in point A above to get ideas from your audience about a "user-friendly" report.

D. How much money is available to carry out the evaluation?

Before going any further in planning your evaluation, be realistic and determine what your budget is for the evaluation. Sometimes, especially if you have a grant from an outside funder that requires one or more evaluations as part of the grant, you have some funds set aside for the evaluation. If not, then you either need to design an evaluation that can be done in-house by staff with possible volunteer support, or you need to look for funding for the evaluation.

The important thing is to be realistic: don't design an ideal evaluation that you can't afford. If you do, you are stopped before you start. If you don't have the necessary money but you REALLY need that evaluation, then strategize with your key audiences (especially if they are the funders and/or your board) on ways to get the funding you need. If possible, ask them to help you.

E. Who will carry out the evaluation?

You have several choices.

• You could do the evaluation in-house. If what you need is relatively straightforward (e.g., a review and update of your materials and/or your training course) and you have someone on your staff with the capabilities, you may be able to do this internally. You might also consult some evaluation resources and/or talk to people outside your organization who have extensive experience in evaluation.

• You could contract it out to an evaluation specialist. If what you need goes beyond your in-house expertise, then you will probably need to hire an outside evaluation expert who can do the evaluation for you.

• You could make the evaluation a joint effort, bringing in an outside specialist to help you and your staff design and carry out the evaluation. See Question 6 on page 144 for some guidelines in choosing an external evaluator.

For some examples of different scenarios in terms of audiences, timing for the evaluation report, substance of report, funding, and who does the evaluation, see Part VI, "Three Evaluation Scenarios," p. 150.

F. How elaborate must the evaluation be?

Some people think that, in order to be credible, evaluations must be "scientifically" carried out: one needs to do a pre-test and a post-test, have an experimental group that participated in the human rights education program and a control group that didn't, and be able to come up with statistical comparisons that show a significant difference between the experimental group and the control group.

While this has long been the standard method of evaluation, an increasing volume of literature argues that this approach, which is more characteristic of academic research, doesn't always work in the "real world," especially for programs that deal with ongoing social problems. More important is sizing up your audiences, deciding at the beginning what kind of information they are going to need to make a decision, and giving them the information in time to affect decisions. More often than not, the decision they need to make will not require an elaborate evaluation. They need only answers to a few rather straightforward questions.

Michael Quinn Patton, an experienced program evaluator and author of Utilization-Focused Evaluation, a widely used book on evaluation, explains this alternative method succinctly:

Decision-makers regularly face the need to take action with limited and imperfect information. They prefer more accurate information to less accurate information, but they also prefer some information to no information. … There is no rule of thumb that tells an evaluator how to focus an evaluation question. The extent to which a research question is broad or narrow depends on the resources available, time available, and the needs of decision-makers. These are choices not between good and bad, but among alternatives, all of which have merit.1


QUESTION 4:

What are some of the more commonly asked evaluation questions?

Some people are under the mistaken impression that there is a one-size-fits-all blueprint for carrying out evaluations. Unfortunately, this is not the case. The questions and methodologies will vary tremendously, based on the initial concerns that lead you to conduct an evaluation and your plans for using the results.

To guide you in this process, some of the more commonly asked evaluation questions are listed below. Which you use and what other questions you add will depend on your audiences and the information they need from the evaluation to make decisions.

Commonly Asked Evaluation Questions

• Has the project achieved its objectives (e.g., successfully developed a new human rights education curriculum and materials, successfully trained staff in the curriculum and materials, successfully delivered courses)? If not, why?

• Were the required resources for the program clearly defined (e.g., technical assistance, purchase of materials) and appropriate? If not, why? What actions were taken to address problems that might have arisen?

• How well was the project managed? If management problems arose, what actions were taken to address them?

• Did project activities take place on schedule (e.g., development of materials, design of curriculum, design and/or delivery of courses, radio/TV spots)? If there were delays, what caused them? What actions were taken to correct them?

• Did the project have the desired impact (e.g., did it result in changes in knowledge, attitudes, and practices of teachers and/or students in the human rights arena)? If not, why? Did the project have any unintended impacts?

• Is the project replicable and/or sustainable? Was it cost-effective?

• What were the lessons learned, both for others who might want to reproduce or adapt your project and for expanding the project to other sites?


QUESTION 5:

What are some tools for answering your evaluation questions?

An evaluator, like any expert, should have a "Tool Kit" containing a wide variety of evaluation tools ranging from highly quantitative to highly qualitative. The trick is to decide which tools are most appropriate, given the questions asked and the audience's information needs.

Below is a list of commonly used evaluation tools. Following each, for illustrative purposes, are circumstances in which you might want to make use of that tool.

Some of the More Commonly Used Evaluation Tools and When to Use Them

A. Structured questionnaires and interviews

When to use:

• When you have specific information you want to obtain and know what your questions are (e.g., you want to get feedback on what trainees thought of a training course; you want to find out how participants used what they learned).

B. Interviews (semi-structured, open-ended)

When to use:

• When you want to get at how the program has impacted an individual in terms of changes in attitudes and self-perception (e.g., participant in a human rights training program; someone the trainee has in turn trained). Semi-structured and open-ended interviews are especially useful in this context.

• When you want to identify unintended results that you may not have anticipated, and thus did not look for, in a structured interview or questionnaire (e.g., personal impacts, what participants have done with the training).

C. Tests

When to use:

• At the beginning and end of a training course to assess what participants have learned or measure changes in attitudes.

D. Observation

When to use:

• In a classroom, to see whether a trained teacher is appropriately integrating human rights into teaching and classroom management practices.

• When a human rights training program is being piloted to assess how participants are reacting and interacting or to what extent participants understand and use the methodology and materials.

• With a pre-established observation checklist to make sure you are observing aspects of specific interest.

E. Case studies

When to use:

• When you want to see what people do with the training in the context in which they are working, and you want the flexibility to follow trails and/or examine the individual or program you are assessing within a broader cultural context.

F. Group interviews or focus groups

When to use:

• When you lack the time and/or resources to conduct individual interviews.

• When the information you want to obtain would be enriched by having the people interviewed interact with and listen to one another (e.g., What did participants think of the training program they attended? How did they use what they learned? How did it impact them and their communities?).

• When you want to enrich the data obtained through individual interviews or to test out results from individual interviews with a larger group of individuals to see if you obtain similar responses.

G. Project records

When to use:

• When you want to collect basic information that is already available in project records (e.g., how many people were trained and what their characteristics were, when the training took place, how many and what types of materials were distributed, how much they cost).

When, how, and in what combination these tools are to be used depends on the questions you are asking, as well as the time and resources available to carry out the evaluation.

For some examples of different evaluation scenarios and combinations of tools to be used, see Part VI, "Three Evaluation Scenarios," p. 150. For further information on evaluation tools, please see Evaluation in the Human Rights Education Field: Getting Started by Felisa Tibbitts.2


QUESTION 6:

How do we go about selecting an evaluator?

You have decided you need an external evaluator. You have identified your audiences, you have received their endorsement of the evaluation, and with them you have begun to identify the key questions and the uses for the evaluation. How do you find an evaluator who suits your needs?

The first thing you need to do is develop a profile of the kind of skills your evaluator should have:

• Is this evaluation going to require someone with strong quantitative skills (e.g., are you going to have to select a random sample and when you have the data, do statistical tabulations)? Or given the nature of your questions, are you looking for someone with strong qualitative skills? Perhaps you need someone with both.

• How important is it that the person has extensive knowledge and/or background in human rights generally and specifically in the program being evaluated?

• How closely will you want the evaluator to work with you and your staff (e.g., do you want to have him or her work independently or as a member of a team that includes some in-house staff)?

• Do you have a clear idea of the evaluation design and just want someone to implement it? Or are you looking for an evaluator who will help you and your audiences fine-tune the evaluation questions and suggest the most appropriate methodology for answering those questions?

• Do you want to hand the job to the evaluator and let him or her "run with it," or do you want to be directly involved throughout the process? If the latter, what does this mean for the kind of person you want to bring in as an evaluator?

Once you have the profile of your evaluator in mind and understand the type of involvement you want in the design and conduct of the evaluation, the next step is to reach out to other organizations in your community that have recently carried out evaluations. They do not necessarily need to be involved in human rights education. Ask them about their experiences with their evaluators. These inquiries may generate the names of people you would like to interview, as well as references for them.

Another way to identify evaluators is to contact organizations in your area that specialize in educational evaluation. If you can describe the kind of evaluator you are seeking, they may be able to come up with some names.

Most important is selecting someone with whom you are comfortable, and not just because he or she has the requisite evaluation skills. Ultimately you need to select someone that you feel will listen to you, someone who will attempt to accommodate your needs, and if you decide to combine the efforts of an outside evaluator with people working in-house, someone who is a real team player.


QUESTION 7:

What are some special challenges/opportunities for designing, implementing, and using evaluations in human rights?

Unlike more objective subjects such as mathematics and science, human rights education necessarily seeks to impart more than knowledge and the tools to apply that knowledge. It also involves addressing the core human rights values of respect, dignity, and tolerance, as well as recognizing that while we are all different, we are equal. Human rights education further requires that these lessons be learned not only intellectually, but also personally, by taking action to live them in our classrooms, homes, and communities. These are the values that underlie the Universal Declaration of Human Rights.

Furthermore, education for human rights means working with individuals, both neophytes and seasoned activists, who come from different backgrounds and life experiences and who may be applying what they learn in human rights in different ways, depending on the community needs and their particular interests.

Measuring whether these concepts are well understood and applied, especially when evaluating impacts, raises a number of difficult questions: How can you ascertain that people are treating each other with respect as a result of a course in human rights? How can you determine whether participants in a course have grown in their feelings of self-worth? Or in their understanding of what to do when their rights are violated?

While "standard" evaluation methodologies such as surveys, questionnaires, and tests of knowledge are particularly good for seeing what people have learned. You may find that you need to exercise more creativity if a key objective of your evaluation is to see whether, as a result of a human rights education program, there have been changes in people's lives. In such cases, more qualitative instruments, like case studies and open-ended interview observations, might be appropriate.


QUESTION 8:

How do I use evaluation results in making decisions about my program?

Finally, you should keep in mind that data from evaluations are typically only one of several sources of information used by an organization in making decisions. There will inevitably be some considerations of a "political" nature (e.g., How will influential people in the community receive the decisions based on the evaluation data? Will taking this decision unnecessarily alienate some members of the community?). And inevitably certain individuals in the audience will have their preconceived notions challenged and be made to feel insecure.

However, if the evaluation touches on the right questions, if the information has been packaged in a way that decision makers can readily use it to make their decisions, and if the audience has "bought into" the evaluation from the start, the chances are good that the evaluation will be put to good use.

Once you have completed the evaluation, the challenge is to bring the information to the attention of the decision-makers when they make the decisions. There is no substitute for being in the right place, at the right time, with the right information (e.g., figuring out when the decision(s) that the evaluation is to inform will be made, and getting the evaluation report into the hands of the right people so that they can actually use it). An ideal way of accomplishing this task is to get a briefing on the evaluation results onto the agenda of the meeting where the decision is to be made and to make sure that, before the meeting, the decision-makers have a packet that summarizes the results and their implications for the decision(s) to be taken.

In other words, having completed the evaluation, you are now the advocate bringing to your decision-makers the information that you want them to use in making their decision. But keep in mind that in reality evaluation results are often only one important source of information among several that will be used in making a given decision.

 

Sample Case: How Evaluation Results Were Brought to Bear on the Decision-Making Process

The YWCA in a major US city had just completed a pilot community education program on the UN Convention on the Rights of the Child. The program built on the parent-child relationship by bringing parents and their pre-school children together for twelve two-hour sessions on different topics. Each session included parent-child activities as well as periods when adults focused on the day's topic with a parent educator while, in another room, an early-childhood educator helped the children practice skills for living in a democracy. For example, in the session on the child's right to a name and nationality, family groups made flags of the places their ancestors came from, sorted different kinds of rice, pasta, or beans, constructed American flags, and discussed the origin of the child's name. In their separate period, parents discussed the origins of their own names; the meanings, values, and conflicts of nationality and collective identity; and the impact of these on their children. Meanwhile the children also engaged in activities involving their names, countries of origin, and the colors and symbols of the American flag.3

The YWCA wondered whether the program, which was based in a mostly European-American suburb, might work in other settings and other parts of the country. The program also drew the attention of state authorities, who admired the citizenship, problem solving, and critical thinking skills the program seemed to impart, as well as its emphasis on empowerment and responsibility. They wanted to know if this program would be appropriate for use with immigrant and Native American communities in the state.

To conduct the evaluation, the YWCA and the state selected an external evaluator who was herself a Native American. The evaluator observed the weekly sessions, interviewed parents and staff to find out what they thought of the program, and conducted a pre-test/post-test to assess changes in knowledge, attitudes, and practices on the part of the parents regarding child health and human rights. She also brought in representatives from nearby Native American and immigrant communities to review the materials, observe the sessions, and comment on what, if anything, might need to be adapted were the program to be implemented in their communities.

The evaluator found that both the parents and their pre-school children were enthusiastic about the program. Parents especially appreciated getting to know other families with young children. The evaluator also found that parents had learned a great deal and seemed to be applying what they were learning at home. Nevertheless, she and the fellow Native Americans she had brought in concluded that a number of elements would need to be adjusted were the program to be used in their communities, especially those located in rural areas. Likewise, the observers from immigrant communities felt that the program needed to reflect both the culture and the immigrant status of their respective communities. The evaluator thus recommended that, before an expanded program was implemented, educators in the Native American and immigrant communities work with the YWCA program developers to adapt the existing curriculum to the cultures of their respective groups and train members of their communities to run the programs.

The evaluator, knowing that neither the YWCA board nor the state authorities who commissioned the evaluation had a lot of time to focus on the results, prepared short reports in which she presented the results in bullets complemented by some testimonies from parents. Each report was tailored to answer the similar but slightly different questions of the two institutions. At the end of each report she provided some concrete suggestions for adapting the curriculum and materials.

The reports were very well received by the YWCA and the state authorities, both of which accepted the evaluator's recommendations in their entirety and agreed to work together to carry them out.


1 Michael Quinn Patton, Utilization-Focused Evaluation (Thousand Oaks, California: Sage Publications, 1997) 234-5.
2 Felisa Tibbitts, Evaluation in the Human Rights Education Field: Getting Started (The Hague: Netherlands Helsinki Committee, 1997). Available on-line at www.hrea.org.
3 This fictionalized case is based on a real curriculum: Lori DuPont, Joanne Foley, and Annette Gagliardi, Raising Children with Roots, Rights & Responsibilities: Celebrating the UN Convention on the Rights of the Child (Minneapolis: Human Rights Resource Center, 1999).