Empowerment evaluation is a stakeholder involvement approach designed to provide groups with the tools and knowledge they need to monitor and evaluate their own performance and accomplish their goals.
Empowerment evaluation focuses on fostering self-determination and sustainability. It is particularly suited to the evaluation of comprehensive community-based or place-based initiatives.
Empowerment evaluation is the use of evaluation concepts, techniques, and findings to foster improvement and self-determination (Fetterman, 1994a). An expanded definition is:
Empowerment evaluation is an evaluation approach that “aims to increase the probability of achieving program success by (1) providing program stakeholders with the tools for assessing the planning, implementation, and self-evaluation of their program, and (2) mainstreaming evaluation as part of the planning and management of the program/organization” (Wandersman et al., 2005).
Empowerment evaluation was introduced to the field of evaluation in 1993 (Fetterman, 1993). It has been used in remote Amazonian regions as well as the corporate offices of Hewlett-Packard in Silicon Valley. Empowerment evaluation has been used by NASA/Jet Propulsion Laboratory to educate youth about the prototype Mars Rover, by townships in South Africa to create sustainable community health initiatives, by the U.S. Department of Education’s Office of Special Education and Rehabilitation Services to foster self-determination, and with Native American tribes to build technological and economic infrastructures on reservations. It has also been used to address academic distress (Fetterman, 2005), accreditation in higher education (Fetterman, 2001; Fetterman, 2012; Fetterman, 2014), minority tobacco prevention (Fetterman, 2005; Fetterman, 2014), and medical education (Fetterman, 2009; Fetterman, Deitz, and Gesundheit, 2010). Empowerment evaluation is international in scope, applied in more than 16 countries. A sample of the literature situating empowerment evaluation in the field of evaluation is included at the end of this page.
Empowerment evaluation in practice is typically applied along two streams: the first practical, the second transformative. Practical empowerment evaluation is similar to formative evaluation, designed to enhance program performance and productivity. It is still controlled by program staff, participants, and community members; however, the focus is on practical problem solving, as well as programmatic improvements and outcomes.
Transformative empowerment evaluation (Fetterman, 2015) highlights the psychological, social, and political power of liberation. People learn how to take greater control of their own lives and the resources around them. The focus in transformative empowerment evaluation is on liberation from pre-determined, conventional roles and organizational structures or “ways of doing things.” In addition, empowerment is a more explicit and apparent goal.
A number of theories guide empowerment evaluation:
Empowerment theory focuses on gaining control of resources in one’s environment. It also provides a guide for the role of the empowerment evaluator.
Self-determination theory highlights specific mechanisms or behaviors that enable the actualization of empowerment.
Process use cultivates ownership by placing the approach in community and staff members’ hands.
Theories of use and action explain how empowerment evaluation helps people “walk their talk” and produce desired results.
These theories are described in more detail below.
Empowerment theory is about gaining control, obtaining resources, and understanding one’s social environment. It is also about problem solving, leadership, and decision making. It operates on many levels, and distinguishing between empowering processes and outcomes is critical. According to Zimmerman (2000):
The process is empowering if it helps people develop skills so they can become independent problem solvers and decision makers. Empowering processes will vary across levels of analysis. For example, empowering processes for individuals might include organizational or community involvement, empowering processes at the organizational level might include shared leadership and decision making, and empowering processes at the community level might include accessible government, media, and other community resources.
Empowerment theory processes contribute to specific outcomes. Linking the processes to outcomes helps groups specify their chain of reasoning. Zimmerman (2000) provides additional insight into the outcome level of analysis to further explicate empowerment theory:
Empowerment outcomes refer to operationalization of empowerment so we can study the consequences of citizen attempts to gain greater control in their community or the effects of interventions designed to empower participants. Empowered outcomes also differ across levels of analysis. When we are concerned with individuals, outcomes might include situation-specific perceived control, skills, and proactive behaviors. When we are studying organizations, outcomes might include organizational networks, effective resource acquisition, and policy leverage. When we are concerned with community level empowerment, outcomes might include evidence of pluralism, the existence of organizational coalitions, and accessible community resources.
Zimmerman’s (2000) characterization of the community psychologist’s role in empowerment activities is similar to the role of the empowerment evaluator.
An empowerment approach to intervention design, implementation, and evaluation redefines the professional’s role relationship with the target population. The professional’s role becomes one of collaborator and facilitator rather than expert and counselor. As collaborators, professionals learn about the participants through their cultures, their worldviews, and their life struggles. The professional works with participants instead of advocating for them. The professional’s skills, interest, or plans are not imposed on the community; rather, professionals become a resource for a community. This role relationship suggests that what professionals do will depend on the particular place and people with whom they are working, rather than on the technologies that are predetermined to be applied in all situations.
Additional literature on empowerment theory is provided by Zimmerman (2000); Zimmerman, Israel, Schulz, and Checkoway (1992); Zimmerman and Rappaport (1988); and Dunst, Trivette, and LaPointe (1992).
Self-determination is defined as the ability to chart one’s own course in life. Dennis Mithaug (1991, 1993), whose extensive work on the topic focuses on individuals with disabilities, typically refers to this area as self-regulation theory, with self-determination as a guiding concept. For clarity, and as it relates to instructing empowerment evaluation, self-determination is used as the umbrella term in this discussion. Self-determination consists of numerous interconnected capabilities, such as the ability to identify and express needs; establish goals or expectations and a plan of action to achieve them; identify resources; make rational choices among alternative courses of action; take appropriate steps to pursue objectives; evaluate short- and long-term results, including reassessing plans and expectations and taking necessary detours; and persist in the pursuit of those goals. A breakdown at any juncture of this network of capabilities, as well as various environmental factors, can reduce a person’s likelihood of being self-determined (see also Bandura, 1982, for more detail on issues related to self-efficacy and self-determination). Fetterman and Mithaug’s Department of Education work on self-determination and individuals with disabilities provided additional clarity to the concept. Self-determination mechanisms help program staff members and participants implement an empowerment evaluation.
Process use assumes that the more people conduct their own evaluations, the more they own them. The greater the sense of ownership, the more likely people are to consider their findings credible and act on their own recommendations. Empowerment evaluation places evaluation in the hands of community and staff members to facilitate ownership, enhance credibility, and promote action. In addition, a byproduct of this experience is that people learn to think evaluatively (Patton, 2002). This makes them more likely to make decisions and take actions based on their evaluation data.
Theories of Action and Use
Theories that enable comparisons between action and use are essential. Empowerment evaluation relies on the reciprocal relationship between theories of action and use at every step in the process.
A theory of action is the espoused operating theory about how a program or organization works. This theory of action is compared with a theory of use. The theory of use is the actual program reality, the observable behavior of stakeholders (see Argyris & Schon, 1978; Patton, 1997b).
People engaged in empowerment evaluations create a theory of action and test it against the existing theory of use. Because empowerment evaluation is an ongoing and iterative process, stakeholders test their theories of action against theories of use during various microcycles to determine whether their strategies are being implemented as recommended or designed. These theories are used to identify gross differences between the ideal and the real. For example, communities of empowerment evaluation practice compare their theory of action with their theory of use to determine whether they are even pointing in the same direction. Three common patterns emerge from this comparison: in alignment, out of alignment, and alignment in conflict. In alignment is when the two theories are parallel or pointed in the same direction. The alignment may be distant or close, but the two theories are on the same general track. Out of alignment occurs when actual practice diverges from the espoused theory of how things are supposed to work. The theory of use is not simply distant from or closely aligned with the theory of action, but actually off target or at least pointed in another direction. Alignment in conflict occurs when the theories of action and use are pointed in diametrically opposite directions. This signals a group or organization in serious trouble or self-denial.
After making the first-level comparison, a gross indicator, to determine whether the theories of action and use are even remotely related to each other, communities of empowerment evaluation practice compare their theory of action with their theory of use in an effort to reduce the gap between them. This assumes they are at least pointed in the same direction. The ideal progression is from distant alignment to close alignment between the two theories. This is the conceptual space where most communities of empowerment evaluation practice strive to accomplish their goals as they close the gap between the theories. The process of empowerment embraces the tension between the two types of theories and offers a means for reconciling incongruities.
Empowerment evaluation is guided by 10 specific principles (Fetterman and Wandersman, 2005, pp. 1-2, 27-41, 42-72). They include:
Improvement – empowerment evaluation is designed to help people improve program performance; it is designed to help people build on their successes and re-evaluate areas meriting attention
Community ownership – empowerment evaluation values and facilitates community control; use and sustainability are dependent on a sense of ownership
Inclusion – empowerment evaluation invites involvement, participation, and diversity; contributions come from all levels and walks of life
Democratic participation – participation and decision making should be open and fair
Social justice – evaluation can and should be used to address social inequities in society
Community knowledge – empowerment evaluation respects and values community knowledge
Evidence-based strategies – empowerment evaluation respects and uses the knowledge base of scholars (in conjunction with community knowledge)
Capacity building – empowerment evaluation is designed to enhance stakeholders’ ability to conduct an evaluation and to improve program planning and implementation
Organizational learning – data should be used to evaluate new practices, inform decision making, and implement program practices; empowerment evaluation is used to help organizations learn from their experience (building on successes, learning from mistakes, and making mid-course corrections)
Accountability – empowerment evaluation is focused on outcomes and accountability; empowerment evaluation functions within the context of existing policies, standards, and measures of accountability; empowerment evaluations ask: did the program accomplish its objectives?
Empowerment evaluation principles help evaluators and community members make decisions that are in alignment with the larger purpose or goals associated with capacity building and self-determination. The principle of inclusion, for example, reminds evaluators and community members to include rather than exclude members of the community, even though fiscal, logistic, and personality factors might suggest otherwise. The capacity building principle reminds the evaluator to provide community members with the opportunity to collect their own data, even though it might initially be faster and easier for the evaluator to collect the same information. The accountability principle guides community members to hold one another accountable. It also situates the evaluation within the context of external requirements and credible results or outcomes. (See Fetterman (2005), p. 2.)
Key concepts guiding empowerment evaluation include:
Critical friend: A critical friend is an evaluator who facilitates the process and steps of empowerment evaluation. They believe in the purpose of the program but provide constructive feedback. They help to ensure the evaluation remains organized, rigorous, and honest.
Culture of evidence: Empowerment evaluators help cultivate a culture of evidence by asking people why they believe what they believe. Community members and program participants are asked for evidence or documentation at every stage so that it becomes normal and expected to have data to support one’s opinions and views.
Cycles of reflection and action: These involve ongoing phases of analysis, decision making, and implementation (based on evaluation findings). It is a cyclical process. Programs are dynamic, not static, and require continual feedback as they change and evolve. Empowerment evaluation is successful when it is institutionalized and becomes a normal part of the planning and management of the program.
Community of learners: Empowerment evaluation is driven by a group process. The group learns from each other, serving as their own peer review group, critical friend, resource, and norming mechanism. Individual members of the group hold each other accountable concerning progress toward stated goals.
Reflective practitioners: Reflective practitioners use data to inform their decisions and actions concerning their own daily activities. This produces a self-aware and self-actualized individual who has the capacity to apply this world-view to all aspects of their life. As individuals develop and enhance their own capacity, they improve the quality of the group’s exchange, deliberation, and action plans.
There are many ways in which to implement an empowerment evaluation. In fact, empowerment evaluation has accumulated a warehouse of useful tools. The three-step (Fetterman, 2001) and ten-step (Chinman, Imm, and Wandersman, 2004) approaches to empowerment evaluation are the most popular tools in the collection. The three-step approach includes helping a group: 1) establish their mission; 2) take stock of their current status; and 3) plan for the future. The popularity of this particular approach is in part a result of its simplicity, effectiveness, and transparency.
1) Establishing the mission
The group comes to a consensus concerning their mission or values. This gives them a shared vision of what’s important to them and where they want to go. The empowerment evaluator facilitates this process by asking participants to generate statements that reflect their mission. These phrases are recorded on a poster sheet of paper (and may be projected using an LCD projector, depending on the technology available). These phrases are used to draft a mission statement (crafted by a member of the group and the empowerment evaluator). The draft is circulated among the group. They are asked to “approve” it and/or suggest specific changes in wording as needed. A consensus about the mission statement helps the group think clearly about their self-assessment and plans for the future. It anchors the group in common values.
2) Taking stock
After coming to a consensus about the mission, the group evaluates their efforts (within the context of a set of shared values).
First, the empowerment evaluator helps members of the group generate a list of the most important activities required to accomplish organizational or programmatic goals. The empowerment evaluator gives each participant five dot stickers, and asks the participants to place them by the activities they think are the most important to accomplish programmatic and organizational goals (and thus the most important to evaluate as a group from that point on). They can put one sticker on five different activities or all five on one activity if they are concerned that activity will not get enough votes. The top 10 items with the most dots represent the results of the prioritization part of taking stock.
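The prioritization step amounts to a simple tally of dots per activity. A minimal sketch, using hypothetical activities and vote counts:

```python
# Dot-sticker prioritization: each participant distributes five dots
# across the candidate activities (hypothetical tallies below).
dot_votes = {
    "communication": 9,
    "fundraising": 4,
    "outreach": 7,
    "training": 2,
    "recruitment": 5,
}

# Sort activities by dot count; with a longer candidate list, the top 10
# with the most dots would carry forward to the rating step.
prioritized = sorted(dot_votes, key=dot_votes.get, reverse=True)
print(prioritized)
```

The activity names and counts are illustrative only; in practice the list comes from the group's own brainstorming.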
The 10 activities represent the heart of part two of taking stock: rating. The empowerment evaluator asks participants in the group to rate how well they are doing on each of the selected activities, using a 1 (low) to 10 (high) scale. The ratings are averaged both vertically and horizontally. Vertically, the group can see who is typically optimistic and/or pessimistic. This helps the group calibrate or evaluate the ratings and opinions of each individual member. It helps the group establish norms. Horizontally, the averages provide the group with a consolidated view of how well (or poorly) things are going. The empowerment evaluator facilitates a discussion and dialogue about the ratings, asking participants why they gave a certain activity a 3 or a 7.
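The rating matrix and its two sets of averages can be sketched in a few lines. The participants, activities, and scores below are hypothetical:

```python
# "Taking stock" rating matrix: rows are activities, columns are
# participants; ratings use the 1 (low) to 10 (high) scale.
ratings = {
    "communication": {"Ana": 3, "Ben": 5, "Carla": 4},
    "fundraising":   {"Ana": 7, "Ben": 8, "Carla": 6},
    "outreach":      {"Ana": 5, "Ben": 9, "Carla": 5},
}
participants = ["Ana", "Ben", "Carla"]

# Horizontal averages: a consolidated view of how each activity is doing.
activity_avgs = {
    activity: sum(scores.values()) / len(scores)
    for activity, scores in ratings.items()
}

# Vertical averages: who tends to rate optimistically or pessimistically,
# which helps the group calibrate individual ratings.
participant_avgs = {
    p: sum(ratings[a][p] for a in ratings) / len(ratings)
    for p in participants
}

# Listing activities from lowest to highest average surfaces the areas
# most in need of dialogue and, later, planning for the future.
for activity, avg in sorted(activity_avgs.items(), key=lambda kv: kv[1]):
    print(f"{activity}: {avg:.1f}")
```

The averages are only the starting point; as noted above, the dialogue about why a rating was given is the most important part of the process.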
The dialogue about the ratings is one of the most important parts of the process. In addition to clarifying issues, evidence is used to support viewpoints and “sacred cows” are surfaced and examined during dialogue. Moreover, the process of specifying the reason or evidence for a rating provides the group with a more efficient and focused manner of identifying what needs to be done next, during the planning for the future step of the process. Instead of generating an unwieldy list of strategies and solutions that may or may not be relevant to the issues at hand, the group can focus its energies on the specific concerns and reasons for a low rating that were raised in the dialogue or exchange.
3) Planning for the future
Many evaluations conclude at the taking stock phase. However, taking stock is a baseline and a launching point for the rest of the evaluation. After rating and discussing programmatic activities, it is important to do something about the findings. It is time to plan for the future. This step involves generating goals, strategies, and credible evidence (to determine whether the strategies are being implemented and whether they are effective). The goals are directly related to the activities selected in the taking stock step. For example, if communication was selected, rated, and discussed, then communication (or improving communication) should be one of the goals. The strategies emerge from the taking stock discussion as well, as noted earlier. For example, if communication received a low rating and one of the reasons was that the group never had agendas for its meetings, then preparing agendas might become a recommended strategy in the planning for the future exercise.
Monitoring the strategies
Many programs, projects, and evaluations fail at this stage for lack of individual and group accountability. Individuals who spoke eloquently and/or emotionally about a certain topic should be asked to volunteer to lead specific task forces to respond to identified problems or concerns. They do not have to complete the task. However, they are responsible for taking the lead in a circumscribed area (a specific goal) and reporting the status of the effort periodically at ongoing management meetings. Similarly, the group should make a commitment to reviewing the status of these new strategies as a group (and be willing to make mid-course corrections if they are not working). Conventional and innovative evaluation tools are used to monitor the strategies, including online surveys, focus groups, and interviews, as well as quasi-experimental designs (if appropriate). In addition, program-specific metrics are developed, using baselines, benchmarks or milestones, and goals (as deemed useful and appropriate). For example, a minority tobacco prevention program empowerment evaluation in Arkansas has established:
Baselines (the number of people using tobacco in their community)
Goals (the number of people they plan to help stop using tobacco by the end of the year)
Benchmarks or Milestones (the number of people they expect to help stop using tobacco each month)
Actual Performance (they record the number of people they help to stop using tobacco and compare their figures with their goals and benchmarks to determine if they are making progress or need assistance)
These metrics are used to help a community monitor program implementation efforts and enable program staff and community members to make mid-course corrections and replace ineffective strategies with potentially more effective ones as needed. These data are also invaluable when the group conducts a second taking stock exercise (3 to 6 months later) to determine whether they are making progress toward their desired goals and objectives. Additional metrics enable community members to compare, for example, their baseline assessments with their benchmarks or expected points of progress, as well as their goals. The ten-step approach is another useful tool, described in detail in Chinman, Imm, and Wandersman (2004).
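The baseline/goal/benchmark comparison above is simple arithmetic and can be sketched as follows. All numbers here are hypothetical, not figures from the Arkansas program:

```python
# Monitoring metrics for a tobacco-cessation effort (hypothetical numbers).
baseline_users = 1200                 # people using tobacco in the community
annual_goal = 120                     # people helped to quit by year's end
monthly_benchmark = annual_goal / 12  # expected progress each month

# Actual performance recorded month by month (January through April).
actual_quits = [8, 11, 9, 13]

# Compare cumulative actuals against the cumulative benchmark to see
# whether the group is making progress or needs assistance.
for month in range(1, len(actual_quits) + 1):
    expected = monthly_benchmark * month
    cumulative = sum(actual_quits[:month])
    status = "on track" if cumulative >= expected else "needs assistance"
    print(f"Month {month}: {cumulative} helped vs. benchmark {expected:.0f} ({status})")
```

A tally like this feeds directly into the second taking stock exercise, where actual performance is compared against baselines, benchmarks, and goals.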
Issues in empowerment evaluation
A sample of issues concerning the use of empowerment evaluation is presented below to provide additional conceptual clarity. They range from the locus of control to the audience for empowerment evaluation and are briefly discussed below.
Empowerment. People empower themselves. A common misconception about empowerment evaluation is that it empowers either individuals or groups. Empowerment evaluation can’t empower anyone. Empowerment evaluation simply provides the tools and environment conducive to empowering oneself.
Objectivity and Advocacy. Empowerment evaluation is transparent, brings bias to the surface, and generates meaningful data to inform decision making. These findings are used by staff, community members, and other relevant parties to advocate for their programs or communities as data merit.
Consumer Focus. Consumers (community members, program staff, and participants) are the driving force or focus in empowerment evaluation. However, evaluators and donors remain an integral part of empowerment evaluation.
Internal versus External Evaluation. Empowerment evaluation (internal evaluation) and traditional forms of evaluation (typically external evaluation) can be mutually reinforcing. They are not mutually exclusive. However, external evaluations should be rooted in internal concerns otherwise they may divert program staff, participants, and resources from the most relevant issues given the organization’s stage of development.
Purpose. Empowerment evaluation’s most significant contribution is to program or community development. However, it makes a strong contribution to accountability by cultivating internal accountability. This contribution remains long after an episodic and often anticipated external examination.
Bias. Groups conducting empowerment evaluations are typically more critical of their own programs than external examiners (who are often influenced by their interest in returning for a follow-up or extended engagement). Empowerment evaluations provide people with a window of opportunity to address long-standing issues of dysfunction and inefficiency in their own organizations. In addition, the process is inclusive and transparent, open to critique and review, and makes it difficult to keep people from publicly “speaking their truth”.
Outcomes. Empowerment evaluations are highly collaborative and participatory in nature. However, the bottom line remains: did you accomplish the desired results? Empowerment evaluations are conducted within the context of what people are already being held accountable for in their communities or workplaces. This makes the entire process more credible and authentic.
Audience. Empowerment evaluators believe that everyone can benefit from being more empowered. Although much of empowerment evaluation’s focus has been on disenfranchised populations, empowerment evaluators assume all people can benefit from taking greater control of their lives, particularly from a positive psychological growth perspective.
Additional points providing greater conceptual clarity, as well as further discussion of methodological specificity and outcomes, are presented in detail in the literature (Fetterman, 2001; Fetterman and Wandersman, 2005; Fetterman and Wandersman, 2007). Christie (2003) has also identified a unique contribution empowerment evaluation has made to the field and shown how to differentiate it from similar approaches (see also Fetterman, 2003b).
- David Fetterman - AEA Ignite
AEA Ignite talk about using an empowerment evaluation engine to race towards social justice at the 2011 annual AEA conference in Anaheim, CA. [Video]
- Empowerment Evaluation: Knowledge and Tools for Self-Assessment, Evaluation Capacity Building, and Accountability [2nd …
This Second Edition celebrates 21 years of the practice of empowerment evaluation and includes evaluators from academia, government, nonprofits, and foundations assessing how empowerment evaluation has been used in practice since the publication of the landmark 1996 edition.
- Empowerment Evaluation Principles in Practice
In this book, David Fetterman and Abraham Wandersman present the most current formulation of the 10 principles of EE and provides professionals and students with the tools to put these principles into practice.
- Foundations of Empowerment Evaluation
In this book, David Fetterman explores its background and theory and goes on to present the three steps of empowerment evaluation: establishing a mission statement about a program; taking stock; and charting a course for the future, while using case studies to highlight these steps in practice.
- Empowerment Evaluation: A Collaborative Approach to Evaluating and Transforming a Medical School Curriculum
This article describes the use of empowerment evaluation to help Stanford University’s School of Medicine prepare for and pass an accreditation review.
- Empowerment evaluation at the Stanford University School of Medicine: Using a Critical Friend to Improve the Clerkship …
This article highlights one of the most important features of an empowerment evaluation: a critical friend. While examining this large-scale, multi-site case, Fetterman highlights the potential for empowerment evaluation to build local capacity and sustain improvements within communities.
- Empowerment Evaluation in the Digital Villages: Hewlett-Packard's $15 Million Race Toward Social Justice
This book analyzes a $15 million community change initiative designed to bridge the digital divide in East Palo Alto, East Baltimore, and San Diego.
- Empowerment Evaluation in the Digital Villages
This article in the Stanford Social Innovations Review is a compilation of excerpts from the book of the same name. [Above]
- CSONIC Empowerment Evaluation - NSF Sponsored Initiative
This video shows an empowerment evaluation exercise used to facilitate an NSF-funded computer science education evaluation initiative. It was used specifically to help participants assess their efforts and move them forward, including building a repository of STEM and CS evaluation tools and instruments. [Video]
- Making It Count: LAUC-B 2013 Conference. Closing Keynote
This keynote speech from David Fetterman, given to the Librarians Association of the University of California, Berkeley, provides a number of case examples on the use of Empowerment Evaluation. [Video]
- Transformative Empowerment Evaluation and Freireian Pedagogy: Alignment with an Emancipatory Tradition
This article by David Fetterman discusses how empowerment evaluation and Freirean pedagogy share a common emancipatory tradition in that these approaches help people learn to confront the status quo, by questioning assumptions and prescribed roles, unpacking myths, rejecting dehumanization, and no longer blindly accepting the “truth” about how things are or can be.
- Empowerment Evaluation: Yesterday, Today, and Tomorrow
A highly attended American Evaluation Association conference panel, titled “Empowerment Evaluation and Traditional Evaluation: 10 Years Later,” provided an opportunity to reflect on the evolution of empowerment evaluation.
- Empowerment Evaluation's 21st Anniversary: A Celebration, Comment & Critique
This is a special topic edition of Evaluation and Program Planning. It includes prominent critical friends’ comments on the 21st anniversary of empowerment evaluation.
- Empowerment evaluation
This Wikipedia entry gives a concise overview of empowerment evaluation.
Alkin, M. and Christie, C. (2004). An evaluation theory tree. In M. Alkin (Ed.), Evaluation roots: Tracing theorists’ views and influences (pp. 381-392). Thousand Oaks, CA: Sage.
Altman, D. (1997). Review of the book Empowerment Evaluation: Knowledge and Tools for Self-assessment and Accountability. Community Psychologist, 30(4), 16-17. Retrieved from http://www.davidfetterman.com/AltmanBookReview.htm (archived link)
Argyris, C., & Schon, D. A. (1978). Organizational learning: A theory of action perspective. Reading, MA: Addison-Wesley.
Bandura, A. (1982). Self-efficacy mechanism in human agency. American Psychologist, 37, 122-147.
Brown, J. (1997). Review of the book Empowerment Evaluation: Knowledge and Tools for Self-assessment and Accountability. Health Education & Behavior, 24(3), 388-391. Retrieved from: http://www.davidfetterman.com/BrownBookReview1.htm (archive link)
Chelimsky, E., & Shadish, W. R. (1997). Evaluation for the 21st century: A handbook. Thousand Oaks, CA: Sage.
Chinman, M., Imm, P., and Wandersman, A. (2004). Getting To Outcomes: Promoting Accountability Through Methods and Tools for Planning, Implementation, and Evaluation. Santa Monica, CA: RAND Corporation http://www.rand.org/pubs/technical_reports/TR101/
Christie, C. A. (2003, Spring). What guides evaluation? A study of how evaluation practice maps onto evaluation theory. In C. A. Christie (Ed.), The practice-theory relationship in evaluation (No. 97). San Francisco: Jossey-Bass.
Cousins, B. (2005). Will the real empowerment evaluation please stand up? A critical friend perspective. In Fetterman, D. M. and Wandersman, A. (2005). Empowerment evaluation principles in practice. (pp. 183-208). New York: Guilford.
Donaldson, S. (2005). [Review of the book Empowerment evaluation principles in practice]. Amazon: http://www.amazon.ca/Empowerment-Evaluation-Principles-Practice-Fetterma...
Dunst, C. J., Trivette, C. M., & LaPointe, N. (1992). Toward clarification of the meaning and key elements of empowerment.
Fetterman, D. M. (1982). Ibsen’s baths: Reactivity and insensitivity (A misapplication of the treatment-control design in a national evaluation). Educational Evaluation and Policy Analysis, 4(3), 261-279. Available at http://www.stanford.edu/~davidf/class/Ibsen.htm
Fetterman, D. M. (1984a). Ethnography in educational evaluation. Beverly Hills, CA: Sage.
Fetterman, D. M. (1984b). Guilty knowledge, dirty hands, and other ethical dilemmas: The hazards of contract research. In R. F. Conner (Ed.), Evaluation studies review annual (Vol. 9). Beverly Hills, CA: Sage.
Fetterman, D. M. (1986). Operational auditing in a teaching hospital: A cultural approach. Internal Auditor, 43(2), 48-54.
Fetterman, D. M. (Ed.). (1987). Perennial issues in qualitative research. Education and Urban Society, 20(1).
Fetterman, D. M. (Ed.). (1988). Qualitative approaches to evaluation in education: The silent scientific revolution. Albany: SUNY Press.
Fetterman, D. M. (1989). Ethnography: Step by step. Newbury Park, CA: Sage.
Fetterman, D. M. (1990). Ethnographic auditing. In W. G. Tierney (Ed.), Assessing academic climates and cultures: New directions for institutional research. San Francisco: Jossey-Bass.
Fetterman, D. M. (Ed.). (1991). Using qualitative research in institutional research. San Francisco: Jossey-Bass.
Fetterman, D. M. (1994a). Empowerment evaluation. Evaluation Practice, 15(1), 1-15.
Fetterman, D. M. (1994b). Ethnographic evaluation in education. In T. Husen and T. N. Postlethwaite (Eds.), The international encyclopedia of education. Oxford, England: Pergamon.
Fetterman, D. M. (1995). In response to Dr. Daniel Stufflebeam’s “Empowerment evaluation, objectivist evaluation, and evaluation standards: Where the future of evaluation should not go and where it needs to go” (October 1994, pp. 321-338). American Journal of Evaluation, 16, 179-199. Retrieved from http://www.davidfetterman.com/dfresponsetostufflebeam.pdf (archive link)
Fetterman, D. M. (1996a). Ethnography in the virtual classroom. Practicing Anthropology, 18(3), 2, 36-39.
Fetterman, D. M. (1996b). Videoconferencing on-line: Enhancing communication over the Internet. Educational Researcher, 25(4), 23-27.
Fetterman, D. M. (1997a). Empowerment evaluation: A response to Patton and Scriven. American Journal of Evaluation, 18, 253-266. Retrieved from http://www.davidfetterman.com/Fettermanresponsepattonscriven.pdf (archive link)
Fetterman, D. M. (1997b). Ethnography. In L. Bickman & Rog, D. (Eds.), Handbook of applied social research methods. Thousand Oaks, CA: Sage.
Fetterman, D. M. (1998a). Ethnography: Step by step. (2nd ed.). Thousand Oaks, CA: Sage.
Fetterman, D. M. (1998b). Teaching in the virtual classroom at Stanford University. The Technology Source.
Fetterman, D. M. (1998c). Webs of meaning: Computer and Internet resources for educational research and instruction. Educational Researcher, 27(3), 22-30.
Fetterman, D. M. (2001). Foundations of empowerment evaluation. Thousand Oaks, CA: Sage.
Fetterman, D. M. (2003a). Ethnography. In M. Lewis-Beck, A. Bryman, & T. Futing Liao (Eds.), Encyclopedia of social science research. Thousand Oaks, CA: Sage.
Fetterman, D. M. (2003b). Fetterman-House: A process use distinction and a theory. New Directions for Evaluation (Vol. 97). San Francisco: Jossey-Bass.
Fetterman, D. M. (2009). Empowerment evaluation at the Stanford University School of Medicine: Using a critical friend to improve the clerkship experience. Ensaio: Aval. Pol. Publ. Educ., Rio de Janeiro, 17(63), 197-204.
Fetterman, D.M. (2010). Ethnography: Step by Step (3rd edition). Thousand Oaks, CA: Sage.
Fetterman, D. M. (2011). Empowerment evaluation and accreditation case examples: California Institute of Integral Studies and Stanford University. In C. Secolsky (Ed.), Measurement and evaluation in higher education. London: Routledge.
Fetterman, D.M. (2013). Empowerment Evaluation: Learning to Think Like an Evaluator. In Alkin, M.C. (ed.) Evaluation Roots: a wider perspective of theorists’ views and influences (second edition). Thousand Oaks, CA: Sage.
Fetterman, D.M. (2015). Empowerment Evaluation and Action Research: A Convergence of Values, Principles, and Purpose. In Bradbury, H. (ed.) The Handbook of Action Research. Thousand Oaks, CA: Sage.
Fetterman, D. M., Deitz, J., & Gesundheit, N. (2010). Empowerment evaluation: A collaborative approach to evaluating and transforming a medical school curriculum. Academic Medicine, 85(5), 813-820.
Fetterman, D. M., Kaftarian, S., & Wandersman, A. (1996). Empowerment evaluation: Knowledge and tools for self-assessment and accountability. Thousand Oaks, CA: Sage.
Fetterman, D. M., & Pitman, M. A. (1986). Educational evaluation: Ethnography in theory, practice, and politics. Beverly Hills, CA: Sage.
Fetterman, D. M. and Wandersman, A. (2005). Empowerment evaluation principles in practice. New York: Guilford Publications.
Fetterman, D. M., & Wandersman, A. (2007). Empowerment evaluation: Yesterday, today, and tomorrow. American Journal of Evaluation, 28(2), 179-198.
House, E. R., & Howe, K. R. (Eds.). (2000). Deliberative democratic evaluation. New Directions for Evaluation (Vol. 85). San Francisco: Jossey-Bass.
Mithaug, D. E. (1991). Self-determined kids: Raising satisfied and successful children. New York: Macmillan.
Mithaug, D. E. (1993). Self-regulation theory: How optimal adjustment maximizes gain. New York: Praeger.
Patton, M. Q. (1980). Qualitative evaluation methods. Beverly Hills, CA: Sage.
Patton, M. Q. (1997a). Toward distinguishing empowerment evaluation and placing it in a larger context. Evaluation Practice, 15(3), 311-320. Available online at http://www.davidfetterman.com/pattonbkreview1997.pdf (no longer available)
Patton, M. Q. (1997b). Utilization-focused evaluation: The new century text (3rd ed.). Thousand Oaks, CA: Sage.
Patton, M. Q. (2002). Qualitative research and evaluation methods. Thousand Oaks, CA: Sage.
Patton, M. Q. (2005). Toward distinguishing empowerment evaluation and placing it in a larger context: Take two. American Journal of Evaluation, 26, 408-414.
Sechrest, L. (1997). [Review of the book Empowerment evaluation: Knowledge and tools for self-assessment and accountability.] Environment and Behavior, 29(3), 422-426. Retrieved from: http://www.davidfetterman.com/SechrestBookReview.htm (no longer available)
Scriven, M. (1997). Empowerment evaluation examined. Evaluation Practice, 18(2), 165-175. Available online at http://www.davidfetterman.com/scrivenbkreview1997.pdf (archive link)
Scriven, M. (2005). [Review of the book: Empowerment Evaluation Principles in Practice.] American Journal of Evaluation, 26(3), 415-417.
Stufflebeam, D. (1994). Empowerment evaluation, objectivist evaluation, and evaluation standards: Where the future of evaluation should not go and where it needs to go. Evaluation Practice, 15(3), 321-338. Retrieved from: http://www.davidfetterman.com/stufflebeambkreview.pdf (archive link)
Wandersman, A., Snell-Johns, J., Lentz, B., Fetterman, D. M., Keener, D. C., Livet, M., Imm, P. S., & Flaspohler, P. (2005). The principles of empowerment evaluation. In D. M. Fetterman & A. Wandersman (Eds.), Empowerment evaluation principles in practice (p. 27). New York: Guilford Publications.
Wild, T. (1997). [Review of Empowerment evaluation: Knowledge and tools for self-assessment and accountability.] Canadian Journal of Program Evaluation, 11(2), 170-172. Available online at http://www.davidfetterman.com/WildBookReview.htm (archive link)
Zimmerman, M. A. (2000). Empowerment theory: Psychological, organizational, and community levels of analysis. In J. Rappaport & E. Seidman (Eds.), Handbook of community psychology (pp. 2-45). New York: Kluwer Academic/Plenum.
Zimmerman, M. A., Israel, B. A., Schulz, A., & Checkoway, B. (1992). Further explorations in empowerment theory: An empirical analysis of psychological empowerment. American Journal of Community Psychology, 20(6), 707-727.
Zimmerman, M. A., & Rappaport, J. (1988). Citizen participation, perceived control, and psychological empowerment. American Journal of Community Psychology, 16(5), 725-750.