
Search results

  1. Prioritise self-determination, community agency and self-governance


    Empowerment Principle

    Aboriginal and Torres Strait Islander peoples have the right to self-determination and to be encouraged and empowered in decision-making processes. Evaluators must listen and advise to the benefit of communities above all else.



    This involves time, ongoing negotiation, consultation and informing custodians about the implications of participating in the evaluation.

    Seek what is important and what needs to be evaluated from the community – take a ground-up perspective to understand community priorities. Even top-down projects should include community-led evaluation, together with what funders want to know.

    Ensure you have an established relationship with a community before you commence the evaluation. If you do not have an established relationship, consider partnering with someone who does.

    Before commencing the evaluation, ensure communities have a full understanding of the intent of the evaluation and that their input is valued and welcomed. Discuss and identify how the evaluation will benefit them, including the ownership of data.

    Include community members in the co-design phase of the evaluation. Accept that you may need to return to the evaluation commissioner with a revised approach.

    In consultation with community members, choose the most appropriate method(s) to collect and/or retrieve data. For a list of examples refer to the BE Rainbow Framework [link here].

    Include community members in the collection and retrieval of data and analysis of numeric and textual data patterns.

    Build community capacity and if needed capability to engage in data collection in ways that are meaningful to the community.

    Diversity Principle

    Recognise the diversity and uniqueness of First Nations Cultures, Peoples and Individuals.



    Recognise the unique cultural identities of each Aboriginal and/or Torres Strait Islander community and their unique practices and processes. Take the time to look, listen and learn about the specific cultural context you are entering before commencing any evaluation.

    Recognition of individual communities requires explicitly sharing and explaining individual Indigenous cultural materials rather than perpetuating a homogeneous view of Aboriginal and/or Torres Strait Islander peoples.

    Stereotypes and quietly held beliefs require unpacking. Put aside assumptions about communities and their governance structures before and during the evaluation. Be aware that your cultural lens will impact on your understanding.

    Understand that there are different governance structures and roles within a community. At the outset of, and throughout the evaluation, make sure you are talking to the right person or people.

    When synthesising data from other evaluations, be aware that findings from other communities may not be relevant. Aboriginal and/or Torres Strait Islander communities represent over 300 different nations, with differences in language, culture and societal structure. Homogenising data can skew results.

    If utilising findings from different communities, validate this first with the community that is central to the evaluation.

    There may be opportunities to generalise findings across programs or sites within the community that is central to the evaluation. Ensure you test and validate this approach with the community. There may be different language groups and cultures represented within one site.

    Inclusion Principle

    Involve Aboriginal and/or Torres Strait Islander people in all levels of the evaluation, from the design phase right through to analysis and communicating findings.



    Build trust with participants by demonstrating to them the value and benefit of the information they will share as part of the evaluation.

    Prioritise participatory methods: they help achieve a higher ethical standard of inclusion and allow you to build relationships and trust with community members, particularly where multiple monitoring points provide an ongoing connection.

    Consider if there are alternative explanations for causes and ensure causal questions are directed to community members. They are the experts of their communities and may see something you have missed.

    When considering what may have happened without the program being evaluated, ensure that the community can critique any assumptions you have made. Do not use the word ‘intervention’ when questioning causal attribution, as it can have other connotations for community.

    [Diagram: key themes and barriers. The themes are equally important.]

    Key themes: prioritise self-determination, community agency and self-governance; communicate transparently, build trust and obtain individual and community consent; strengths-based recognition of cultures, acknowledging communities and individuals; share benefits and apply two-way learning; formalise accountability processes on ethical practice; facilitate control and data sovereignty.

    Barriers: limited scope from commissioner; power and privilege; limited cultural understanding; difficulty finding community protocols; time restrictions; no cultural mentor; no tools or templates; data gatekeeping; resource constraints.

    Overarching values: courage, integrity and cultural humility.


    We would like to acknowledge and thank Maria Stephens, an Arrabi/Binning woman who speaks the Iwaidja language. She generously provided her artwork for this page.

  2. Why do programs benefit from developing monitoring and evaluation frameworks?

    10th January, 2018

    This guest blog is by Anne Markiewicz, Director of Anne Markiewicz and Associates, a consultancy that specialises in developing Monitoring and Evaluation Frameworks. Anne is the co-author, with Ian Patrick, of the textbook ‘Developing Monitoring and Evaluation Frameworks’ (Sage 2016). She has extensive experience in the design and implementation of monitoring and evaluation frameworks for a wide range of different initiatives, building the capacity of organisations to plan for monitoring and evaluation.

  3. Expert Panel

    Evaluation Option
    Freedom of expression panel photo by Kieren McCarthy

    Expert panels are used when specialised input and opinion are required for an evaluation. Generally, a number of experts from different fields of expertise are engaged to debate possible courses of action and make recommendations. Panels can be useful at different stages of an evaluation and can take place live, which poses logistical challenges if experts are busy or widely dispersed, or remotely, as in the case of the Delphi Technique.

  4. Cost Utility Analysis

    Evaluation Option

    Cost Utility Analysis (CUA) is useful for evaluating, and comparing, programs that aim to reach the same goal in non-monetary terms. CUA develops an overall measure of utility or value based on the preferences of individuals. A well-known application of cost utility analysis is in the health sector, with the use of Quality Adjusted Life Years (QALYs). The QALY allows each potential program to be measured according to the extent to which it extends life expectancy while also improving the quality of each year lived. Developing this indicator involves determining satisfaction derived from different health states.
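    The QALY calculation described above amounts to simple arithmetic: QALYs gained are additional life-years multiplied by a quality-of-life weight between 0 and 1, and the cost-utility ratio is cost divided by QALYs gained. A minimal sketch of comparing two programs on this basis (the program names and all figures are invented for illustration, not from the source):

    ```python
    # Hypothetical cost-utility comparison using QALYs.
    # All costs, life-years and quality weights below are illustrative.

    def qalys_gained(extra_years, quality_weight):
        """QALYs gained = additional life-years x quality-of-life weight (0-1)."""
        return extra_years * quality_weight

    def cost_per_qaly(cost, qalys):
        """Cost-utility ratio: currency units spent per QALY gained."""
        return cost / qalys

    # Two hypothetical programs aiming at the same health goal
    programs = {
        "A": {"cost": 50_000, "extra_years": 4, "quality_weight": 0.8},
        "B": {"cost": 30_000, "extra_years": 5, "quality_weight": 0.5},
    }

    for name, p in programs.items():
        q = qalys_gained(p["extra_years"], p["quality_weight"])
        print(f"Program {name}: {q:.1f} QALYs, ${cost_per_qaly(p['cost'], q):,.0f} per QALY")
    # Program A: 3.2 QALYs, $15,625 per QALY
    # Program B: 2.5 QALYs, $12,000 per QALY
    ```

    On these invented figures Program B delivers fewer QALYs overall but at a lower cost per QALY, which is exactly the kind of trade-off CUA is designed to surface.
    
    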

  5. Citizen Juries

    Citizen Panels
    Evaluation Option

    Citizen Juries use representatives from the wider community who have no formal alignments or allegiances. They ensure community involvement in the decision-making process by engaging citizens in the discussion of possible approaches or options.

  6. Learning Purposefully in Capacity Development (2008)


    This paper examines how monitoring and evaluation (M&E) does, or could, make a difference to Capacity Development (CD). It explores whether there is something different or unique about M&E of CD that isn’t addressed by predominant options and ways of thinking about M&E, and which might be better addressed by experimenting with learning-based approaches to M&E of CD.

  7. Use measures, indicators or metrics


    As part of an evaluation, it is often important to either develop or use existing indicators or measures of implementation and/or results.

  8. Demonstrating Outcomes and Impact across Different Scales

    Discussion paper

    This research report from the Research for Development Impact Network demonstrates how evidence of outcomes and impact can be better captured, integrated and reported on across different scales of work for Australian NGOs working in international development. The report looks at some of the different methods available for reporting at different scales ‘beyond the project’.

  9. Theory of Change Software

    TOC Software
    Evaluation Option

    There are a number of options when it comes to using software to help create a logic model. These range from generic word processing tools (Word, PowerPoint, or their Google Docs or Mac equivalents) to software that has been specifically tailored for visualising Theories of Change, like TOCO or Miradi. You should consider what resources you have to invest in software, both in terms of cost and in time to learn and use the features. If you only have a short timeframe and simple needs, then a basic tool may suit you better than some of the more complex software available. It's important to investigate a few options and see what is going to be best for you.

  10. Attributing Development Impact: The Qualitative Impact Protocol (QuIP) Case Book


    This freely available, online book brings together case studies that use the Qualitative Impact Protocol (QuIP), an impact evaluation approach without a control group that relies on narrative causal statements elicited directly from intended project beneficiaries. The QuIP has now been used in many countries, and this book draws on case studies from seven (Ethiopia, India, Malawi, Mexico, Tanzania, Uganda and the UK) assessing a range of activities, including food security, rural livelihoods, factory working conditions, medical training, community empowerment and microcredit for house improvement. It includes comprehensive ‘how to’ QuIP guidelines and practical insights, based on these case studies, into how to address the numerous methodological challenges thrown up by impact evaluation.