This workshop focuses on designing rubrics that make evaluative reasoning explicit and enable practitioners to draw transparent evaluative conclusions. A rubric is a tool that outlines the criteria and standards used to judge performance: a matrix showing how evaluative determinations are made, which can enhance stakeholder buy-in.
Workshop participants will learn to develop and implement an evaluation rubric that synthesises evidence and values to support defensible and transparent evaluative conclusions. In the workshop, rubrics will be decomposed to show how the tool embodies the logic of evaluation. Participants will explore ways these elements can be designed to fit need and purpose. They will also learn to recognise a well-designed, well-implemented rubric: one that assists in drawing evaluative conclusions and promotes understanding and evaluation use.
Explicating the reasoning used to reach evaluative determinations and conclusions is arguably a core evaluator competency. While evaluators may accept the centrality of explicating evaluative reasoning, practitioners are still grappling with how to intentionally design evaluations that yield transparent and explicit evaluative conclusions promoting understanding and evaluation use. Rubrics are an evaluation-specific methodology that can assist evaluators in the fundamental task of systematically determining merit because, as tools, their core form (characteristics and configuration) and function (natural purpose) embody the very nature and logic of evaluation.
This workshop aligns with competencies in the AES Evaluator’s Professional Learning Competency Framework. The identified domains are:
- Domain 1 – Evaluative attitude and professional practice: by fostering self-awareness through promotion of transparency in the evaluation process.
- Domain 2 – Evaluation theory (Theoretical Foundations, Evaluative Knowledge, Theory, and Reasoning): by furthering knowledge of the logic of evaluation, evaluative actions, and synthesis methodologies.
- Domain 3 – Culture, stakeholders and context: by developing criteria and applying standards in a way that is sensitive to cultural context.
- Domain 4 – Research methods and systematic inquiry: by interpreting evidence through a systematic and transparent synthesis method.
Who should attend
This workshop caters to beginner- and intermediate-level commissioners and coordinators of evaluation. Novice rubric users and those seeking further technical knowledge are welcome.
Workshop outcomes / objectives
It is expected that by the end of the workshop participants will be able to:
- use a rubric to draw evaluative conclusions
- analyse the various elements that comprise a rubric to build a rubric that is defensible and fit to purpose
- demonstrate a basic awareness of reliability issues and of the skills needed to calibrate multiple rubric users and uses.
The workshop approach integrates best practice in adult learning. It is designed to be relevant and practical, using hands-on experiential learning techniques. Participants will work both independently and in small groups.
About the presenter
Krystin Martens is a Lecturer and the Coordinator of Online Learning at the Centre for Program Evaluation, University of Melbourne, Melbourne, Australia.