This practical guide from the UK Evaluation Society explores how evaluators can use AI tools responsibly and transparently across all stages of the evaluation process.
Key features of the resource
This guidance offers a practical, principle-based framework for the responsible and ethical integration of artificial intelligence (AI) into evaluation practice. Co-produced by the UK Evaluation Society’s AI Working Group, it outlines four core principles—Transparency and Competence, Human Control, Risk Management, and Quality Assurance—alongside detailed implementation advice and examples of disclosure and documentation.
Designed as voluntary guidance, it can be adapted for a wide range of organisational contexts, from NGOs to government departments and independent consultants. It supports evaluators in using AI tools without compromising methodological rigour or stakeholder trust.
How can the resource be used?
This guidance is especially useful for evaluation practitioners looking to responsibly experiment with or scale up AI usage. It supports ethical decision-making, helps organisations comply with data and professional standards, and promotes consistent disclosure and accountability practices. It is also a valuable reference for commissioners and procurement teams considering how to embed AI standards into contracts and partnerships.
Why would you recommend the resource to other people?
This is a clear, ethical, and practical guide grounded in professional values that fills a real gap in the evaluation field. It aims to help evaluators use AI tools safely, responsibly, and credibly, while still encouraging innovation and learning.
Sources
UK Evaluation Society (2025). AI in evaluation: Good practice guidelines for practitioners. UK Evaluation Society website. Retrieved from: https://evaluation.org.uk/new-ai-guidance-launched-for-evaluators/