BetterEvaluation lists events as a service to the evaluation community.
We do not officially endorse these events unless they are noted as a BetterEvaluation event.

UKES Annual Evaluation Conference 2018

The theme of the 2018 United Kingdom Evaluation Society (UKES) Annual Evaluation Conference is: The Quality of Evidence from Evaluation - demand, supply and use. Fundamentally, evaluation should provide credible and useful information, enabling lessons learned to be incorporated into decision-making. This year’s theme focuses on quality throughout the evaluation cycle: framing the theme around the cycle allows us to consider the demand for, production of, and uptake of high-quality evidence from evaluations.

United Kingdom
2nd May 2018 to 3rd May 2018
Event City: London
Event cost: Paid
Event type: Conference

Evaluations draw on often-scarce human and financial resources, so they should be strongly demand-led, conducted to high standards, and produce useful evidence that informs stakeholders’ decision-making. Presenters of papers and posters may wish to consider quality in one particular part of the cycle, or examine quality as a thread running through the entire cycle.

As noted in the keynote address at last year’s conference, some might consider that we are in a ‘golden age of evaluation’, with high demand, associated funding, and growing sophistication about the multiple methods that can assess complex situations. However, many challenges call into question whether evaluations are fit for purpose in a ‘post-truth’ era, or at least an era in which ‘the narrative’ weighs more heavily than data-heavy analysis. Experts are distrusted, policies appear reactive and inconsistent, and communication is condensed into 140-character slogans. Publication of evaluations may be partial, biased towards positive results; or evaluations may be conducted well but use inappropriate designs that do not answer the questions users are interested in. Poor evaluations may thus be published because they have to be, regardless of quality. Alternatively, evaluations may not be published because the results are not what the commissioner ‘wanted’.

Our current era, in which the means of communication have changed so fundamentally, has been likened to the period after Gutenberg developed the printing press: the means of spreading information and influencing thinking have profoundly changed. Evaluations face challenges to remain useful and influential in such circumstances, not least if the main output is a ‘classic’ evaluation report. Evaluators are exploring the new opportunities digital technology offers to communicate learning and evidence from evaluations widely, but is this keeping pace with changes in society? Digital technologies also present opportunities to collect evaluation data and conduct analysis and synthesis in novel ways, and they provide platforms to involve citizens more directly in evaluations.

UKES considers this an extensive but pertinent theme at a time when evaluation is paradoxically burgeoning in some areas and under challenge to demonstrate its utility in others. We encourage evaluators, evaluation commissioners and users of evidence from a broad range of sectors, disciplines, skills and perspectives to contribute to a wide-ranging discussion of the quality of evaluations and the evidence from them by submitting an abstract or poster, or by signing up to attend the conference.


FORMAT
The conference will comprise presentations from keynote speakers, panel discussions and interactive plenary sessions, together with parallel sessions for which participants from across the evaluation community are invited to submit abstracts. There will also be an opportunity for participants to display posters, and there will be a prize for the best poster. The whole event is designed to provide an atmosphere of debate and networking. 
In addition, the organising committee welcomes proposals from individuals or groups who would like to run alternative types of session at the conference. These may include round-table discussions, think-tanks, demonstrations or experiential learning events, sessions that use or demonstrate digital technology in evaluation, or proposals to run whole themed multi-paper sessions (usually two to three presentations on a similar theme).


CALL FOR ABSTRACTS
Contributions are invited on methods, practice and policy from civil society, government, academia and business, both in the UK and overseas. Structured abstracts for papers, symposia, discussion panels, workshops and posters should address one of the following themes:

Theme 1: Improving the demand for high quality evaluations and building trust in evaluation evidence

  • Can evaluations counter the emergence of fake news and ‘alternative facts’? If so, how?
  • Stimulating the demand for high quality evaluations and evidence.
  • Better communication of evaluative evidence using infographics and data visualisation.
  • Making evaluations more influential.

Theme 2: Approaches to conducting high quality evaluations and generating useful evidence

  • The relationship between evaluation methods, rigour, and quality.
  • How do commissioner–evaluator relationships affect quality?
  • ‘Co-creation’: commissioner–evaluator co-working and joint analysis.
  • The state of the art in high-quality quantitative, qualitative and mixed-method evaluations.
  • Making use of digital technology for evaluation data collection and analysis.
  • Increasing the involvement of citizens in evaluations through the use of digital technology, including citizen science.

Theme 3: What is evaluation quality and how can it be assessed?

  • Approaches to evaluation rigour.
  • ‘Right rigour’ – what is a proportionate level of quality for a given evaluation question?
  • Trade-offs in evaluation quality, e.g. can timely but quick-and-dirty evaluations be more influential than slower, more traditionally rigorous ones?
  • Is evaluation quality methods-driven?
  • Frameworks for strength of evidence - do current frameworks adequately deal with quality and strength of evidence in non-experimental and mixed method evaluations?

Theme 4: Exploring the links between quality dimensions and evaluation use

  • Which aspects of evaluation quality have most effect on use and uptake, and how can this be improved?
  • How to strike the right balance between evaluation quality and uptake of the evidence generated?
  • When (if ever) is it right to ‘dare’ to have a low quality evaluation?
  • Are our evaluation products fit for purpose? Is an evaluation report or a peer-reviewed publication enough?
  • Going beyond the report – what other types of output can be produced to improve evaluation use and uptake of evidence?

Theme 5: Pursuing quality throughout the evaluation cycle

  • What are commissioners doing to procure better evaluations?
  • How do we make sure evaluation commissioners are asking the right questions? Do we know what the right questions are?
  • What is the state of the art in quality assurance of evaluations?
  • How is information technology being used to improve evaluations?
  • How is evaluation improved through citizen science and the democratisation of evaluation?
  • What does good evaluation use and evidence uptake look like, and how is it best achieved?