Analyse each Key Evaluation Question

Broad key questions for R,M&E often embed several smaller types of questions. The main types are:

Descriptive questions – asking about the context/situation and what has happened. Answer by: sampling; using measures, indicators or metrics; collecting and/or retrieving data (methods); managing data; analysing data; visualising data.

Causal questions – asking about what has contributed to the changes that have been observed. Answer by one, or a combination, of the options for Investigating Causal Attribution and Contribution.

Evaluative questions – asking whether the program is a success or the best option. Answer by synthesising data from a single study or evaluation. See also Determine what 'success' looks like, part of FRAME.

Action questions – asking what should be done based on the findings. Answer by developing recommendations.


Read more about these four types of questions on BetterEvaluation.

The ways of answering your KEQs will depend on what type of question you are asking.

Example: deconstructing a question

The table below deconstructs the key questions listed in a Terms of Reference for a C4D assessment into smaller descriptive, evaluative, causal or action questions.

Key question | Smaller, embedded questions (descriptive, evaluative, causal or action)
1. What has been the visibility of the campaign, and the level of engagement of the general public, in the UNICEF-led social media portals such as Facebook, the UNICEF Viet Nam and UN websites, the YouTube channel, etc.?

1.1 What kind of content was posted on social media? (descriptive)

1.2 What kind of engagement was there on the social media portals? (descriptive)

1.3 How rich was the engagement? (evaluative)


2. How effective has the outreach of the campaign's interventions in the community been, with a focus on how specific target groups of participants interpreted or made sense of media messages (with reference to teachers, parents, caregivers and children; local authorities at provincial, district and commune levels; and community-based networks (Women's Union and Youth's Union))?


2.1 How did specific groups interpret and make sense of the messages? (descriptive)


2.2 To what extent did they make sense of the messages in the ways intended? (evaluative)


3. To what extent has the campaign reportedly contributed to raising knowledge and influencing positive attitude toward ending VAC among target groups of participants across the evaluated channels of communication? 

3.1 What changes in knowledge and attitudes have occurred, and for whom? (descriptive)

3.2 What has contributed to these changes? (causal) 



4. What worked well and what are areas for improvement in relation to the main messages of the campaign: violence against children is not justifiable, violence against children is preventable, speak out to end violence against children and violence against children is everyone's business?


4.1 What has worked (and not worked) about the messages, for whom, and in what circumstances? (evaluative) 


4.2 How can we improve? (action)


5. What factors (e.g. socio-cultural, ethical, moral, economic) impeded or enhanced key attitudinal and behavioural interventions?

5.1 What were the bottlenecks, and for whom? (causal)

6. What are the lessons learnt from the project, and what are the recommendations for the next phase's interventions, with a focus on community-based engagement for action?


6.1 What should we keep doing, what should we stop doing, what should we do better, and what should we start doing? (action)


6.2 How can we improve the design and implementation? (action) 


6.3 What is the best way to design a community-based engagement program? (evaluative)


