Search results

  1. Control Group

    Evaluation Option

    A control group is an untreated research sample against which all other groups or samples in the research are compared. A control group is constructed to produce an estimate of the counterfactual, that is, what would have happened if the intervention had not been implemented. A control group is created by randomly assigning people to either the control group or to one or more "treatment" groups.
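    The random assignment step described above can be sketched in a few lines. This is a minimal illustrative helper (the function name and 50/50 split are assumptions, not part of any standard tool): shuffling the participant list with a seeded random generator and splitting it in half gives each person an equal chance of landing in either group, which is what makes the control group a credible estimate of the counterfactual.

    ```python
    import random

    def assign_groups(participants, seed=0):
        """Randomly split participants into a control and a treatment group.

        Hypothetical helper for illustration: a seeded shuffle followed by
        an even split, so assignment is random but reproducible.
        """
        rng = random.Random(seed)
        shuffled = list(participants)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        return {"control": shuffled[:half], "treatment": shuffled[half:]}

    # Every participant lands in exactly one of the two groups.
    groups = assign_groups(range(10))
    ```

    In practice, evaluators often stratify the randomisation (e.g. by site or gender) rather than using a simple shuffle, but the principle is the same.
    
    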

  2. Regression Discontinuity

    RD Design, Regression Discontinuity Design
    Evaluation Option

    Regression Discontinuity Design (RDD) is a quasi-experimental evaluation option that measures the impact of an intervention, or treatment, by applying a treatment assignment mechanism based on a continuous eligibility index, i.e. a variable with a continuous distribution. Units on one side of a cutoff value of the index receive the treatment, while those on the other side do not; comparing outcomes just above and just below the cutoff estimates the treatment effect.
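    The comparison at the cutoff can be illustrated with a simple local-means sketch (assumed function and parameter names; real RDD analyses typically fit local linear regressions on each side of the cutoff rather than raw means):

    ```python
    def rdd_effect(records, cutoff, bandwidth):
        """Naive regression discontinuity estimate.

        records: (eligibility_index, outcome) pairs, where units with
        index >= cutoff received the treatment. Compares mean outcomes
        in a narrow window on either side of the cutoff.
        """
        below = [y for x, y in records if cutoff - bandwidth <= x < cutoff]
        above = [y for x, y in records if cutoff <= x <= cutoff + bandwidth]
        if not below or not above:
            raise ValueError("no observations inside the bandwidth")
        return sum(above) / len(above) - sum(below) / len(below)

    # Synthetic data: outcome rises gently with the index, plus a
    # jump of 5 at the cutoff for treated units.
    data = [(x, 0.1 * x + (5 if x >= 50 else 0)) for x in range(100)]
    effect = rdd_effect(data, cutoff=50, bandwidth=10)
    ```

    Note the simple difference-in-means is slightly biased by the underlying slope; narrowing the bandwidth (or fitting local regressions) reduces that bias at the cost of using fewer observations.
    
    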

  3. Check Results Match Expert Predictions

    Evaluation Option

    Expert predictions can be a useful part of developing the program theory. Program staff can draw expert predictions from the literature or by engaging a group of experts.

  4. Check Results Match a Statistical Model

    Evaluation Option

    Program staff may develop a statistical model as part of the program theory design. Statistical models can be useful tools for predicting elements of the program:

    • Cost
    • Time
    • Comparison between groups
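
    As a concrete illustration of the first two bullets, a simple model such as ordinary least squares can be fitted to past data and used to predict cost or time for a planned program. The scenario below (sites vs. cost, and all names) is hypothetical; it is a sketch of the technique, not a prescribed method.

    ```python
    def fit_line(xs, ys):
        """Ordinary least squares for y = a + b * x (illustrative model)."""
        n = len(xs)
        mean_x = sum(xs) / n
        mean_y = sum(ys) / n
        b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
        a = mean_y - b * mean_x
        return a, b

    # Hypothetical past data: program cost (in $1000s) by number of sites.
    sites = [1, 2, 3, 4]
    costs = [10, 20, 30, 40]
    a, b = fit_line(sites, costs)
    predicted_cost = a + b * 5  # predicted cost for a 5-site program: 50.0
    ```

    Comparing such model-based predictions with the results actually observed is one way to check whether the program is performing as the theory expects.
    
    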

  5. Tiny Tool Results Chain

    Evaluation Option

    A tiny tool results chain maps both the positive and negative possible impacts of an intervention.

  6. Results Chain

    Pipeline model, Logic Model, Input-Output model
    Evaluation Option

    "Results chain or pipeline logic models represent a program theory as a linear process with inputs and activities at the front and long-term outcomes at the end. Various models have been developed for different purposes each with relative advantages and disadvantages." (Funnell, S. and Rogers, P., 2011, p. 387)

  7. Data Dashboard

    Evaluation Option
    Organizational Dashboard

    Stephen Few defines a dashboard as follows: "A data dashboard is a visual display of the most important information needed to achieve one or more objectives, with the data consolidated and arranged on a single screen so the information can be monitored at a glance" (Few, 2004).

    Data dashboards typically include several visualisations, such as graphs or other visual representations of data, along with minimal text describing the indicators displayed. Presenting these visualisations on a single screen lets the user directly compare and draw conclusions from the data 'at a glance', which is not possible when the data is split across several screens or requires scrolling to view.

  8. Mobile Data Collection

    MDC, Mobile Phone Logging, Mobile Technology

    Mobile Data Collection (MDC) is the use of mobile phones, tablets or PDAs for programming or data collection. MDC can be very useful to an evaluator who is collecting quantitative data or abstracting data for an evaluation.

  9. PowerPoint

    Slide Show, Slides
    Evaluation Option

    Structuring presentations as a series of PowerPoint slides is now the most common way of presenting information to groups.

  10. Logframe

    Logical Framework, Logical Framework Approach (LFA), log frame
    Evaluation Option

    'Logical Framework', or 'logframe', describes both a general approach to project or programme planning, monitoring and evaluation, and, in the form of a 'logframe matrix', a discrete planning and monitoring tool for projects and programmes. Logframe matrices are developed during the project/programme design and appraisal stages, and are subsequently updated throughout implementation while remaining an essential resource for ex-post evaluation.