Given their complexity and the resources they require, you should only consider an RCT or a quasi-experimental design if you already have some evidence suggesting that your programme is effective. The questions below can help you judge this.
| Status of your programme | Should you consider an experimental or quasi-experimental design? |
| --- | --- |
| Your programme is small-scale, with scope to influence only limited change among small numbers of direct beneficiaries. | No. Keep the resources you invest in evaluation fairly modest, given your programme is small with scope for only limited impact. Consider conducting a simple self-evaluation using a basic baseline and endline survey, plus quantitative and qualitative monitoring. |
| Your programme is new: it has either just been designed or has been adapted for the first time in your context. | No. It is too early to invest in an expensive experimental (RCT) or quasi-experimental design. A simple baseline/endline evaluation with direct programme beneficiaries will suffice. Including an element of process evaluation and qualitative elements is recommended. |
| Your medium-to-large programme has shown some promising results, through quantitative and/or qualitative findings, as measured by a baseline/endline evaluation. | Yes. If you have promising results and sufficient budget, this could be a good opportunity to conduct a robust impact evaluation using experimental (RCT) or quasi-experimental methods. Including an element of process evaluation and qualitative elements is recommended (see following slides). |
| Your programme has been evaluated by an RCT or quasi-experimental design, which found positive evidence of impact. Your programme has now entered a new phase and is being scaled up. | Maybe. You may wish to implement a further RCT to test this new phase of the programme. Alternatively, you could conduct a simple baseline/endline evaluation to check that the programme is still producing positive results. As part of this, a process evaluation could be useful for tracking the quality of implementation in the scaled-up version of the programme. With any design, investing in in-depth qualitative evaluation to explore how the programme produces change could help refine and optimise its design and implementation, and generate further learning. |



