The USAID-funded MEASURE Evaluation project is hosting a series of webinar discussions of the popular MEASURE Evaluation manual, How Do We Know If a Program Made a Difference? A Guide to Statistical Methods for Program Impact Evaluation. Each webinar in the series reviews key topics from a chapter through verbal discussion and graphical presentation.
The fifth and final webinar in the series, “Instrumental Variables,” will take place May 11 at 10 a.m. EDT. This webinar considers impact evaluation when program participation is determined non-randomly by both observed and unobserved factors, frustrating straightforward estimation of impact by simple comparison of average outcomes between samples of participants and non-participants. The instrumental variables approach appeals to the idea that, despite the overall non-randomness of program participation, there may be variables, called instruments, that offer channels of variation in program participation within which participation is effectively random. For instance, in a randomized controlled trial with incomplete adherence to experimental assignment, the original experimental assignment is a kind of instrument. This analogy also highlights one of the complexities of the method: instrumental variables identify a local average treatment effect, that is, program impact among those who comply with the participation implied by their value of the instrument. The webinar will conclude with a discussion of one particularly popular recent application of instrumental variables, the “fuzzy” regression discontinuity design.
Access recordings of past webinars in the series: https://www.measureevaluation.org/resources/webinars