AIRLEAP 2017 conference session on replication


The Association for Integrity and Responsible Leadership in Economics and Associated Professions (AIRLEAP) will hold the Economics Conference “An Urgency for Evidence and Transparency in Economic Analysis and Policy” in St. Louis, USA, from Friday, October 13 to Saturday, October 14, 2017. It will include the following session on replication in empirical economics:

"Replication diagnostic"
Abstract

This paper provides researchers with an objective list of checks to consider when planning a replication study aimed at validating findings that inform policy. Such replication studies should begin with a pure replication of the published results and then reanalyse the original data to address the original research question. We present tips for replication exercises in four categories: validity of assumptions, data transformations, estimation methods, and heterogeneous impacts. For each category we offer an introduction, a checklist of tips, examples of how these checks have been employed, and a set of resources that provide statistical and econometric details.
"Sharing replication material and its impact on citations"
Abstract

Only a small minority of economics journals have a mandatory and enforced policy of publishing data and code for replication along with empirical studies. These journals are cited more often than others, and the difference remains significant when controlling for time and journal effects. Individual reviewers could take the initiative by asking for replicable empirical results. The American Journal of Political Science sets an example by having all empirical studies externally checked for replicability prior to publication. The ReplicationWiki provides information on which studies have been replicated, for which ones replication material is available, and what kind of data was used.
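
A minimal sketch of the kind of panel regression this abstract alludes to: citation counts regressed on a data-policy indicator with journal and year fixed effects. The input file and all column names (citations, policy, journal, year) are hypothetical placeholders, not the paper's actual data or specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per article, with its journal, publication year,
# citation count, and an indicator for an enforced data-and-code policy.
df = pd.read_csv("citations_panel.csv")  # assumed columns: citations, policy, journal, year

# Journal and year fixed effects absorb level differences across journals and
# citation inflation over time; `policy` captures the remaining association.
model = smf.ols("citations ~ policy + C(journal) + C(year)", data=df).fit()
print(model.summary().tables[1])
```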

"Push Button Replication: An exercise in futility or lesson learning"
Abstract

Our paper presents the preliminary results of the International Initiative for Impact Evaluation’s (3ie) “push button replication” (PBR) project. The PBR project has two objectives. The first is to establish procedures and standards for push button replication so that original authors and replication researchers can better align their expectations and actions around this third-party verification process. The second is to test whether development impact evaluations, that is, studies using experimental and quasi-experimental methods to evaluate the effectiveness of development programs in low- and middle-income countries, are generally push button verifiable. Development impact evaluations cross multiple disciplines but use similar data collection and estimation methods and, more pertinent to the question of verification, they often directly influence policy and programming, both in the context of the evaluated program and in other settings and countries.

PBR research attempts to confirm the validity of published results using both the original data and the programming code from a study. The premise behind a PBR study is that a third-party researcher should not need to make any significant adjustments, write new code, or conduct additional analysis in order to arrive at the published results. A PBR is thus a step before pure replication: while a pure replication asks whether we can reproduce the published results using the original data and the methods described in the original study, a PBR asks whether we can reproduce the published results by running the original authors’ programming code on the original data. Pure replication studies can uncover errors where the programming code incorrectly implements the estimation methods described; PBR studies reveal cases where data and code are missing or insufficient, or where the programs do not run.
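
As a loose illustration of what “push button” means in practice, the following sketch shows a minimal third-party verification script: run the authors’ code unmodified and compare its output with the published estimates. The file names, expected values, and tolerance are all hypothetical; the actual PBR protocol and its rating categories are defined by 3ie, not here.

```python
import csv
import subprocess

# Hypothetical example: the replication package is assumed to contain a master
# script that writes its headline estimates to results.csv.
EXPECTED = {"treatment_effect": 0.127, "std_error": 0.041}  # published values (made up)
TOLERANCE = 1e-3  # allow small numerical differences across software versions

def push_button_replicate():
    # Step 1: run the original code on the original data, without modification.
    run = subprocess.run(["python", "master_script.py"], capture_output=True, text=True)
    if run.returncode != 0:
        return "incomplete: the original code does not run as provided"

    # Step 2: compare the regenerated estimates with the published ones.
    with open("results.csv", newline="") as f:
        produced = {row["statistic"]: float(row["value"]) for row in csv.DictReader(f)}

    for name, published in EXPECTED.items():
        if name not in produced:
            return f"incomplete: statistic '{name}' was not reproduced"
        if abs(produced[name] - published) > TOLERANCE:
            return f"discrepancy in '{name}': got {produced[name]}, published {published}"
    return "comparable: published results reproduced by the original code"

if __name__ == "__main__":
    print(push_button_replicate())
```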

We undertook the 3ie PBR project with two main objectives. The first is to establish clear procedures and standards for this kind of replication exercise, to align expectations and ease communication between replication researchers and original authors. A PBR should be the most straightforward and objective replication exercise. We have developed a PBR protocol that sets out in clear terms what the process should be, what is expected of the replication researchers and the original authors, and how the results of the PBR are rated or classified.

The second objective is to test the push button replicability of development impact evaluations. These studies are central to the mission of 3ie, and they are often highly influential on policy, which should increase the scrutiny to which they are subjected. As our sample we selected all the development impact evaluations published in 2014 in the top ten journals for these studies. We determined the top ten journals as those that published the greatest number of development impact evaluations from 2010 through 2012, as catalogued in 3ie’s comprehensive impact evaluation repository. This selection results in a sample of 122 studies spanning several disciplines, including public health, political science, and economics, among others.
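
To illustrate the sample-selection step, here is a sketch of how one might rank journals by their 2010–2012 output of development impact evaluations and keep the 2014 articles from the top ten. The repository export and its column names are hypothetical stand-ins for 3ie’s actual database.

```python
import pandas as pd

# Hypothetical export from an impact evaluation repository:
# one row per study, with its journal and publication year.
studies = pd.read_csv("impact_evaluations.csv")  # assumed columns: journal, year

# Rank journals by the number of development impact evaluations
# they published from 2010 through 2012.
counts = (studies[studies["year"].between(2010, 2012)]
          .groupby("journal").size())
top_ten = counts.nlargest(10).index

# The sample: all impact evaluations those journals published in 2014.
sample = studies[(studies["year"] == 2014) & (studies["journal"].isin(top_ten))]
print(f"{len(sample)} studies selected")
```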

Comments and questions are welcome on this page's discussion page, before the conference and until one week after it. You can also send us your related work, and a number of papers could then be discussed here online.

This session continues the initiative we started with our workshop on Transparency and Replication in San Francisco in January, for which we have already compiled some online materials, and with the Replication Session at the First Plenary Conference of the Institute for New Economic Thinking Young Scholars Initiative in Budapest in 2016.
