I came into reproducible research in the context of air pollution and health epidemiology, where some of the research can be controversial in that it can affect national environmental policies. As a result, some of my work and the work of my colleagues has been challenged by industry groups. Scott Zeger and Francesca Dominici felt that the best way to deal with these (time-consuming) challenges was to make all of the data and code available so that people could conduct their own analyses on the same data that we had. If the challengers' approach were truly better, they should be able to demonstrate it on our data. My feeling was that this was the right thing to do on paper; however, it did not account for the possibility that some users of the data would not hold themselves to the same standards of scientific conduct that we might. The release of the data, in my view, therefore encouraged the misuse and poor analysis of the data, obfuscating the issues involved. While it is always possible to have an honest debate and to refute a faulty analysis, that debate is often lost once the headline in the New York Times has been printed. In the end, I think we as scientists do have an obligation to disseminate our work in as transparent a way as possible, but this needs to be balanced against the potentially nefarious interests of some parties.

Regarding the questions posed:

Investigators: I think an investigator's responsibility is relatively simple: he/she should be able to reproduce his/her analysis upon request at some point in the future. I think this is a minimum standard. One way to imagine it is to consider a hypothetical audit of a given project in the future: could the investigator reproduce a published result in that setting? Making research reproducible by others is a much more complex problem and depends critically on the resources available to those others.

Journals: I think journals should facilitate and coordinate the publication of computational research. They should not be responsible for somehow validating that work (as they are not responsible for validating other research), but they should make sure that the work adheres to some minimal standard. In the computational arena, one possibility is to employ a technical/computational editor, as is done at Biostatistics, Biometrical Journal, and the Journal of Statistical Software.

Institutions: I think the primary responsibility of academic institutions is to teach students how to conduct reproducible research. We should teach them the skills, tools, and best practices so that they know what is a "reproducible habit" as opposed to a "bad habit". The skills that encourage reproducibility should be considered core skills, much like knowledge of a programming language or of calculus.

Funding Agencies/Regulators: Funding agencies should play a role in developing the infrastructure needed to support reproducible research. In particular, long-lived repositories are needed for hosting data and code so that they remain available to others long after a paper is published. Funding agencies could also encourage or fund the development of tools and software that allow investigators to easily make their work reproducible by others.