"This page is no longer maintained. Please see http://www.reproduciblescience.org"
== News ==


* We gave a tutorial on reproducibility at SIGMOD 2012: [http://vgc.poly.edu/~juliana/talks/reproducibility-tutorial2012.pdf Computational reproducibility: state-of-the-art, challenges, and database research opportunities]
 
* The DataBase experiments Repository (DBXR) is now up and running, thanks to [http://www.itu.dk/people/phbo Philippe Bonnet]. This site serves as a repository for experiments related to database research. Currently, it supports the submission and review of results published at PVLDB and ACM SIGMOD. For more details, see http://www.dbxr.org.
 
* VisTrails 2.0 has been released and is available for download from http://www.vistrails.org/index.php/Downloads. VisTrails now includes support for creating reproducible papers.
 
* The discipline involved in the publication of reproducible results can actually lead to higher quality science: http://www.nature.com/nature/journal/v483/n7391/full/483531a.html
 
* There has been increased concern about reproducibility in clinical and preclinical research:
** http://www.nature.com/nature/journal/v483/n7391/full/483531a.html
** http://www.readthehook.com/103149/junk-science-most-preclinical-cancer-studies-dont-replicate
** http://pipeline.corante.com/archives/2012/03/29/sloppy_science.php
 
* Recently, there have been a number of articles in the NYTimes that underscore the importance of reproducible results. See:
** [http://www.nytimes.com/2011/07/08/health/research/08genes.html How Bright Promise in Cancer Testing Fell Apart]
** [http://www.nytimes.com/2010/09/24/science/24retraction.html?_r=1&emc=eta1 Nobel Laureate Retracts Two Papers Unrelated to Her Prize]
** [http://www.nytimes.com/2011/06/26/opinion/sunday/26ideas.html?_r=2 It's Science, but Not Necessarily Right]
 
* There is an interesting new site that tracks paper retractions; check it out:
** http://retractionwatch.wordpress.com/
 
* Juliana Freire gave a talk about how to create executable papers at the AMP Workshop on Reproducible Research. Vancouver, Canada. July 14, 2011.
 
* The [http://www.vistrails.org/index.php/ExecutablePapers VisTrails methodology and infrastructure for creating provenance-rich, executable publications] has been selected as a finalist of the [http://www.executablepapers.com Executable Paper Grand Challenge]. We presented this work at the [http://www.iccs-meeting.org/iccs2011/index.html ICCS Meeting], in Singapore.
 
* [[More News]]


== Project Description ==  
'''Principal Investigators:''' [http://www.cs.utah.edu/~juliana Juliana Freire] and [http://www.cs.nyu.edu/shasha/ Dennis Shasha]


A hallmark of the scientific method has been that experiments should be described in enough detail that they can be repeated and perhaps generalized.  This implies the ability to redo experiments in nominally equal settings and also to test the generalizability of a claimed conclusion by trying similar experiments in different settings. In principle, this should be easier for computational experiments than for natural science experiments, because not only can computational processes be automated but also computational systems do not suffer from the 'biological variation' that plagues the life sciences.  Unfortunately, the state of the art falls far short of this goal.  Most computational experiments are specified only informally in papers, where experimental results are briefly described in figure captions; the code that produced the results is seldom available; and configuration parameters change results in unforeseen ways. Because important scientific discoveries are often the result of sequences of smaller, less significant steps, the ability to publish results that are fully documented and reproducible is necessary for advancing science.  While concern about repeatability and generalizability cuts across virtually all natural, computational, and social science fields, no single field has identified this concern as a target of a research effort.


This project is developing a suite of tools and infrastructure that support the process of sharing, testing, and re-using scientific experiments and results by leveraging and extending the infrastructure provided by provenance-enabled scientific workflow systems. The project explores three key research questions: 1) How can compendia of scientific results be packaged and published so that they are reproducible and generalizable? 2) What are appropriate algorithms and interfaces for exploring, comparing, and re-using results, or for discovering better approaches to a given problem? 3) How can we aid reviewers in generating the experiments that are most informative given a time/resource limit?


An expected result of this work is a software infrastructure that allows authors to create workflows that encode the computational processes that derive their results (including the data used, the configuration parameters set, and the underlying software), and to publish and connect these workflows to the publications where the results are reported. Testers (or reviewers) can repeat and validate results, ask questions anonymously, and modify experimental conditions. Researchers who want to build upon previous work can search, reproduce, compare, and analyze experiments and results. The infrastructure helps scientists in any discipline to construct, publish, and share reproducible results.


== What scientists are doing about reproducibility ==
* The journal Biostatistics has an associate editor for reproducibility who can assign grades of merit to conditionally accepted papers: D if the data are available, C if the code is available, and R if the associate editor could run the code and reproduce the results without much effort. (http://magazine.amstat.org/blog/2011/01/01/scipolicyjan11)


* Professor Matthias Troyer (ETH Zurich) and his collaborators have published a number of papers whose results are fully reproducible. He is using VisTrails both to carry out the experiments and to package them for publication. Here are some of these reproducible papers:
** [http://iopscience.iop.org/1742-5468/2011/05/P05001 The ALPS project release 2.0: open source software for strongly correlated systems] (by Bela Bauer et al.); also available at http://arxiv.org/pdf/1101.2646.pdf
** [http://arxiv.org/abs/1106.3267 Galois Conjugates of Topological Phases] (by Michael H. Freedman, Jan Gukelberger, Matthew B. Hastings, Simon Trebst, Matthias Troyer, Zhenghan Wang)


== Infrastructure to Create Provenance-Rich Papers ==


The first prototype of our infrastructure is described at http://www.vistrails.org/index.php/ExecutablePapers. We have also written a paper that appeared in the Proceedings of the International Conference on Computational Science (ICCS), 2011: http://www.cs.utah.edu/~juliana/pub/vistrails-executable-paper.pdf
 
To see our infrastructure in action, check out the '''videos''' and '''tutorial''' below.
 
VisTrails 2.0 supports the inclusion of reproducible results in LaTeX/PDF documents. We provide a LaTeX package that lets users add links to their results in the LaTeX source. For example:
 
<code>
\usepackage{vistrails}
 
...
 
\begin{figure}
 
\begin{center}


\subfigure[a=0.9]{\vistrail[filename=alps.vt, version=2, pdf]{width=8cm}}


\subfigure[a=0.9]{\vistrail[filename=alps.vt, version=11, pdf,buildalways]{width=8cm}}
 
\caption{A figure produced by an ALPS VisTrails workflow. Clicking the figure retrieves the workflow used to
create it. Opening that workflow on a machine with VisTrails and ALPS installed lets the reader execute the full calculation.}
 
\end{center}
 
\end{figure}
</code>
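
For reference, here is a minimal sketch of a complete document built around the fragment above. The <code>graphicx</code> and <code>subfigure</code> packages are assumptions on our part (any package that provides <code>\subfigure</code> will do), and the sketch presumes that the <code>vistrails</code> LaTeX package described on this page is installed where LaTeX can find it, with VisTrails available to generate the linked figures:

<code>
\documentclass{article}
\usepackage{graphicx}   % standard graphics support (assumed)
\usepackage{subfigure}  % provides \subfigure, as used in the example above (assumed)
\usepackage{vistrails}  % the VisTrails LaTeX package described on this page

\begin{document}

\begin{figure}
\begin{center}
% The \vistrail command embeds the output of the workflow stored in alps.vt
% (version 2) and links the figure in the PDF back to that workflow.
\subfigure[a=0.9]{\vistrail[filename=alps.vt, version=2, pdf]{width=8cm}}
\caption{A result produced by an ALPS VisTrails workflow.}
\end{center}
\end{figure}

\end{document}
</code>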
 
Once the LaTeX document is compiled, the figure in the PDF becomes active, and when clicked, it will invoke VisTrails and reproduce the result.
You can also upload your results to [http://www.crowdlabs.org CrowdLabs] and export them to Web sites or Wikis, where users can interact with them through a Web browser. See e.g., http://www.crowdlabs.org/vistrails/medleys/details/26/
 
 
=== Video: Editing an executable paper written using LaTeX and VisTrails ===
* {{qt|link=http://www.vistrails.org/images/executable_paper_latex.mov|text=View}}  
* [http://www.vistrails.org/download/download.php?type=MEDIA&id=executable_paper_latex.mov Download]


=== Video: Exploring a Web-hosted paper using server-based computation ===
* {{qt|link=http://www.vistrails.org/images/executable_paper_server.mov|text=View}}
* [http://www.vistrails.org/download/download.php?type=MEDIA&id=executable_paper_server.mov Download]
=== Tutorial: Editing an executable paper using VisTrails and LaTeX extensions ===
* http://www.vistrails.org/index.php/ExecutableLatexTutorial


== SIGMOD Repeatability Effort ==
As part of this project, in collaboration with Philippe Bonnet, we are using (and extending) our infrastructure to support the SIGMOD Repeatability effort.

Below are some case studies that illustrate how authors can create provenance-rich and reproducible papers, and how reviewers can both reproduce the experiments and perform workability tests:
* Packaging an experiment on querying Wikipedia: [[WikiQuery]]


== Publications ==


* [http://vgc.poly.edu/~juliana/pub/freire-sigmod2012.pdf  Computational reproducibility: state-of-the-art, challenges, and database research opportunities], by Juliana Freire, Philippe Bonnet, Dennis Shasha. In SIGMOD Conference, 593-596, 2012.


* [http://www.sigmod.org/publications/sigmod-record/1106/pdfs/08.report.bonnet.pdf Repeatability and Workability Evaluation of SIGMOD 2011], by Philippe Bonnet, Stefan Manegold, Matias Bjorling, Wei Cao, Javier Gonzales, Joel Granados, Nancy Hall, Stratos Idreos, Milena Ivanova, Ryan Johnson, David Koop, Tim Kraska, René Müller, Dan Olteanu, Paolo Papotti, Christine Reilly, Dimitris Tsirogiannis, Cong Yu, Juliana Freire and Dennis Shasha. In SIGMOD Record, vol. 40, no. 2, pp. 45-48, 2011.


* [http://vgc.poly.edu/~juliana/pub/freire-vldb2011.pdf Exploring the Coming Repositories of Reproducible Experiments: Challenges and Opportunities], by Juliana Freire, Philippe Bonnet and Dennis Shasha. In PVLDB, vol. 4, no. 12, pp. 1494-1497, 2011.
 
* [http://vgc.poly.edu/~juliana/pub/vistrails-executable-paper.pdf A Provenance-Based Infrastructure for Creating Executable Papers], by David Koop, Emanuele Santos, Phillip Mates, Huy Vo, Philippe Bonnet, Matthias Troyer, Dean Williams, Joel Tohline, Juliana Freire and Claudio Silva. In Proceedings of ICCS, 2011.
 
== Presentations ==
 
* [http://vgc.poly.edu/~juliana/talks/reproducibility-tutorial2012.pdf Computational reproducibility: state-of-the-art, challenges, and database research opportunities]. Juliana Freire, Philippe Bonnet and Dennis Shasha. Tutorial presented at ACM SIGMOD 2012.
 
* [http://vgc.poly.edu/~juliana/talks/2011-amp-reproducible-research.pdf Provenance-Rich Science.] Juliana Freire. UFRJ, Brazil. August 1, 2011.
 
* [http://vgc.poly.edu/~juliana/talks/2011-ufrj-reproducible-research.pdf A Provenance-Based Infrastructure for Creating Reproducible Papers.] Juliana Freire. AMP 2011 Workshop on Reproducible Research. Vancouver, Canada. July 14, 2011.
 
* [http://vgc.poly.edu/~juliana/talks/2011-forth-provenance.pdf  Provenance-Rich Science.] Juliana Freire. FORTH. Crete, Greece.  June 22, 2011.


* Publishing Reproducible Results with VisTrails, by Juliana Freire and Claudio Silva. Presentation at the [http://meetings.siam.org/sess/dsp_programsess.cfm?SESSIONCODE=11845 SIAM Workshop on Verifiable, Reproducible Research and Computational Science], Reno, March 4, 2011.
* [http://vgc.poly.edu/~juliana/talks/2011-beyondthepdf-executablepaper.pdf Towards an Infrastructure to Create Provenance-Rich Papers], by Juliana Freire. Presentation at the [https://sites.google.com/site/beyondthepdf Beyond The PDF Workshop], San Diego, January 19-21, 2011.
== People ==
Several people have contributed to this project, including:
* [http://www.diku.dk/hjemmesider/ansatte/bonnet/ Philippe Bonnet]
* [http://www.cs.utah.edu/~juliana/ Juliana Freire]
* [http://www.cs.utah.edu/~dakoop/ David Koop]
* [http://emanuelesantos.net/Emanuele_Santos/Home.html Emanuele Santos]
* [http://cs.nyu.edu/shasha Dennis Shasha]
* [http://www.cs.utah.edu/~csilva/ Claudio Silva]
* [http://www.phys.lsu.edu/~tohline Joel Tohline]
* [http://www.itp.phys.ethz.ch/people/troyer Matthias Troyer]
* [http://www.sci.utah.edu/~hvo/homepage/index.html Huy Vo]
* The [http://www.vistrails.org VisTrails] team


== Funding ==


This project is sponsored by National Science Foundation awards [http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1139832 IIS#1139832], [http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1050422 IIS#1050422], [http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=1050388 IIS#1050388], [http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0905385 IIS#0905385], and [http://www.nsf.gov/awardsearch/showAward.do?AwardNumber=0751152 CNS#0751152].
