ML Reproducibility Challenge 2020
Welcome to the ML Reproducibility Challenge 2020! This is the fourth edition of this
event (see V1), and we are excited to announce that this year we are broadening our
coverage of conferences and papers to several new top venues.
The primary goal of this event is to encourage the publishing and sharing of scientific results that
are reliable and reproducible. In support of this, the objective of this challenge is to investigate
reproducibility of papers accepted for publication at top conferences by inviting members of the
community at large to select a paper, and verify the empirical results and claims in the paper by
reproducing the computational experiments, either via a new implementation or using code/data or
other information provided by the authors.
All submitted reports will be peer-reviewed via OpenReview and shown next to the original
papers on Papers with Code. Every year, a small number of these reports, chosen for their
clarity, thoroughness, correctness, and insight, are selected for publication in a special
edition of the journal ReScience.
- NeurIPS 2020 accepted papers are being added to OpenReview on a rolling basis.
- Comments are now enabled on all listed papers. Authors of listed papers can now subscribe to receive notifications about claims and comments on their papers!
Invitation to participate
The challenge is a great event for community members to participate in shaping scientific practices
and findings in our field. We particularly encourage participation from:
- Course instructors of advanced ML, NLP, and CV courses, who can use this challenge as a
course assignment or project. Submit your course
information here and we will feature your course below!
- Organizers of hackathons
- Members of ML developer communities
- ML enthusiasts everywhere!
- Machine Learning CS-433, EPFL, Switzerland
- Human in the loop machine learning, Epita, France
- Deep Learning, Brown University, USA
- Applied Machine Learning COMP 551, McGill University, Canada
- Fairness, Accountability, Confidentiality and Transparency in AI, University of Amsterdam, Netherlands
Announcement of the challenge : September 4th, 2020
Challenge goes LIVE : September 23rd, 2020 (* EMNLP 2020 papers are coming soon!)
Early submission deadline (encouraged, before NeurIPS) : December 4th, 2020
Late submission deadline (to be considered for peer review) : January 29th, 2021
Author notification deadline for journal special issue : March 30th, 2021
How to participate
Organizers
Koustuv Sinha (McGill University / FAIR)
Joelle Pineau (McGill University / FAIR)
Jessica Forde (Brown University)
Jesse Dodge (Carnegie Mellon University)
Robert Stojnic (Papers with Code / FAIR)
Parag Pachpute, Melisa Bok, Celeste Martinez Gomez, Mohit Uniyal, Andrew McCallum (OpenReview / University of Massachusetts Amherst)
Nicolas Rougier, Konrad Hinsen (ReScience)