ECIR also strongly encourages the submission of papers that repeat, reproduce, generalize, and analyze prior work. Please refer to the ACM “Artifact Review and Badging” guidelines (https://www.acm.org/publications/policies/artifact-review-badging) for consistent use of the terminology, which is heterogeneous across disciplines.
In particular, we solicit replicability (different team, same experimental setup) and reproducibility (different team, different experimental setup) papers.
Submissions from the authors of the original experiments – i.e., repeatability (same team, same experimental setup) papers – will not be accepted.
Reproducibility is key to establishing research as reliable, referenceable, and extensible for the future. Emphasize your motivation for selecting the paper(s), the process by which you attempted to reproduce the results (whether successful or not), the communication necessary to gather all required information, the difficulties encountered, and the outcome of the process. A successful reproduction of the work is not a requirement, but it is crucial to provide a precise and rigorous evaluation of the process so that lessons can be learned for the future.
Submissions are welcome in any area of Information Retrieval and should be up to 12 pages in length, plus additional pages for references.
We expect authors to provide a link from which reviewers can download all materials required to repeat the reported experiments, including code, data, and clear instructions on how to run them. For this reason, the review process is single-blind, so personal or institutional repositories can be used for the submission.
All reproducibility-track papers will be evaluated against the following criteria (when applicable):
- Was key practical information (algorithms, parameter settings, software libraries, data collections) missing from the original paper?
- Was the original work insufficiently supported from a theoretical point of view?
- Were the original experiments unclear on important points, or did they lack confirmation for some of the original claims?
- Does the submission present new baselines and experiments?
- Does the submission propose new evaluation criteria (new measures, statistical tests, etc.)?
- How important is the reproduction of the experiments to the community?
- How obvious are the conclusions reached?
- Do the reproduced prior works, if validated, advance a topic central to information retrieval (a topic with broad applicability or one focused on a hot research area)?
- What is the impact of the original paper? Is it central or marginal to the community?
- Is the evaluation methodology in line with the research challenges addressed by the reproduced experiment?
- Are the selected baselines representative of the various algorithm types and techniques available?
- Is the parameter/hyperparameter setting properly described?
- Are algorithms and baselines adequately tuned?
- Are the code and datasets used to reproduce the experiments available to the reviewers at the time of review?
- Is the shared material released in a permanent repository for easy access by researchers?
- Are the reproduced experiments well documented, with all the details required for other researchers to reproduce the experiments?
- Are there discrepancies between what is described in the paper and what is available in the shared material?
- Is the shared material complete, with everything needed to replicate the experiments exactly?
Reproducibility track paper submission: October 1, 2020, 11:59 pm (AoE)
Notification: December 1, 2020
Camera-ready copy: January 10, 2021
Main Conference: March 28 – April 1, 2021
Reproducibility Track Chairs:
Maria Maistro, University of Copenhagen, DK
Gianmaria Silvello, Università di Padova, IT