Thank you for agreeing to serve as a reviewer or area chair for ML4H 2024. This page provides an overview of responsibilities and key dates. If you are participating in the Reviewer Mentorship Program as either a mentor or a mentee, please check this page for details.
The bottom of this page contains more detailed information for reviewers and for area chairs. Please be sure to read those sections as well.
If you have any questions, please reach out to us at info@ml4h.cc and we will respond as soon as possible.
We’ve tried our best to grow a large pool of reviewers, so we expect that each reviewer/AC will be assigned a reasonable number of submissions. Please help us keep this number small by completing your assigned reviews on time — and if you can complete them early, all the better!
ML4H 2024 considers submissions for two distinct tracks: a Proceedings track (8 pages) and a Findings track (4 pages). Each reviewer/AC will be assigned submissions from only one track.
All aspects of the review process are confidential. Do not discuss or distribute the submissions or reviews, and do not use any ideas from the submissions in your own work until they are publicly available.
After accepting the invitation, you will be asked to fill out a “registration form” on OpenReview (for reviewers and for area chairs). Please complete the registration process by TBD, 2024 so that we have the relevant information when assigning submissions to reviewers.
Authors will be asked to label each paper with a set of data modalities and subject areas, and each reviewer/AC is asked to list their data modalities and subject areas of expertise in the registration form. This information will be used to assign reviewers/ACs to the most relevant papers possible. The assignment will also account for the number of submissions we receive for each track as well as past reviewing experience.
Once you receive your assigned submissions (TBD, 2024), please check as soon as possible that none of your assigned papers presents a conflict of interest and that each falls within your areas of expertise. You should not recognize any paper you are reviewing as work done by someone you work closely with. If you notice a conflict, or if you are not confident that you can review a submission, report it immediately to the organizers by emailing info@ml4h.cc.
All submissions should be anonymous and formatted using the ML4H 2024 LaTeX templates (Proceedings and Findings). Proceedings papers should limit their main content to 8 pages, and Findings papers to 4 pages. Note that the page limit includes figures and text but excludes references and appendices; however, reviewers should not feel obligated to read any supplementary material. Your time is precious!
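For reference, here is a minimal sketch of how a submission's structure maps onto these rules. The document class below is a generic placeholder, not the official ML4H class; the actual class name and macros are defined by the official templates linked above.

```latex
% Minimal sketch of a submission skeleton.
% NOTE: \documentclass{article} is a placeholder; use the class
% shipped with the official ML4H 2024 Proceedings/Findings template.
\documentclass{article}
\usepackage{graphicx}

\title{An Anonymized ML4H Submission}
\author{Anonymous Authors} % double-blind: no identifying information

\begin{document}
\maketitle

\section{Introduction}
Main content (text and figures) counts toward the page limit:
8 pages for Proceedings, 4 pages for Findings.

\bibliographystyle{plain}
% References are excluded from the page limit.
% \bibliography{references}

\appendix
\section{Supplementary Material}
Appendices are also excluded from the page limit, and reviewers
are not obligated to read them.

\end{document}
```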
Our reviewing process is double-blind: the authors do not know the reviewers’ identities, and the reviewers do not know the authors’ identities. For a particular submission, the assigned reviewers and area chairs can see each other’s identities. Of course, no process is perfect: reviewers might be able to guess the authors’ identities based on the dataset or the approaches used, or from technical reports posted on the internet. We expect that, as a reviewer, you will not actively attempt to discover the identity of the authors. We also caution you against assuming that you’ve discovered an author’s identity based on a dataset or approach: multiple independent inventions are common, and different groups often share datasets.
In general, we’d rather accept good work than nitpick minor formatting issues (especially because some clinicians may not be familiar with LaTeX). However, major formatting violations (e.g., a 20-page journal-like submission) are grounds for rejection. Before assigning reviews, the organizing committee will conduct an initial check and desk-reject violating submissions. If you notice or suspect a major violation in one of your assigned papers, please report it to the organizers by emailing info@ml4h.cc.
Similar to prior years, we offer two submission tracks at ML4H: a formal Proceedings track as well as a non-archival Findings track. Our intention in providing two tracks is to establish ML4H as a strong venue for publishing excellent work at the intersection of machine learning and health while also providing a forum for insightful discussion of creative and probing ideas. While both tracks require high-quality submissions, the criteria for acceptance are distinct: the Proceedings track is meant for polished work with technical sophistication and clear impact in health, whereas the Findings track is non-archival, so its submissions should be judged on their likelihood to lead to a good discussion at the workshop. Please refer to the writing guidelines document for exemplary submissions in each track.
Works submitted to the Proceedings track can be considered for conversion to the Findings track. If you are reviewing a Proceedings paper and think the work would be strong enough to warrant consideration in the Findings track, make sure to select that option in the review form on OpenReview. Keep in mind that Findings track submissions are non-archival, and the main mode of presentation is a poster during the symposium. The primary goal of the Findings track is to generate productive and interesting discussion amongst conference attendees. Creative, insightful, and/or potentially divisive contributions (even if less fully developed or not performant) make great additions to the Findings track. As a result, a Findings paper need not meet the same level of technical sophistication as a Proceedings paper, as long as it meets the criterion of generating productive and interesting discussion.
There are two complementary goals of the reviewing process. One goal is to provide actionable, constructive, and respectful feedback to the authors that will help them improve the work, both through positive recognition of the paper’s strengths and targeted commentary on opportunities for improvement. The other goal is to help meta-reviewers and organizers make decisions such that the highest-impact work is identified and accepted.
This year, ML4H also features a dedicated consensus-building discussion period. During this period, we expect all reviewers to engage with one another and with their meta-reviewer to reach a consensus. We expect reviewers to consider the authors’ rebuttal, align with other reviewers on commonly identified strengths and weaknesses, and update their reviews in light of these discussions. The goal of this period is not to induce homogeneity, but rather to build consensus and provide a unified recommendation to the authors. If, after this period, you feel that your beliefs about the quality of the work are not reflected by the other reviewers, it is acceptable and encouraged to present these beliefs in your final review, provided you have made a good-faith effort to engage with the other reviewers and have been open to other perspectives or arguments. Meta-reviewers are encouraged to seek consensus in this period, so your meta-reviewer will ask you to justify your stance if you disagree with the majority of reviewers, and failing to do so may result in your review being weighted less heavily in the meta-reviewer’s overall recommendation.
There are numerous external resources that provide helpful advice on writing reviews. To get a sense of what a good review might look like, the 2017 ACL PC chairs blog contains some useful advice. To understand the difference between a negative review and a constructive one, check out the ICML 2023 Reviewer Tutorial or the “Review Content” section of the NeurIPS 2020 reviewer guidelines (for inspiration only; follow the ML4H reviewer guidelines for any specific instructions).