Reviewing Guidelines


Thank you for agreeing to serve as a reviewer or area chair for ML4H 2024. This page provides an overview of responsibilities and key dates. If you are participating in the Reviewer Mentorship Program as either a mentor or a mentee, please check this page for details.

The bottom of this page contains more detailed information for reviewers and for area chairs. Please be sure to read those sections as well.

Questions?

Please reach us at info@ml4h.cc and we will respond as soon as possible.

Timeline

  • Fill out the questionnaire: Sept 1, 2024 (“OpenReview registration”)
  • Submission deadline: Sept 11, 2024 (extended from Sept 6, 2024)
  • Reviews assigned: Sept 12, 2024
  • Reviews due: Oct 4, 2024
  • Reviews available to authors: Oct 11, 2024
  • Author response period ends: Oct 18, 2024
  • Meta-reviewing period ends: Oct 25, 2024
  • Meta-reviews due: Oct 26, 2024
  • Decision released: Nov 1, 2024

Expected Workload

We’ve tried our best to grow a large pool of reviewers, so we expect that each reviewer/AC will be assigned a reasonable number of submissions. Please help us keep this number small by completing your assigned reviews on time — and if you can complete them early, all the better!

ML4H 2024 considers submissions for two distinct tracks: Proceedings track (8 pages) or Findings track (4 pages). Each reviewer/AC will be assigned submissions from only one track.

  • We expect each reviewer to be assigned papers from either the Proceedings track or the Findings track (up to 3 Proceedings submissions, or up to 4 Findings submissions).
  • We expect each AC to be assigned 6-10 submissions from either the Proceedings track or Findings track.

Confidentiality

All aspects of the review process are confidential. Do not discuss or distribute the submissions or reviews, and do not use any ideas from the submissions in your own work, until they are publicly available.

Questionnaire

After accepting the invitation, you will be asked to fill out a “registration form” on OpenReview (for reviewers and for area chairs). Please complete the registration process by Sept 1, 2024 so that we have the relevant information when assigning submissions to reviewers.

Reviewer Assignment Process

Authors will be asked to label each paper with a set of data modalities and subject areas, and each reviewer/AC has been asked to list their data modalities and subject areas of expertise in the registration form. This information will be used to assign reviewers/ACs to the most relevant papers possible. The assignment will also account for the number of submissions we receive for each track as well as past reviewing experience.

Expertise Alignment & Conflicts

Once you receive your assigned submissions (Sept 12, 2024), please check ASAP that your assigned papers do not contain any conflicts of interest and are within your areas of expertise. You should not recognize any paper you are reviewing as work done by someone you work closely with. If you notice a conflict, or if you are not confident that you can review the submission, report it immediately to the organizers by emailing info@ml4h.cc.

Format & Anonymity Violations

All submissions should be anonymous and formatted using the ML4H 2024 LaTeX templates (Proceedings and Findings). Proceedings papers should have their main content limited to 8 pages, and Findings papers should be limited to 4 pages. Note that the page limit includes figures and text but excludes references and appendices; however, reviewers should not feel obligated to read any supplementary material. Your time is precious!

Our reviewing process is double-blind. The authors do not know the reviewers’ identities, and the reviewers do not know the authors’ identities. For a particular submission, the assigned reviewers and area chairs can see each other’s identities. Of course, no process is perfect: reviewers might be able to guess the authors’ identities based on the dataset or the approaches used, or from technical reports posted on the internet. We expect that, as a reviewer, you will not actively attempt to discover the identity of the authors. We also caution you against assuming that you’ve discovered an author’s identity based on a dataset or approach: multiple independent inventions are common, and different groups often share datasets.

In general, we’d rather accept good work than nitpick minor formatting issues (especially because some clinicians may not be familiar with LaTeX). However, major formatting violations (e.g. a 20-page journal-like submission) would be grounds for rejection. Before assigning reviews, the organizing committee will conduct an initial check and perform desk rejection on violating cases. If you notice or suspect a major violation in your assigned papers, please report it to the organizers by emailing info@ml4h.cc.

What counts as a good ML4H Proceedings/Findings Paper?

Similar to prior years, we offer two submission tracks at ML4H: a formal Proceedings track as well as a non-archival Findings track. Our intention in providing two tracks is to establish ML4H as a strong venue for publishing excellent work at the intersection of machine learning and health, while also providing a forum for insightful discussion of creative and probing ideas. While both tracks require high-quality submissions, the criteria for acceptance are distinct: the Proceedings track is meant for polished work with technical sophistication and clear impact in health, whereas the Findings track is non-archival, and its submissions should thus be judged on their likelihood to lead to a good discussion at the workshop. Please refer to the writing guidelines document for exemplary submissions in each track.

When should a paper switch to the Findings track?

Works submitted to the Proceedings track can be considered for conversion to the Findings track. If you are reviewing a Proceedings paper and think that the work would be strong enough to warrant consideration in the Findings track, make sure to select that option in the review form on OpenReview. Keep in mind that Findings track submissions are non-archival, and the main mode of presentation is a poster during the symposium. The primary goal of the Findings track is to generate productive and interesting discussion amongst conference attendees. Creative, insightful, and/or potentially divisive contributions (even if less fully developed or not performant) make great additions to the Findings track. As a result, a Findings paper need not meet the same level of technical sophistication as a Proceedings paper, as long as it meets the criterion of generating productive and interesting discussion.


For Reviewers

Main Responsibilities

  1. By Sept 1, 2024: Fill out the reviewer registration on OpenReview
  2. Sept 12, 2024: Check assigned submissions for expertise alignment and possible conflicts
  3. Sept 12, 2024 – Oct 4, 2024: Write reviews
  4. Oct 11, 2024 – Oct 25, 2024: Participate in discussions with authors and area chairs. Most author responses and discussions should occur before Oct 25, but we will allow comments on OpenReview after Oct 25 until decisions are finalized and released (Nov 1). 

Goals of the Reviewing Process

There are two complementary goals of the reviewing process. One goal is to provide actionable, constructive, and respectful feedback to the authors that will help them improve the work, both through positive recognition of the paper’s strengths and targeted commentary on opportunities for improvement. The other goal is to help meta-reviewers and organizers make decisions such that the highest-impact work is identified and accepted.

This year, ML4H also features a dedicated consensus-building discussion period. In this period, we expect all reviewers to engage with one another and with their meta-reviewer to work toward a consensus. We expect reviewers to consider the authors’ rebuttal, align with other reviewers on commonly identified strengths and weaknesses, and update their reviews in light of these discussions. The goal of this period is not to induce homogeneity, but rather to build consensus and provide a unified recommendation to the authors. If, after this period, you feel that your beliefs about the quality of the work are not reflected by the other reviewers, it is acceptable and encouraged to present these beliefs in your final review, provided you have made a good-faith effort to engage with the other reviewers and have been open to other perspectives or arguments. Because meta-reviewers are encouraged to seek consensus during this period, your meta-reviewer will ask you to justify your stance if you disagree with the majority of reviewers; failing to do so may result in your review being weighted less heavily in the meta-reviewer’s overall recommendation.

How can I write a good review?

There are numerous external resources that provide helpful advice on writing reviews. To get a sense of what a good review might look like, the 2017 ACL PC chairs blog contains some useful advice. To understand the difference between a negative and a constructive review, check out ICML 2023 Reviewer Tutorial or the “Review Content” section of NeurIPS 2020 reviewer guidelines (for inspiration only; follow the ML4H reviewer guidelines for any specific instructions).


For Area Chairs

Main Responsibilities

  1. By Sept 1, 2024: Fill out the AC registration on OpenReview
  2. Sept 12, 2024 – Oct 4, 2024: Check assigned submissions for expertise alignment and possible conflicts. Notify the organizers ASAP if you are not comfortable meta-reviewing any assigned papers.
    • You are not required to handle reviewer reassignments due to conflicts or expertise alignment. Reviewers have been instructed to report such cases directly to the organizers. 
  3. By Oct 11, 2024: Examine all reviews submitted across your papers and flag any that should be replaced with an emergency review due to poor quality.
    • ACs are not expected to read all papers in depth or develop their own opinions about each work. Instead, this stage is an opportunity to flag reviews that are unambiguously lacking sufficient content to motivate a recommendation (e.g., the review is one sentence), lacking sufficient expertise to motivate trust (e.g., the reviewer states “I don’t know enough about this to review”), or are incomprehensible or severely inconsistent (e.g., the free-text says that they love the work, with no cons identified, but the score is a strong reject). Flagged reviews will then be replaced by an emergency review from a new reviewer.
  4. Oct 11, 2024 – Oct 18, 2024: Lead discussions with authors and reviewers.
    • Drive the formal discussion period and align reviewers to form updated, consensus reviews for their submissions. Reviewers should confront any inter-reviewer disagreements, respond to any points raised in author rebuttals, and be encouraged to update their scores if other reviewers or authors address previously identified major concerns during this process. The final output of this process should be a set of unified reviews. Note that the intent of this period is not to produce homogeneous reviews, but rather to ensure that all reviews improve by taking into account the perspectives of other reviewers, and that (insofar as is possible) reviewers can align on a consensus decision.
  5. Oct 11, 2024 – Oct 25, 2024: Meta-reviewing period. This overlaps with the reviewer discussion period. You are welcome to submit meta-reviews early, or solicit additional comments from reviewers when necessary. 
  6. By Oct 26, 2024: Submit meta-reviews. 

Goals for Meta-reviewing Process

  1. Drive the consensus-building discussion period amongst reviewers such that reviewers
    • Align on an overall recommendation, such that (ideally) all final reviews and meta-reviews present a unified recommendation and aligned set of scores.
    • Appropriately consider and address any new information offered in the author response.
  2. Write a meta-review that should:
    • Summarize the opinions of the reviewers to aid in final decision making.
    • Provide a summary of the reviews’ salient points for the author, focusing specifically on what critical aspects would be needed to improve the submission for acceptance in the future and/or what key strengths led the meta-reviewer to recommend acceptance.
  3. Flag low-quality reviews to give reviewers feedback on their review quality.
  4. Highlight high-quality papers and reviews for consideration for paper awards (more information coming soon) and Top Reviewer awards.