Reviewer Guidelines

Learning on Graphs Conference, 2023

Thank you for agreeing to serve as a reviewer for LoG 2023!

Contact Information

The Area Chair (AC) assigned to a paper should be your first point of contact for that paper. You can contact the AC by leaving a comment in OpenReview with the AC as a reader. (PCs will also be listed as readers, but will not be notified.)

If you encounter a situation that you are unable to resolve with your AC, please contact the Program Chairs at pcs@logconference.org. Please refrain from writing to the Program Chairs at their own email addresses.


Important Dates

  • Abstract Submission Stage: 12 July - 11 August
  • Paper Submission Stage: 11 August - 21 August
  • Reviewer Bidding Stage: 21 August - 27 August
  • Reviewer Assignment Stage: 28 August - 31 August
  • Reviewing: 1 September - 30 September
  • Author Rebuttal Stage: 7 October - 20 October
  • Reviewer-Author Discussion Stage: 20 October - 29 October
  • Reviewer-AC Discussion Stage: 29 October - 7 November
  • AC-PC Discussion Stage: 7 November - 13 November
  • Author Notifications: 13 November

Main Tasks

Fulfilling your responsibilities as a reviewer in a high-quality and timely manner is critical to the success of the review process. Here is a list of key tasks for reviewers:

(1) Preparation:

  • Read and agree to abide by the LoG code of conduct.
  • LoG 2023 is using OpenReview. Please make sure that your OpenReview profile is up to date. If you have changed or plan to change your email address, please update the address set as “preferred” in your OpenReview profile and confirm it. It is crucial that we are able to reach you quickly. We will send most emails from OpenReview (noreply@openreview.net). Such emails are sometimes marked as spam (or classified as Updates in Gmail). Please check these folders regularly and whitelist noreply@openreview.net.
  • Note that your assignments and tasks will appear in the reviewer console in OpenReview: URL.
  • Read what constitutes a conflict of interest and how to declare conflicts in your profile. This policy is the same as for NeurIPS 2022.

(2) Check paper assignments:

  • As soon as you are notified of papers to review, you should log in to OpenReview to check for conflicts and confirm that the papers fall within your area of expertise.
  • If you do not feel qualified to review a paper that was assigned to you, please communicate this to your AC right away.
  • These assignments may change during the first week as some reviewers and ACs request re-assignments. Please watch for notification emails from OpenReview.

(3) Writing the review:

  • Detailed information on what points to address in your review can be found in the “Review Form” section below, together with the “Review Examples”.
  • We know that serving as a reviewer for LoG is time-consuming, but the community depends on your high-quality reviews to uphold the scientific quality of LoG.
  • Please make your review as informative and substantiated as possible; superficial, uninformed reviews are worse than no review as they may contribute noise to the review process.
  • Make sure to flag papers with ethical concerns. You may refer to the NeurIPS ethics guidelines.
  • The LaTeX template and page limit should be followed. If you notice severe violations of the required format (e.g., papers that exceed the page limit or change the font size), please immediately report them to your AC.
  • When writing your review, please keep in mind that after decisions have been made, the reviews and meta-reviews of accepted papers, as well as your discussions with the authors, will be made public (reviewer and AC identities will remain anonymous). For rejected papers, making this information public is optional for the authors.

(4) Respond to author rebuttals: 7 October - 7 November.

  • Author Rebuttal (Oct 7 - Oct 20): Authors will have until 20 October to write a rebuttal. You can always comment on OpenReview, but during this phase you cannot change your score.
  • Author-Reviewer Discussions (Oct 20 - Oct 29): From this point on, you can change your scores. Carefully read the authors’ rebuttals, respond to them, and change your scores as you see fit. Please engage in an open exchange with the authors and the other reviewers. Reading the other reviews may be helpful.
  • Reviewer-AC Discussions (Oct 29 - Nov 7): Please discuss the paper, the reviews, and the authors’ responses with the other reviewers and with the Area Chair. The Area Chairs will be writing their meta-reviews and eliciting further comments and clarifications from the reviewers. Please help your AC write their meta-review.
  • Participating in discussions is a critical part of your role as a reviewer, and we depend on you to take the discussions seriously. If your evaluation of the paper has changed, please say so on OpenReview and adjust your score.

(5) ACs make initial accept/reject recommendations with PCs: 7 November - 13 November

  • Your workload during this period should be light, but if ACs come back to you with additional questions, please respond promptly.

(6) Paper accept/reject notifications: 13 November.


Reviewing a submission: step-by-step

“Review the papers of others as you would wish your own to be reviewed”

A review aims to determine whether a submission will bring sufficient value to the community and contribute new knowledge. The process can be broken down into the following main reviewer tasks:

  1. Reading the paper: Be sure to invest enough time to fully understand the paper, and look up related work that will help you evaluate it.

  2. While reading, consider the following:

    • Objective of the work: What is the goal of the paper? This could, e.g., be addressing an application or problem, drawing attention to a new application or problem, or introducing and/or explaining a new theoretical finding. Different objectives call for evaluating different aspects of the paper.
    • Strong points: Is the submission clear, technically correct, experimentally rigorous, and reproducible? Does it present novel findings (e.g., theoretical, algorithmic, etc.)?
    • Weak points: Is it weak in any of the aspects listed above?
    • Be mindful of potential biases and try to be open-minded about the value and interest a paper can hold for the entire LoG community, even if it may not be very interesting for you.
  3. Answer three key questions for yourself to make a recommendation to Accept or Reject:

    • What is the specific question and/or problem tackled by the paper?
    • Is the approach well motivated, including being well-placed in the literature?
    • Does the paper support the claims? This includes determining if results, whether theoretical or empirical, are correct and if they are scientifically rigorous.
  4. Write your initial review, organizing it as follows (below are example reviews):

    • Summarize what the paper claims to contribute. Be positive and generous.
    • List strong and weak points of the paper.
    • Additionally, clearly state one or two key reasons for your score.
    • Provide supporting arguments for your score.
    • Ask the authors questions that would help clarify your understanding of the paper or provide the additional evidence you need to be confident in your assessment.
    • Provide feedback to improve the paper. Make it clear why the points are important and how they would impact your score. Do not request unhelpful experiments.
  5. General points to consider:

    • Be polite in your review. This is crucial for having a productive discussion that provides all insights necessary to evaluate the paper and improve it.
    • Explicitly state where you are uncertain and what you do not quite understand. The authors may be able to resolve this in their response.
    • Submissions with significant contributions in either technical or empirical aspects should be given high priority for acceptance.
    • Be precise and concrete. For example, include references to back up any claims, especially claims about novelty and prior work.
    • Provide constructive feedback.
    • Don’t reject a paper just because you do not find it interesting. The research community is big, and you should consider that the paper might be valuable to someone else.
    • Be careful when you raise concerns about novelty: many reviewers regularly mistake complexity, difficulty, and technicality for novelty. Consider reading this blog: Novelty in Science | Perceiving Systems Blog (perceiving-systems.blog)
  6. Engage in discussion: During the discussion phase, reviewers, authors, and Area Chairs engage in asynchronous discussion. Authors can revise their submissions to address concerns that arise. It is crucial that you actively engage and respond, i.e., you should be able to respond to comments/requests within 3 business days.

  7. Provide final recommendation: Update your review, taking into account the new information collected during the discussion phase and any revisions to the submission. Remain open to changing your initial rating, whether to a more positive or a more negative one.


Review Form

Below is a description of the questions you will be asked on the review form for each paper and some guidelines on what to consider when answering these questions. Feel free to use the LoG paper checklist included in each paper as a tool when preparing your review (some submissions may have the checklist as part of the supplementary materials). Remember that answering “no” to some questions is typically not grounds for rejection. When writing your review, please keep in mind that after decisions have been made, reviews and meta-reviews of accepted papers and opted-in rejected papers will be made public.

  1. Main Review: Write your review comments here. Be sure to:

    • Summarize the contributions of this work.
    • List strong and weak points of the paper.
    • Clearly state your recommendation (accept or reject) with key reasons.
    • Provide supporting arguments for your recommendation.
    • Ask questions to authors to help you clarify your understanding of the paper.
    • Provide additional feedback with the aim to improve the paper.
    • Please refer to the reviewer guidelines (this page).
  2. Overall Recommendation: Please provide an overall score for this submission.

    • 10: Strong accept. An award-worthy paper that presents a theoretical breakthrough or a novel solution to an important research problem with solid experiments.
    • 8: Clear accept. A strong paper with novel ideas, flawless evaluation, resources, and reproducibility, and no ethical concerns.
    • 6: Weak accept. A good paper whose merits (e.g., significance) slightly outweigh its weaknesses (e.g., poor presentation).
    • 5: Weak reject. A fair paper whose weaknesses (e.g., limited technical contribution) slightly outweigh its merits (e.g., solid experiments).
    • 3: Clear reject. A paper with moderate to major weaknesses, e.g., technical flaws, weak evaluations, inadequate reproducibility, and incompletely addressed ethical considerations.
    • 1: Strong reject. A must-reject paper with trivial/already known results, technical flaws, wrong evaluations, or unaddressed ethical considerations.
  3. Confidence: Please provide a score for your assessment of this submission to indicate how confident you are in your evaluation.

    • 5: You are absolutely certain about your assessment. You are very familiar with the related work and checked the math/other details carefully.
    • 4: You are confident in your assessment but not absolutely certain. It is unlikely, but not impossible, that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work.
    • 3: You are fairly confident in your assessment. It is possible that you did not understand some parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
    • 2: You are willing to defend your assessment, but it is quite likely that you did not understand the central parts of the submission or that you are unfamiliar with some pieces of related work. Math/other details were not carefully checked.
    • 1: Your assessment is an educated guess. The submission is not in your area or the submission was difficult to understand. Math/other details were not carefully checked.
  4. Ethical Flag: If there are ethical issues with this paper, please flag the paper for an ethics review. For guidance on when this is appropriate, please review the NeurIPS ethics guidelines.


Other Roles

During the review process you will be working with:

  • Area Chairs (ACs). ACs are the principal contact for reviewers during the whole reviewing process. ACs are responsible for recommending reviewers for submissions, ensuring that all submissions receive quality reviews, facilitating discussions among reviewers, writing meta-reviews, evaluating the quality of reviews, and making decision recommendations.

Best Practices

  • Be thoughtful. The paper you are reviewing may have been written by a first-year graduate student submitting to a conference for the first time; you don’t want to crush their spirits.
  • Be fair. Do not let personal feelings affect your review.
  • Be useful. A good review is useful to all parties involved: authors, other reviewers and ACs. Try to keep your feedback constructive when possible.
  • Be specific. Do not make vague statements in your review, as they are unfairly difficult for authors to address.
  • Be flexible. The authors may address some points you raised in your review during the discussion period. Make an effort to update your understanding of the paper when new information is presented, and revise your review to reflect this.
  • Be timely. Please respect the deadlines and respond promptly during the discussion. If you cannot complete your review on time, please let the AC know as soon as possible. An early submission of a review is highly appreciated!
  • If someone pressures you into providing a positive or negative review for a submission, please notify Program Chairs right away (pcs@logconference.org).
  • If you notice unethical or suspect behavior, please notify your AC right away.

Policies

Please make sure to review the policies in the LoG 2023 Call for Papers.

Confidentiality

You must keep everything relating to the review process confidential. Do not use ideas, code, or results from submissions in your own work until they become publicly available. Do not talk about or share submissions with anyone without prior approval from the Program Chairs. Code submitted for reviewing cannot be distributed or used for any other purpose.

Double-blind reviewing

The reviewing process is double blind at the level of reviewers and ACs (i.e., reviewers and ACs cannot see author identities) but not at the level of Program Chairs (Program Chairs can see identities of everyone). Authors are responsible for anonymizing their submissions (we recommend using https://anonymous.4open.science/ to anonymize GitHub repositories). Submissions may not contain any identifying information that may violate the double-blind reviewing policy. This policy applies to any supplementary or linked material as well, including code. If you are assigned a submission that is not adequately anonymized, please contact the corresponding AC. Please do not attempt to find out the identities of the authors for any of your assigned submissions (e.g., by searching on arXiv). This would constitute an active violation of the double-blind reviewing policy.

Formatting instructions

As a reminder, full-paper submissions are limited to 9 main pages and extended abstracts to 4 main pages. This includes all figures and tables. Additional pages containing references, an appendix, and the LoG 2023 paper checklist are allowed. In general, we are lenient with minor formatting violations (e.g., a spillover to page 10 or tables that are not in the LoG style), as long as these violations can be easily rectified in the final version. If you find violations that are not easily rectified without causing other presentation issues, please flag them to your AC. Some submissions may have included the LoG 2023 checklist in their supplementary material by mistake, so you may find the checklist there (to be viewed at your discretion).

Dual submissions

For the full 9-page paper archival submissions track, LoG does not allow submissions that are identical or substantially similar to papers that are in submission to, have been accepted to, or have been published in other archival venues. Submissions that are identical or substantially similar to other LoG submissions fall under this policy as well; all LoG submissions should be distinct and sufficiently substantial. Slicing contributions too thinly is discouraged, and may fall under the dual submission policy. If you suspect that a submission that has been assigned to you is a dual submission or if you require further clarification, please contact the corresponding AC.


Review examples

Below are two reviews, copied verbatim from previous ICLR conferences, that adhere well to our guidelines above: one for an “Accept” recommendation, and the other for a “Reject” recommendation. Note that while each review is formatted differently according to each reviewer’s style, both reviews are well-structured and therefore easy to navigate.

Example 1: Recommendation to accept

##########################################################################

Summary:

The paper provides an interesting direction in the meta-learning field. In particular, it proposes to enhance meta learning performance by fully exploring relations across multiple tasks. To capture such information, the authors develop a heterogeneity-aware meta-learning framework by introducing a novel architecture–meta-knowledge graph, which can dynamically find the most relevant structure for new tasks.

##########################################################################

Reasons for score:

Overall, I vote for accepting. I like the idea of mining the relation between tasks and handle it by the proposed meta-knowledge graph. My major concern is about the clarity of the paper and some additional ablation models (see cons below). Hopefully the authors can address my concern in the rebuttal period.

##########################################################################

Pros:

  1. The paper takes one of the most important issues of meta-learning: task heterogeneity. For me, the problem itself is real and practical.

  2. The proposed meta-knowledge graph is novel for capturing the relation between tasks and address the problem of task heterogeneity. Graph structure provides a more flexible way of modeling relations. The design for using the prototype-based relational graph to query the meta-knowledge graph is reasonable and interesting.

  3. This paper provides comprehensive experiments, including both qualitative analysis and quantitative results, to show the effectiveness of the proposed framework. The newly constructed Art-Multi dataset further enhances the difficulty of tasks and makes the performance more convincing.

##########################################################################

Cons:

  1. Although the proposed method provides several ablation studies, I still suggest the authors to conduct the following ablation studies to enhance the quality of the paper: (1) It might be valuable to investigate the modulation function. In the paper, the authors compare sigmoid, tanh, and Film layer. Can the authors analyze the results by reducing the number of gating parameters in Eq. 10 by sharing the gate value of each filter in Conv layers? (2) What is the performance of the proposed model by changing the type of aggregators?

  2. For the autoencoder aggregator, it would be better to provide more details about it, which seems not very clear to me.

  3. In the qualitative analysis (i.e., Figure 2 and Figure 3), the authors provide one visualization for each task. It would be more convincing if the authors can provide more cases in the rebuttal period.

##########################################################################

Questions during rebuttal period:

Please address and clarify the cons above

#########################################################################

Some typos:

(1) Table 7: I. no sample-level graph -> I. no prototype-based graph

(2) 5.1 Hyperparameter Settings: we try both sigmoid, tanh Film -> we try both sigmoid, tanh, Film.

(3) parameteric -> parametric

(4) Table 2: Origninal -> original

(5) Section 4 first paragraph: The enhanced prototype representation -> The enhanced prototype representations

Updates: Thanks for the authors’ response. The newly added experimental results address my concerns. I believe this paper will provide new insights for this field and I recommend this paper to be accepted.

Example 2: Recommendation to reject

Review: This paper proposes Recency Bias, an adaptive mini batch selection method for training deep neural networks. To select informative minibatches for training, the proposed method maintains a fixed size sliding window of past model predictions for each data sample. At a given iteration, samples which have highly inconsistent predictions within the sliding window are added to the minibatch. The main contribution of this paper is the introduction of a sliding window to remember past model predictions, as an improvement over the SOTA approach: Active Bias, which maintains a growing window of model predictions. Empirical studies are performed to show the superiority of Recency Bias over two SOTA approaches. Results are shown on the task of (1) image classification from scratch and (2) image classification by fine-tuning pretrained networks.

+ves:

  • The idea of using a sliding window over a growing window in active batch selection is interesting.
  • Overall, the paper is well written. In particular, the Related Work section has a nice flow and puts the proposed method into context. Despite the method having limited novelty (sliding window instead of a growing window), the method has been well motivated by pointing out the limitations in SOTA methods.
  • The results section is well structured. It’s nice to see hyperparameter tuning results; and loss convergence graphs in various learning settings for each dataset.

Concerns:

  • The key concern about the paper is the lack of rigorous experimentation to study the usefulness of the proposed method. Despite the paper stating that there have been earlier work (Joseph et al., 2019 and Wang et al., 2019) that attempt mini-batch selection, the paper does not compare with them. This is limiting. Further, since the proposed method is not specific to the domain of images, evaluating it on tasks other than image classification, such as text classification for instance, would have helped validate its applicability across domains.

  • Considering the limited results, a deeper analysis of the proposed method would have been nice. The idea of a sliding window over a growing window is a generic one, and there have been many efforts to theoretically analyze active learning over the last two decades. How does the proposed method fit in there? (For example, how does the expected model variance change in this setting?) Some form of theoretical/analytical reasoning behind the effectiveness of recency bias (which is missing) would provide greater insights to the community and facilitate further research in this direction.

  • The claim of 20.5% reduction in test error mentioned in the abstract has not been clearly addressed and pointed out in the results section of the paper.

  • On the same note, the results are not conclusively in favor of the proposed method, and only is marginally better than the competitors. Why does online batch perform consistently than the proposed method? There is no discussion of these inferences from the results.

  • The results would have been more complete if results were shown in a setting where just recency bias is used without the use of the selection pressure parameter. In other words, an ablation study on the effect of the selection pressure parameter would have been very useful.

  • How important is the warm-up phase to the proposed method? Considering the paper states that this is required to get good estimates of the quantization index of the samples, some ablation studies on reducing/increasing the warm-up phase and showing the results would have been useful to understand this.

  • Fig 4: Why are there sharp dips periodically in all the graphs? What do these correspond to?

  • The intuition behind the method is described well, however, the proposed method would have been really solidified if it were analysed in the context of a simple machine learning problem (such as logistic regression). As an example, verifying if the chosen minibatch samples are actually close to the decision boundary of a model (even if the model is very simple) would have helped analyze the proposed method well.

Minor comments:

  • It would have been nice to see the relation between the effect of using recency bias and the difficulty of the task/dataset.
  • In the 2nd line in Introduction, it should be “deep networks” instead of “deep networks netowrks”.
  • Since both tasks in the experiments are about image classification, it would be a little misleading to present them as “image classification” and “finetuning”. A more informative way of titling them would be “image classification from scratch” and “image classification by finetuning”.
  • In Section 3.1, in the LHS of equation 3, it would be appropriate to use P(y_i/x_i; q) instead of P(y/x_i; q) since the former term was used in the paragraph.

=====POST-REBUTTAL COMMENTS========

I thank the authors for the response and the efforts in the updated draft. Some of my queries were clarified. However, unfortunately, I still think more needs to be done to explain the consistency of the results and to study the generalizability of this work across datasets. I retain my original decision for these reasons.