May 10, 2018

1. Rationale of this “Community Proposal”

The main findings of the R2 community feedback were that, of the respondents:
~74% believe in-person PC meetings are important
~50% feel that PC members are overloaded
~50% would like to see ISCA tied to a journal
~50% believe integrity and fairness are paramount

This proposal uses this feedback to suggest a cross-conference change to the reviewing process to decrease the load of the reviewers, while increasing the number of good papers that are accepted per year to top architecture conferences.

The hypothesis of this proposal is that a very large fraction of papers being reviewed are resubmissions, many with minor changes. A paper gets different reviewers every time,
who provide different feedback. This situation frustrates authors and reviewers.

This proposal’s goal is to help get these papers accepted (or rejected) by providing a robust Revision Process. The load of the reviewers will decrease because they will only review the incremental changes since the last review, and the total number of papers resubmitted will go down.

A revision-based model gives authors the opportunity to address reviewer comments and be re-evaluated by the same reviewers. Moreover, it opens a path to journal publication.

2. Detailed Proposal

The proposal is to coordinate the review process for the ISCA, HPCA, and MICRO conferences. Papers submitted to a conference may be (1) accepted (possibly subject to some revision as already done in our community), (2) invited for a larger revision, or (3) rejected. A revised paper will be re-submitted with highlighted changes plus a summary of changes, the old paper, and the old reviews.

In the following, we outline an implementation, where a revised paper can be re-submitted to the deadlines of one of the next two conferences in the calendar year, and will be reviewed by the same set of reviewers.

Each conference has a Program Committee (PC) and an External Review Committee (ERC). Some PC members of one conference serve as ERC for the next two conferences, to re-review revised papers. PC members review new submissions. ERC members review revised submissions. The PC always meets in-person. The ERC meets via a conference call. The ERC reviewers need to agree on whether the authors have adequately addressed the review comments for the revised papers. The ERC reviewers make a recommendation to the PC to accept or reject a revised paper, but the final decision needs to be endorsed by the current PC. The PC Chair oversees the re-review of the revised papers.

The PC Chair may choose to invite additional ERC members to broaden the pool of expertise in the ERC to assist with the review process of new submissions. The PC Chair may choose to delegate the oversight of the re-review of a revised paper to an expert PC member. The PC member may be asked to present the revised paper at the in-person PC meeting so that the entire PC is still engaged in the selection of the program.

This proposal satisfies the key findings of the R2 community feedback listed above. First, it decreases reviewing load, as PC members only need to review new submissions, and ERC members review incremental changes. Second, the PC reviewers always meet in-person to make the final decision on a paper. Third, the quality and fairness of the review process improve because the authors have the opportunity to revise their paper and be re-evaluated by the same set of reviewers.

To prevent the blurring of the identity of individual conferences, each conference still retains full authority to accept/reject revised papers. Also, a given conference may encourage and/or discourage certain topics. This allows for technical diversity to thrive in our growing field.

Other implementations are possible. They may involve using only a subset of the same reviewers, and/or re-submitting only to the same conference, to appear one year later.

3. Key Assets of this Proposal

1) It is inclusive of our entire community, helping all our conferences and our overall growing community of authors thrive, and allowing technical diversity to flourish.

2) It reduces the reviewer load, improving reviewer quality and reducing reviewer fatigue.

3) It retains in-person PC meetings, which enhance the integrity of the reviewing process.

4) It supports a revision-based review model, ideally with re-review by the same set of reviewers, improving review fairness.

5) It paves the way for journal publication of all papers accepted at any of the three architecture conferences.

4. Further Action Items

This proposal welcomes feedback from the broad computer architecture community. Such feedback will determine how this proposal eventually shapes up.

This proposal requires cooperation between the multiple conferences. We fully expect the different sponsoring organizations to cooperate closely for the sake of ensuring a thriving and growing computer architecture community.

The IEEE CS and ACM Publications Boards will be approached for the creation of a jointly sponsored ACM/IEEE journal to publish papers accepted at any of the three conferences. This will increase the visibility of our community.

– Antonio Gonzalez, Daniel A. Jimenez, Hsien-Hsin Sean Lee, John Kim, Josep Torrellas, Lixin Zhang, Onur Mutlu, Per Stenstrom and Vijay Janapa Reddi


In addition, please leave your comments (optionally anonymous) for everyone else to see.

8 blog posts on “Proposal to Improve the Computer Architecture Conference Review Process”

  • July 9, 2018 at 2:34 pm

    [Comment cross-posted from SIGARCH Blog]

    As noted in the “History” paragraph, a subset of the TCCA committee provided extensive critique of this concept document in May when it was first offered on the TCCA mailing list, in the hopes that the concept’s authors would provide greater clarity on what the proposal means and sketch how key implementation challenges might be addressed before opening the concept for public comment. The current concept is insufficiently detailed to enable careful analysis of its impact on architecture reviewing processes. Reproduced below is the original critique in its entirety, along with a link to a Google Doc version that allows public comment.

    https://docs.google.com/document/d/1SKqo-QGEj2Z464mF8zcwfv65HFqqqqEOyanwXYYOXQY/edit?usp=sharing

    ==

    Commentary on the substance of the “Paper Sharing Proposal”
    Thomas F. Wenisch
    May 27, 2018

    This analysis discusses the proposal to implement paper revisions and journalization for HPCA, ISCA, and MICRO by passing papers and reviews between the conferences. The document considers the proposal text that was sent to the TCCA EC on May 11, 2018, and is referred to throughout as the “paper-sharing proposal”.

    tl;dr:

    I share the goal of archiving our top conferences in a common journal. However, the present proposal (1) fails to identify target revision and accept rates, which makes the proposal impossible to rigorously analyze; (2) does not appear to meaningfully reduce reviewing load; and (3) faces daunting implementation challenges that, collectively, appear to me to be show-stoppers. I think there are better ways to get to the goal of a common journal, without the serious problems caused by passing papers and reviews between committees.

    I do not support publicizing the proposal until at least items [2][4][7][8] have been addressed.

    [1] Goal: Archive ISCA, HPCA, MICRO in a common journal

    I strongly share one of the top-line goals of the “paper-sharing” proposal. I believe HPCA, ISCA, and MICRO should all be archived in a common journal.

    Archiving our papers in a journal has significant advantages: (1) For our colleagues in Asia and Europe, there are still numerous entities that do not consider conference papers to be publications. (2) In Asia, impact factors—which Thomson Reuters will only create for journals—remain an important evaluation criterion that directly affects faculty compensation. I personally find this distasteful, but that is the world in which we live. (3) Even within the U.S., outside of CS, only journals “count”. For example, in NIH proposals, publications at our conferences carry no weight with the study panels. (4) Most journals will allow unlimited pages after the review process is complete; we could do this as well, if we want. (5) Papers can appear in the digital library immediately upon an accept decision; hence, the time from submission to publication can be substantially shorter than our current conference process, especially for papers that are accepted without revision or with minor revision.

    There are also advantages to having a *common* journal: (1) A common journal will likely have a higher impact factor. (2) A common journal will counterbalance any perception of difference in prestige among our conferences. (3) To provide an impact factor, T-R requires a journal to be distinct from any conference proceedings; placing two or more conferences in the same journal addresses this requirement. (4) With a single journal that includes only top-tier conferences, we may be able to convince Emery Berger to include the journal, rather than the individual conferences, in csrankings.org, which would remove one of the remaining public sources that reinforces HPCA as “lesser among equals”.

    There are also significant challenges to both a journal and a common journal: (1) The most significant impediment will be dealing with the copyright and financial models of the ACM and IEEE. Fortunately, our community is following rather than leading in this regard—several communities are already working with these organizations to find a solution, though it is not yet clear what the solution will be or how long it will take to get there. ACM is well ahead of IEEE on this issue. (Note that, if HPCA went for a journal by itself, this problem goes away, since it is only sponsored by IEEE.) (2) Both organizations require a revision process to qualify a publishing venue as a journal. Notably, these rules require major and minor revision to be *decisions* with explicit guidance on the required changes, so a revision process like that of MICRO 48 would not qualify. (3) A journal must be distinct from any conference proceeding. I don’t fully understand this rule, but I know it was an impediment for SIGMETRICS to call their journal “Proceedings of SIGMETRICS”; they had to rename it to the rather unattractive “POMACS – PACM Series on Measurement and Analysis of Computer Systems”.

    Some notes: (1) Nothing above requires passing papers between conferences. It just requires memoranda of understanding between the sponsoring bodies (ACM, IEEE, and the various SIGs/TCs) and for each conference to individually meet the requirements. (2) It does not require all conferences to join at the same time; several communities (e.g., PL and METRICS) have staggered implementation of a common journal. Hence, we don’t need a “big bang” agreement for all the governing bodies to move at the same time.

    [2] Conceptual Problem: No Analysis of Revision and Accept Rates

    An enormous problem of the “paper-sharing” proposal is that it provides no guidance or analysis of the impact of the envisioned review process on either revision or acceptance rates. *** I will vote against publicly posting any proposal that fails to include such an analysis, since it will lead to confusion as to what the proposal really means. ***

    As I outlined in previous writing, if overall accept rates remain the same as today, it's pretty easy to demonstrate (we all know Little's Law) that either (1) very few (< 10%) papers will receive "revise" outcomes or (2) a large fraction of "revise" outcomes will lead to rejects. (1) implies that revisions have negligible impact on review load. (2) is a terrible outcome – we are raising the burden on authors to address revisions and then rejecting the paper anyway, which is a bad experience for the authors and will either (a) lead to a resubmission, undermining the goal of reducing load, or (b) forever ban work from appearing in our venues, potentially due to a bad review assignment, which seems patently unjust. The R2 proposal assumed revisions would be relatively rare (5-8% of submissions) and hence would not significantly impact overall review load, and that analysis was explicitly spelled out in the companion document.
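
    To make this arithmetic concrete, here is a minimal back-of-the-envelope sketch of the steady-state accounting. The rates below are illustrative assumptions chosen only to show the two regimes, not survey data or proposed targets.

    ```python
    # Steady-state accounting for accept/revise outcomes (illustrative numbers only).
    def outcomes(submissions, accept_rate, revise_rate, revise_success_rate):
        """Return (direct accepts, revise outcomes, accepts via revision,
        revise-then-reject outcomes, overall accept rate)."""
        direct_accepts = submissions * accept_rate
        revisions = submissions * revise_rate
        revised_accepts = revisions * revise_success_rate
        revised_rejects = revisions - revised_accepts
        overall = (direct_accepts + revised_accepts) / submissions
        return direct_accepts, revisions, revised_accepts, revised_rejects, overall

    # Regime (1): overall accept rate stays ~18% and revisions mostly succeed,
    # so very few papers can receive a "revise" outcome in the first place.
    print(outcomes(1000, accept_rate=0.10, revise_rate=0.09, revise_success_rate=0.9))
    # -> 100 direct accepts, 90 revise outcomes, 81 accepted via revision, ~18.1% overall

    # Regime (2): many "revise" outcomes but the same ~18% overall accept rate,
    # so most revisions must end in rejection.
    print(outcomes(1000, accept_rate=0.05, revise_rate=0.40, revise_success_rate=0.33))
    # -> 50 direct accepts, 400 revise outcomes, 132 accepted via revision,
    #    268 revised-then-rejected, ~18.2% overall
    ```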

    On the other hand, from some of the traffic on the TCCA EC list (notably from Josep and Antonio), I think some feel that the proposed revision model will improve paper quality and allow a higher overall accept rate, perhaps closer to 30%. That would allow a large number of revise outcomes with a high success rate on revisions. I find it likely that a higher accept rate would reduce reviewing burden whether or not we introduce a revision process. But, to me, that is a secondary effect – higher accept rates have critical implications for our conferences beyond reviewing quality. If we want higher accept rates, we should discuss that directly.

    The broader problem with the proposal is not whether we end up with the same or higher accept rates, it is that *it does not address the issue*. As my writing above shows, you can arrive at drastically different conclusions on what the proposal implies based on your assumptions on accept and revise rates. If we don't explicitly lay out assumptions, we will get totally random feedback from the community based on whether they assume accept rates stay the same or increase. (Or, many readers may fail to think about the issue). To me, that is totally unacceptable; we can't post half-baked documents that mislead the community.

    We need to provide guidance on how many papers we expect to accept, shepherd, revise, and reject. If we had explicit targets for accept and revise rates, and the expected success rate of revisions, we would be able to carry out an analysis to show the actual impact on review load and we would provide guidance to future PC chairs on the intent of the proposal to ensure consistency and continuity.

    The R2 proposal assumed that accept rates would remain unchanged from today's review system, as there was no clear evidence in the R2 survey results that the community wants accept rates to change.

    [3] Conceptual Problem: Too Few Resubmits for this to Affect Load

    A key, stated hypothesis of the "paper-passing" proposal is that resubmissions that can ultimately be accepted after revision account for a very large fraction of the review load. Based on my experience as the MICRO PC chair, I believe this hypothesis to be false – I believe resubmissions account for perhaps 20-25% of all submissions, and, of those, half should never be accepted to our top-tier venues and should be redirected (via reject outcomes) to a lower tier. If I am right, then this proposal will not have a significant effect on review load.

    Fortunately, this particular hypothesis is testable using historic data. If the PC chairs of 1-2 years of conferences made accepted and rejected paper and author lists available for analysis, we could figure this out. But, the analysis is a lot of work and has significant privacy and CoI implications.
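
    As one illustration of how such an analysis might be run, the sketch below estimates the resubmission fraction from two hypothetical per-conference submission lists using fuzzy title matching plus an author-overlap check. The file names, CSV columns, and thresholds are assumptions for illustration; a real study would also have to deal with the privacy and CoI issues noted above.

    ```python
    # Estimate what fraction of one conference's submissions are resubmissions of
    # papers submitted to an earlier conference (all inputs are hypothetical).
    import csv
    from difflib import SequenceMatcher

    def load_submissions(path):
        # Assumed CSV columns: "title" and "authors" (space-separated names).
        with open(path) as f:
            return [(row["title"].lower(), row["authors"].lower())
                    for row in csv.DictReader(f)]

    def is_resubmission(paper, earlier_papers, title_threshold=0.8):
        title, authors = paper
        for old_title, old_authors in earlier_papers:
            title_similarity = SequenceMatcher(None, title, old_title).ratio()
            shares_author = bool(set(authors.split()) & set(old_authors.split()))
            if title_similarity >= title_threshold and shares_author:
                return True
        return False

    earlier = load_submissions("conf_A_submissions.csv")  # hypothetical file
    later = load_submissions("conf_B_submissions.csv")    # hypothetical file
    resubs = sum(is_resubmission(p, earlier) for p in later)
    print(f"Estimated resubmission fraction: {resubs / len(later):.0%}")
    ```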

    I have said this before, but I'll say it again: To improve reviewing load, we need to make it easier to handle the papers we *reject*, not the papers we accept.

    [4] Conceptual Problem: Proposal does not address what happens to a "Reject"

    Natalie identified this problem. The implications of a "Reject" are unclear: can the paper be resubmitted as new to the next venue? Will it get fresh reviews? Or is it banned forever from all 3 of our conferences? The latter is a terrible idea – reviewers make mistakes. If the paper can be immediately resubmitted as new, then the revision process again has very little impact on review burden.

    More generally, how to distinguish a new paper from a previous reject is tricky. Some kind of guidelines are needed.

    In any event, the problem for me is, again, that the proposal does not address the issue, which makes it impossible to analyze its implications. This question has to be addressed.

    [5] Conceptual Problem: 35% of the community do not want a revision process

    The R2 survey shows that 40% of respondents favor a revision process. But, 35% explicitly indicated they do not. Of this fraction, some may be unhappy with past implementations of revisions, but many may simply favor that their paper be reviewed as "new", because they have had the experience of getting their paper assigned to an unreasonable, inexpert, or inappropriate reviewer. It would be naive to assume that future review assignments will be perfect.

    In our existing model, a "bad" review assignment costs time, but it does not forever doom a paper. The "paper-sharing" proposal does not address what an author is expected to do if they think a review is unreasonable. A "Reject" outcome may simply lead to the status quo, depending on how "rejects" are handled. But, a "revise-reject" sequence costs *even more time* than the current system.

    My position is that review continuity should not be automatic – authors should not be locked in to bad reviewers by a reject outcome. If authors can opt-in or opt-out, we can satisfy both those that want continuity and those that do not. For reject decisions, I would be comfortable with giving the authors the *option* of re-review by the same reviewers, but would vote against any proposal that makes it mandatory/automatic. For "revise" decisions, I only support proposals where the success rate on revisions is expected to be high. If a paper is rejected after revision, then it should get a fresh set of reviewers if it is subsequently submitted again.

    [6] Conceptual Problem: PC independence vs. overriding Revise outcomes

    Natalie identified this problem as well. There is a fundamental incompatibility between PC independence and revise-accept outcomes passed between conferences. The proposal somehow tries to have it both ways – a revise-accept sequence comes from the original reviewers, but the subsequent PC can throw away the decision. I don't see how that is a fair process. On what basis does the next PC discard the decision, if they did not review the paper? If they do review the paper, what was the point of the revision process?

    But, the alternative, that the PC chair and new PC are bound by the previous reviews, is also problematic. What if the new PC chair thinks the originally assigned reviewers are not the best experts to review the paper? What if there are so many revise outcomes from the previous conference that it will consume the entire schedule of the new conference?

    Either way, things are problematic. If we do proceed with this proposal, I think it is better to be fair to the authors and bind the new PC to the decision of the previous reviewers. But, an even better way to solve these problems is not to create them in the first place; they all arise as a consequence of passing papers between conferences.

    [7] Conceptual problem: “Common journal” and “paper sharing” only possible with common oversight

    As proposed, the common journal will publish papers from HPCA, MICRO, and ISCA. As noted above, we expect this will be implemented via a new journal jointly published by ACM and IEEE CS. However, today, HPCA is sponsored only by IEEE CS. It is problematic for ACM to publish papers over which it has no editorial oversight. Conversely, IEEE CS presently owns sole copyright over all of HPCA’s content. Potential issues here could be resolved by moving HPCA to joint ACM/IEEE sponsorship like ISCA and MICRO. I am too young to know why the present situation arose to begin with, but resolving it and moving to joint sponsorship is likely a prerequisite to getting agreements and oversight in place for a joint journal.

    These sponsorship/copyright issues also pose a problem in passing papers between conferences. Today, the jointly sponsored conferences operate under memoranda of understanding (MOU) that grant copyright for the conference content to ACM or IEEE CS in alternating years. These MOU do not anticipate passing papers submitted for review to one venue to be published in another. They will need to be rewritten. Since copyright issues are intertwined with the ACM and IEEE business models, we should expect that it will take quite some time to get agreements in place.

    [8] Implementation Problem: At least 30 days for revision

    I will oppose any proposal that encourages or allows authors less than ~30 days to address revisions. That is, the time from the notification date of a "revision" decision until the *next* opportunity to submit the revision should be at least ~30 days. I believe the intent of the authors of this proposal is to have a 30-day minimum. However, they have not demonstrated a conference calendar to show that this is feasible. I will not support a proposal until someone demonstrates a sample calendar to ensure we will be able to meet this minimum.

    [9] Implementation Problem: Visa and travel problems

    Some authors elect to submit papers to a conference based on the conference location. [My students were highly motivated to submit to MICRO in Hawaii.] Venue has very real implications for those who must obtain visas for travel. For example, Persian students studying in the U.S. cannot travel outside the U.S. If a large fraction of papers are passed from one venue to the next, we harm the careers of these students. For example, suppose HPCA is in the U.S., but the next ISCA and MICRO are overseas. A student who gets a revise outcome for an HPCA submission must now choose between withdrawing their paper or having someone else present their work. I recognize that this situation already arises with rejected papers, but the stakes are *drastically higher* for a "Revise" outcome, where the expectation that the paper will be accepted after revision is much higher. Very few authors would choose to withdraw and wait for another U.S.-based conference; most would proceed with revision and give up the opportunity to present. That's a pretty severe career penalty for the affected students.

    [10] Implementation Problem: E.U. GDPR

    I'm sure you have all received as many GDPR opt-in emails this week as I have. Unfortunately, the GDPR imposes major barriers to the idea of passing papers between conferences.

    First, GDPR regulates the transfer of personal information (including names and email addresses) between entities. So, an E.U. university hosting a HotCRP server will need to abide by these regulations to transfer papers, author information, or reviews to another HotCRP server. Moreover, the *source* of the transfer *remains liable* for the data protections of the *destination*. So, if I pass you my HotCRP data, my University could be on the hook if you suffer a data breach. This issue is likely solvable. For example, we could all use review software hosted by ACM. But it will require significant effort by ACM and IEEE to ensure our processes comply with the law. We need only one disgruntled author to start filing complaints and some organization could end up on the hook for enormous fines.

    Second, GDPR includes the "right to be forgotten". From what I have read, this rule pretty clearly applies to our review systems. So, if an author from the E.U. gets a bad set of reviews, they can invoke their "right to be forgotten" to require that any connection between their identity and a bad review be deleted, preventing the reviews from passing to the next conference. If our intent is to pass "reject" outcomes between conferences, this poses a pretty serious impediment. As far as I understand, this right cannot be waived via a click-through agreement.

    (We could avoid GDPR by disallowing access to our review systems from the E.U. I doubt anybody would support that approach…)

    Incidentally, these privacy laws are the reason the R2 committee decided against proposing any central mechanism to track reviewer performance and/or black-list low quality reviewers. The GDPR "right to be forgotten" is explicitly designed to prohibit such black-lists. The HCI community abandoned such a reviewer database over privacy concerns.

    [11] Implementation Problem: Managing Conflict of Interest

    At submission time, we know the PC of a conference, so authors can mark their conflicts. The PC chair has complete information (knows all authors and all reviewers) and can police the CoI process. There is a lot of evidence that the way we manage this today is pretty good.

    If we pass papers between conferences, things become much more complicated. We won't know the future PCs in advance, so authors can't easily mark them as conflicts. Once a PC chair passes a paper and/or reviews to the next PC chair, the first chair loses the ability to police CoI. A simple miscommunication could lead reviewer identity to be leaked. If the next PC has no role in the decision process, this problem goes away—the review process can be completed with the previous PC chair and no new conflicts arise. But, if the new PC has any editorial role (e.g., it can view the reviews from the previous PC), then we need to carefully manage conflict of interest for the entirety of all PCs involved.

    [12] Implementation Problem: Review system logistics

    This is a pretty dumb problem, but it is a problem nonetheless. HotCRP simply isn't set up to enable papers and reviews to be passed across conferences without enormous effort. My students had to go through an enormous manual effort to move papers from the ASPLOS HotCRP instance to the ASPLOS Shadow PC instance – and they made numerous mistakes. VLDB and SIGMOD abandoned their effort to share reviews due in part to this logistical challenge. We can solve this problem by paying Eddie Kohler to fix it for us, but it will take time and money. The implication is that it will take quite some time to roll out this proposal (I estimate more than 1 year).
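
    To illustrate the kind of plumbing involved, here is a minimal sketch of remapping one exported paper record and its reviews into a shape a destination review-system instance might import. The JSON field names are purely illustrative and are not HotCRP's actual export schema; in practice the hard, error-prone parts are re-linking reviewer accounts, conflicts, topics, and attachments.

    ```python
    # Illustrative migration of paper + review records between two review-system
    # instances. All field names are hypothetical, not any real tool's schema.
    import json

    def migrate_paper(record, reviewer_email_map):
        """Map one exported paper record to a destination-side record."""
        return {
            "title": record["title"],
            "authors": record["authors"],
            "abstract": record["abstract"],
            "previous_decision": record.get("decision", "unknown"),
            "reviews": [
                {
                    # Reviewer identities must be re-linked to accounts that exist
                    # on the destination server; missing mappings are a typical
                    # source of orphaned reviews or leaked identities.
                    "reviewer": reviewer_email_map.get(r["reviewer_email"]),
                    "score": r.get("overall_merit"),
                    "text": r.get("comments_for_authors", ""),
                }
                for r in record.get("reviews", [])
            ],
        }

    with open("source_export.json") as f:                 # hypothetical export file
        papers = json.load(f)
    email_map = {"reviewer@uni.edu": "reviewer@uni.edu"}  # built by hand today
    migrated = [migrate_paper(p, email_map) for p in papers]
    with open("destination_import.json", "w") as f:
        json.dump(migrated, f, indent=2)
    ```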

    Endorsements

    [see Google Doc version]

    • July 14, 2018 at 5:28 am

      Addressing Tom’s comments on this “Community Proposal” that he feels most strongly about ([2][4][5][6][7][8]):

      >>[2]: fails to identify target revision and accept rates,

      Of course. The goal of this proposal is not to legislate “how many papers to accept,
      shepherd, revise, and reject”. The committee that may work on fleshing out this
      proposal (and I suspect individual PC chairs) will provide guidelines on these issues.

      The proposal openly says that one problem is the large number of resubmissions and that it
      wants to increase the number of good papers that are accepted. This implies that a
      large fraction of the papers need to receive a “revise”. And because “the authors
      have the opportunity to address reviewer comments”, many will result in accepts.

      My opinion: as a community, we will have to accept more papers (and also increase
      the acceptance rates), because the community is growing, and there is a growing number
      of authors that feel they are unable to publish.

      >> [4] Proposal does not address what happens to a “Reject”

      This is a great problem to get ideas from the community on, and for a committee to work through.
      It is not something that we want this proposal to be tied to.
      By the way, you already provided great feedback here.

      >> [5] 35% of the community do not want a revision process

      The actual numbers from the R2 survey:

      Usefulness of revision-based review process
      ○ Positive (39% of authors, 40% of PC/ERC members)
      ○ Negative (27% of authors, 35% of PC/ERC members)

      Again, you provided great feedback.

      >> [6] PC independence vs. overriding Revise outcomes

      The proposal already takes a position on this tension. It gives some leeway to the
      PC chair of a particular conference to accept/reject revised papers.

      But this is another point to get feedback from the community. It may be that many in
      the community feel strongly about this.

      >> [7] “paper sharing” only possible with common oversight

      Clearly, we cannot wait to get all the IEEE/ACM arrangements
      in writing before asking for feedback on this proposal.

      >> [8] Implementation Problem: At least 30 days for revision

      With only 3 deadlines to submit a paper per year (separated by 4 months), this proposal
      guarantees > 30 days for revision.
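
      As a rough check of that arithmetic, the sketch below computes the revision window under one hypothetical calendar; the dates and the assumed review period are illustrative, not actual conference deadlines.

      ```python
      # Revision window under a hypothetical calendar (dates are illustrative).
      from datetime import date, timedelta

      deadline_a = date(2019, 4, 1)        # hypothetical Conference A deadline
      deadline_b = date(2019, 8, 1)        # hypothetical Conference B deadline, ~4 months later
      review_period = timedelta(days=75)   # assumed submission-to-notification time

      notification_a = deadline_a + review_period
      window = deadline_b - notification_a
      print(f"Revision window: {window.days} days")  # 47 days with these assumptions
      # With a 90-day review period the window would be ~32 days, still above the
      # 30-day minimum under these assumed dates.
      ```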

      Thank you

      Josep Torrellas

  • July 10, 2018 at 6:20 am

    A short note on this proposal:

    1) Some rejected papers are resubmitted to the next deadline as is (ignoring reviewers’ feedback). Others are substantially changed and can be very different. Having reviews follow papers in the former case is appropriate and in the latter case may not be. How do we differentiate?
    2) A major problem in our process is low quality reviews. We’re all familiar with the two-line “reject, no novelty here” review for a paper that ends up being incredibly influential. There is so much noise in our review process that outcomes are highly randomized. Allowing lousy reviews to follow papers will make our process worse, not better. Allowing people that write lousy (hostile, terse, and/or wrong) reviews to continue writing lousy reviews is also undesirable, but that is a (somewhat) independent issue.

    There are many other subfields in computer science and other fields that have dealt with the same problems we face. Are there any successful examples of the “reviews follow papers” proposal?

  • July 10, 2018 at 8:51 pm

    I think Doug Burger (almost) hit the nail on the head. I have two additional comments:

    * On point (1), in addition to figuring out how to deal with a quick flip vs. a big revision, it is really important for program chairs to determine how to assign reviewers. For quick flips, you may want new reviewers; for major revisions, you probably want a mix of new and old. I disagree with Tom that short revision periods should not be allowed; often one resubmits because an especially vocal reviewer missed the point and the rebuttal was ignored/ineffective.

    * Point (2) is key. We can’t allow an inappropriate review to haunt a paper; that would be a disaster. So perhaps, when allowing a paper to roll over, similarly to a rebuttal, authors can argue that a specific review should be disregarded? Or a paper discussion lead could also flag bad reviews?

    On the question of other communities doing similar things: VLDB and SIGMOD tried cross-conference rollover but abandoned it; I am not sure whether it was for reasons similar to those Tom pointed out. http://www.vldb.org/pvldb/pvldb-faq.html

    It is great to see serious effort being put into improving our community’s processes!

    • July 14, 2018 at 5:42 am

      There is a lot of good feedback in these discussions.

      1) On bad reviews following a paper:

      Luis Ceze gave a great suggestion: when resubmitting a paper, the authors can argue
      that one or more of their past reviews should be discounted.

      2) On flagging bad reviews:

      Andreas Moshovos gave a great suggestion: have an ombudsman (perhaps part of
      the PC) who can be alerted by authors when there is an unreasonable review. The
      ombudsman would be a neutral person who determines whether the reviewer is opposing
      the paper for non-technical reasons and, if so, discards the review.

      Josep Torrellas

  • July 14, 2018 at 9:21 pm

    I do not know what the problems are at large, but I do know what problems my group has been facing with an abrupt increase in frequency in the past 2 years. Mirroring your experience, we have seen an increase in reviews that are “sloppy” at best, sometimes self-contradicting, or, worse, that misrepresent claims or make false statements in order to reject (e.g., “you need to do X” when X is already in the submission), often in the first round, where only 2 other people may check these claims if they have the time.

    I cannot see a perfect way to fix this, but as Josep communicated, one thing to try would be to have a third party, anonymous to all, look at such cases after being alerted by the authors. There is no guarantee this is going to work, but I think it’s worth a try. Our current model provides no mechanism to raise such issues without risking annoying everyone involved.

    • July 14, 2018 at 9:23 pm

      And sorry, to clarify: the above mechanism could be the way to request exclusion of a reviewer from the process. This way an unfortunate assignment of reviewers does not doom a paper to oblivion forever.

  • July 16, 2018 at 3:18 pm

    I think Doug’s concerns are valid. Here are my thoughts on how we can address them.

    1) Differentiate between quick flip vs big revision:
    I think the only way this can be done is to ask the authors to identify the paper as a revision or a new submission. There should be some guidelines on what constitutes a “new” submission — e.g., X% change compared to any prior submission (a rough sketch of such a check appears after this list). This would be similar to guidelines provided on journal extensions. If the paper is a “big revision,” perhaps the authors should submit a “cover letter” (seen by the PC chair only and by others as necessary) that describes the big revision and why it should be considered a new submission.

    2) Poor review quality:
    Given the large number of submissions, it is nearly impossible for the PC chair to track down poor reviews. I think one approach is to assign an “associate editor” (AE) from the program committee to each paper, whose role is to read the other reviews and then generate a meta-review. This is done in some other communities (e.g., the CHI conference) and provides some mechanism for quality control of the reviews. Based on my experience in submitting a few papers to CHI, this does not solve all problems with poor or harsh reviews, but it definitely eliminates the two-line reject/no-novelty reviews. This can be more work, but the AE can help find external reviewers and thus reduce actual review workloads while still properly representing the external reviewers’ views at the PC meeting (which is another issue — external reviewers’ opinions are often overshadowed by PC members physically present at the PC meeting).
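
    As a minimal sketch of how the “X% change” guideline from point (1) could be made mechanical, the snippet below compares the flat text of the old and new submissions and reports the changed fraction. The comparison method, file names, and threshold are illustrative assumptions, not a vetted policy.

    ```python
    # Rough "fraction changed" check between a prior submission and a resubmission.
    from difflib import SequenceMatcher

    def change_fraction(old_text: str, new_text: str) -> float:
        """Fraction of word-level content that does not match the prior version."""
        similarity = SequenceMatcher(None, old_text.split(), new_text.split()).ratio()
        return 1.0 - similarity

    old = open("prior_submission.txt").read()   # hypothetical extracted paper text
    new = open("new_submission.txt").read()     # hypothetical extracted paper text
    frac = change_fraction(old, new)
    threshold = 0.30                            # the "X%" cutoff, purely illustrative
    print(f"Changed: {frac:.0%}")
    print("treat as new submission" if frac >= threshold else "treat as revision")
    ```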
