The NIPS experiment is making waves. If you are unaware: for the last NIPS conference, the PC was broken into two independent halves, A and B, and a random selection of the submissions was assigned to both committees. The result: 57% of the papers accepted by committee A were rejected by committee B (and vice versa).
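To put that 57% in context, here is a minimal Python sketch of the fully random baseline: two committees that each accept papers independently at random. The ~25% acceptance rate is an assumption for illustration (the experiment's exact rate isn't stated here); under pure chance, the fraction of A's accepts that B rejects is just 1 minus the acceptance rate.

```python
import random

def disagreement(p_accept, n_papers=100_000, seed=0):
    """Fraction of papers accepted by committee A that committee B
    rejects, when both committees accept each paper independently
    at random with probability p_accept."""
    rng = random.Random(seed)
    accepted_by_a = 0
    rejected_by_b = 0
    for _ in range(n_papers):
        a = rng.random() < p_accept  # committee A's coin flip
        b = rng.random() < p_accept  # committee B's coin flip
        if a:
            accepted_by_a += 1
            if not b:
                rejected_by_b += 1
    return rejected_by_b / accepted_by_a

# Under pure chance, P(B rejects | A accepts) = 1 - p_accept.
# With the assumed ~25% acceptance rate, that is ~75%.
print(f"random-committee baseline: {disagreement(0.25):.0%}")
```

Against that baseline, the observed 57% means the committees do beat coin-flipping, but not by a comfortable margin.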
This is terrible for many reasons. Some reasons I have heard are careerist (our jobs and promotions depend on accepted papers at top conferences) and the negative impact on the Rate of Progression of Science. I’d like to discuss two more reasons:
The random rejection model creates needless work
If the average number of times a paper is submitted to a conference before it is accepted is 2-3, then we as a community are doing 2-3 times more work when it comes to writing & publishing: formatting papers to the will of each conference, reviewing papers, serving on PCs, reassuring students, consoling ourselves over beer. Should we be spending our limited time & resources on this? I do understand that papers can improve between submissions, but with higher-quality, constructive reviews, resubmitting once would be much less of a burden. And perhaps if people felt the system weren’t so random, we wouldn’t roll the dice so early and often. And perhaps we would have time to do a more thorough job as reviewers.
The random rejection model likely hurts underrepresented groups more
“Just resubmit your papers.” I worry that this non-solution disproportionately and negatively impacts those in underrepresented groups. It is known that members of underrepresented groups tend to suffer more from impostor syndrome, and that those suffering from impostor syndrome tend to take rejections at face value (“our work isn’t good enough”), whereas those in dominant groups tend to blame the rejectors (“they don’t know good work when they see it”). We also (should) know that small things can have big effects. One freshly minted professor emailed me:
I have personally experienced this during graduate school and I’m sure I and my students will experience this in future. A second or third year student puts in about one year worth of work with the hope that he/she will have his/her first top-tier (FOCS or ICML) conference paper soon. The rejections and bad reviews can essentially kill the confidence of that student. To some extent, this can also happen to the junior faculty.
One colleague worried about students dropping out of science altogether as a result of this. On a personal note, I have definitely changed my publishing behavior to favor journals, where, although the time lag can be great, the process comes with a discussion between author and reviewer via the editor. I have had only one ‘bad’ experience trying to get something published in a journal; I would say I’ve had a ‘bad’ experience with at least half of my conference submissions. I have taken to rolling the dice once, if at all.
Add this to our lack of double-blind reviewing in TCS, and we may be doubly hitting underrepresented populations, whose work is more likely to be dismissed by a dominant-group reviewer.
We should fix our conference system. Or just trash it altogether. I’d like to point out that the latter option would be better for the planet.
Another issue with the “just resubmit your papers” response: I have seen on PCs that rejection at another conference is treated as a demerit, making a paper less likely to get into the venue where it is currently under review. This is particularly harmful for borderline papers – the ones most at the mercy of randomness.
@Jeff Phillips
How does the PC know that a paper was rejected at a previous conference?
With 3-4 reviews/subreviews on each paper and the goal of getting experts to review it, there is a pretty good chance that someone who reviewed the paper before will be asked to do so again. There is also often someone on the current committee who was on the previous committee, or someone on the PC who knows the person who submitted the paper.
It’s a little disturbing that some of the reactions are “la la la, we knew this already, yay subjectivity!”