This opinion was written in response to "Opinion: Peer-reviewed study affects response to gender bias".
In January, Science Advances published our extensive study analyzing the peer-review outcomes of 350,000 manuscripts from 145 journals, which found no evidence of gender bias after manuscript submission. Just a month earlier, Hagan and colleagues published in mBio a similar, albeit smaller, study analyzing the peer-review outcomes of 108,000 manuscript submissions to 13 journals of the American Society for Microbiology (ASM). Their study found a consistent trend for manuscripts submitted by female corresponding authors to receive more negative outcomes than those submitted by men. Both projects analyzed six years of submission data, which are available only to journal publishers, yet came to different conclusions.
It is difficult to probe the possible causes of gender inequalities in journal peer review and editorial processes, for a number of reasons. There are serious barriers to robust, cross-journal experimental studies that would test causal hypotheses by manipulating information and contexts about manuscripts, authors, and referees. Retrospective studies are therefore the only option, and they are far from easy because there is no infrastructure for data sharing between publishers and journals. Although this makes the results limited and hard to generalize, I believe it is imperative to examine peer review with extensive, cross-journal data in order to avoid overemphasizing individual cases. This is what we tried to do in our recent Science Advances article.
While we knew our results could be controversial, I am surprised at how Ada Hagan misinterpreted our research, and I would like to comment on the three points on which she based her opinion.
The lack of randomization in journal selection is a weakness of our study, but we never claimed to have used a randomized sampling strategy. Moreover, the size, distribution, and quality of our dataset are unparalleled, and applying different statistical models to the same data increased the accuracy and robustness of our analysis. Previous research of this kind has been conducted only on individual journals or on small cohorts of similar journals, never at such a cross-domain scale. It is also important to note that we limited our sample to Web of Science-indexed journals in order to ensure comparability and to avoid adding further controls to our models (e.g., controls for possible differences in editorial and peer-review standards). This meant excluding a small number of journals, but it did not affect the distribution of journals by research area, so our dataset still contained a broad representation of scientific journals. I would also point out that small studies claiming to have found clear traces of gender inequality were never accused of relying on a non-randomized, unrepresentative sample of journals, even though they were clearly limited to just one or a few journals.
I agree with Hagan that we were not able to reconstruct the fate of rejected manuscripts that were later resubmitted elsewhere, which would have let us gauge whether women were delayed in the publication process by multiple rejections. However, this would require a dataset covering thousands of journals from multiple publishers, which is currently unattainable. We did examine the screening stage, that is, whether reviewers and editors were more demanding of women's manuscripts, and found no significant negative effects. Additionally, I believe that choosing to start from individual manuscripts, rather than from aggregated gender groups as in many previous studies, including Hagan's, enabled us to control for the confounders we had data on while assessing the effect of authors' gender at every step of the peer review process.
That's a good point. We only had desk-rejection data for a sub-sample of journals, as some manuscript submission systems recorded this information while others did not, and we reported this in the preprint version of the paper published in February 2020. The results suggest that manuscripts with a higher proportion of women among the authors were less likely to be desk rejected in health/medical and social science journals, and more likely to be desk rejected in physics journals. We originally included this analysis in the manuscript, but the reviewers suggested removing it because the focus of our study was peer review.
In conclusion, we did not claim to examine all the causes of the inequalities and biases affecting women in academia, and we contextualized our findings in the conclusions to avoid misinterpretation. Our aim was to look for traces of bias against women in how peer review treats manuscripts submitted to a sample of journals from different research areas. In the end, we found no evidence of such bias. In my opinion, we should continue to do our best to publish rigorous studies, even when they are controversial. Ultimately, as scientists, we believe in the power of evidence.
Flaminio Squazzoni is Professor of Sociology at the University of Milan, Italy, where he leads the BEHAVE Lab. From 2014 to 2018 he headed a major EU project on peer review (PEERE).