Opinion: The peer review system has flaws. But it's still a barrier to bad science

Posted on September 26, 2017

 

Democracy and scientific peer review have something in common: each is a "system full of problems but the least worst we have". That's the view of Richard Smith, a medical doctor and former editor of the illustrious British Medical Journal.

Wiley, a large academic publishing house, says that:

Peer review is designed to assess the validity, quality and often the originality of articles for publication. Its ultimate purpose is to maintain the integrity of science by filtering out invalid or poor quality articles.

Another publishing house, Springer, describes peer reviewers as being "almost like intellectual gatekeepers to the journal as they provide an objective assessment of a paper and determine if it is useful enough to be published".

The peer review system has received a fair amount of negative press in recent years. It has been criticised largely because it is not particularly transparent and depends on a small number of peer reviewers, an approach that can lend itself to cronyism. In addition it depends on trust: trust that reviewers will be fair and are willing to put sufficient time into a critical review. In this era of overworked academics being asked to do ever more, "sufficient time" is in short supply.

Despite these concerns, I agree with Smith: peer review is the "least worst" system available for assessing academic research and maintaining science's integrity. Having worked in academia for the past 30 years and currently serving as Vice President of the Academy of Science of South Africa, I believe peer review and the publication process are perhaps more important than ever in this era of "fake news" – and not just for scientists and academics. Thorough review and robust pre- and post-publication engagement by a scientist's peers are crucial if the average person in the street is to navigate a world full of pseudo-science.

Scientific truth is built on replication

One classic case of scientific fraud was the "Piltdown man" of 1912. Bone fragments supposedly from an archaeological site in England were presented as belonging to a human ancestor. The alleged discovery of an early hominid in England was comfortable for British and European scientists at the time, as it suggested that humans evolved in Europe. But the report was a source of controversy for many years.

While the Piltdown man has been recognised as a hoax since 1953, DNA evidence that the bones came from an orangutan and probably two human specimens was only published recently.

This case illustrates both the strengths and weaknesses of the scientific publishing system. The hoax was possibly published because it fitted with the theories of the time. The report was, however, hugely controversial; it was re-examined and, over time, shown by scientists to be fraudulent.

This is a good starting point for understanding how real science works: how research is peer reviewed and critically examined before what is reported can be considered scientific fact.

Science is never based on a single publication. Each publication is essentially a hypothesis: it will be read by other researchers, who will try to repeat or adapt what was done and then publish their own findings.

The peer review system is more complex than a reviewer simply rejecting or accepting a manuscript. Quite often a reviewer suggests experiments that the authors have overlooked, or different interpretations of some of the data. This means reviewers contribute significantly to improving the research and analysis.

There is no question that the reviews I receive from higher impact factor journals are, on average, more critical and more useful. The impact factor is calculated "by dividing the number of current citations to articles published [in the journal] in the two previous years by the total number of articles published in the two previous years".
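The calculation quoted above is a simple ratio; the sketch below illustrates it with invented figures (the numbers are hypothetical, not taken from any real journal):

```python
# Hypothetical illustration of the impact factor formula quoted above:
# citations this year to articles from the two previous years, divided by
# the number of articles published in those two years. Figures are invented.
citations_to_recent_articles = 1200  # citations in the current year
articles_in_previous_two_years = 400  # articles published in those two years

impact_factor = citations_to_recent_articles / articles_in_previous_two_years
print(impact_factor)  # 3.0
```

So a journal whose recent articles attract three citations apiece, on average, has an impact factor of 3.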

In fact, in some cases a strong review will send me and my collaborators back to the laboratory, and in so doing significantly strengthen our research. Remarkably, no fee is charged for these reviews. Yet scientists across the world do them willingly.

So scientific truth is based on a body of research that has been tried and tested by many researchers over time. You might ask, then, what value peer review offers – since, over time, an article that peer reviewers found suitable for publication and further debate may be debunked.

Why do we need peer review?

Peer review provides a filtering system. Studies that are not well conceived or performed will not be published. They will be filtered out either by a journal's editor or the reviewers. This means that what appears in the scientific literature is more likely to be of a higher quality. Readers of the peer reviewed literature know that it has been subjected to some level of critique. It is not merely the authors' opinion that what's being proposed in a particular article is the truth.

Editors and reviewers of peer-reviewed journals demand a particular style and level of experimental rigour. Results are substantiated with graphs, diagrams and in some cases photographs. Experiments are always repeated at least once and sometimes more often. Data is subjected to analysis, and in some cases statistical methods are used to test significance.

But how can the quality of a journal be measured in the first place?

A quick Google search throws up many hundreds of scientific journals. Many of these are likely to be predatory, charging authors publication fees without providing the sorts of publishing and editing services offered by legitimate journals.

An ordinary reader should find out which association, society or organisation publishes the journal. Alternatively, take a look at the editorial board.

Respected scientists do not attach their names to journals they do not respect. Any respected scientist in a discipline knows which are the "good" journals – a judgment they make by looking at the quality of the science in those publications.

Next time you read an interesting report or piece of scientific news, it's worth using the internet to check whether the report is in fact supported by peer reviewed literature that meets these standards. At the very least, do this before you share it on Facebook and add to the pseudo-science that already exists.

The best system for now

Until such time as there is a better system, peer review and the subsequent publication process, with experimental repetition, is the only source of substantiated evidence available. As with democracy, we all need to understand its strengths and weaknesses.

 

Prof Brenda Wingfield is the Vice President of the Academy of Science of South Africa, DST-NRF SARChI chair in Fungal Genomics and Professor of Genetics at the University of Pretoria.

This article was originally published on The Conversation. Read the original article.

 

 
