As noted previously, science is an ongoing, cumulative process that builds upon prior results and findings. Consensus develops as results are repeatedly confirmed with evidence and alternative ideas are shown to be inaccurate. An idea, called a hypothesis, becomes well established when it is accepted by a majority of respected scientists in the field. Scientific consensus, that is, acceptance of established results, forms around what scientists judge to be the most probable hypothesis. The ideas on which there is consensus form the starting point for additional research. Sometimes this research expands and solidifies the points of consensus; other times, as was the case with Albert Einstein's work, new research overturns a fundamental idea on which the scientific community had reached consensus.
Building broad scientific consensus takes time: years, decades, even centuries. Scientists have to observe, investigate, gather data, examine other research, and repeat their experiments. They have to consider alternative explanations and debate their research and conclusions. They have to explain their data and methods. They also have to publish their results so that others can examine their data, methods, and conclusions.
How scientists report their findings is critical to the process of building consensus. The gold standard is publication in a peer-reviewed journal. Before a piece of scientific work is published in such a journal, it is reviewed by other knowledgeable scientists in the field who are neither personal associates nor collaborators of the scientist submitting the work. The reviewers critique the work, analyzing and evaluating its methods, reasoning, and results, and recommend that it be accepted for publication, returned to the author for revision, or rejected outright. The journals themselves choose the reviewers, who volunteer their time and typically remain anonymous. If you are an active scientist who publishes, part of the expectation is that you will spend some of your time as a peer reviewer. The reviewers lend their own credibility to the work and increase the respectability of the journal. The peer-review process works to minimize the effects of human bias and error in science.
Just because a paper has been peer reviewed does not mean it is correct beyond all doubt. Sometimes the problem is personal bias on the part of the reviewers, although using several different reviewers can help guard against this. Sometimes the problem is the science itself, which initially looked solid but over time turns out to be flawed.
This is what happened with a widely cited paper by Andrew Wakefield, a British researcher, who in 1998 published a paper in The Lancet, a highly respected journal, suggesting a link between autism and the measles-mumps-rubella (MMR) vaccine typically given to children between 12 and 18 months of age. Wakefield based his concerns on a study he and other researchers had conducted of just 12 children. The paper, along with a press conference held by Wakefield, made headlines around the world and may have contributed to a resurgence of childhood diseases once nearly wiped out, as some parents, frightened by the findings, declined to have their children vaccinated.
In 2010, however, The Lancet retracted the Wakefield paper amid charges of conflict of interest and ethical misconduct by Wakefield, as well as questions about the research itself, since no other laboratory had been able to replicate his results. Wakefield's medical license was also revoked.
Unfortunately, peer review is not a good way to catch scientists who deliberately fabricate or manipulate data. Reviewers can only assess whether a scientist used reasonable techniques and whether the conclusions follow logically from the data reported. Fraudulent data can be detected only by other researchers carefully replicating the work. This is why the reporting of research methods and raw data is an important part of the scientific process.
While peer review is the gold standard, there are other avenues open to scientists wishing to report their work. They can publish in journals that are not peer-reviewed, self-publish reports, and, more recently, publish on websites and blogs. However, the absence of peer review should raise serious concerns about the work and the conclusions being reported. You should check the authors' backgrounds to find out why the report was not published in a standard peer-reviewed journal, and look for similar work by others that has been published in a peer-reviewed journal. As science journalist Maggie Koerth-Baker writes, peer review is “a first line of defense. It forces scientists to have some evidence to back up their claims, and it is likely to catch the most egregious biases and flaws…Journals that don’t have peer review do tend to be ones with an obvious agenda” (boingboing).
Checking a scientist’s background and credentials is also critical in determining expertise. Scientists may have broad interests but expertise in only a very narrow field. Koerth-Baker notes how some journalists covering the Fukushima nuclear accident asked nuclear engineers, experts on nuclear power plants, to comment on the human health consequences of the disaster. Just because a scientist has expertise in one area does not mean he or she can speak with authority in other areas, even those that seem closely related.