One problem is that we have constructed a system that rewards the publication of positive results and punishes negative results, published or unpublished. Punishes as in makes or breaks the entire career of a young researcher if the negative result arrives when they are up for tenure. Rewards as in ensures research funding and professional advancement as long as positive results keep flowing out.
Another fundamental problem that peer review handles terribly is confirmation bias. Science in general has a serious problem with confirmation bias. If one embarks on a study seeking evidence for some causal linkage associated with some phenomenon in a general population where the phenomenon occurs, one can always find exemplars that support one's hypothesis. Lacking actual work to replicate the results using sound methodology (e.g. double blinded and/or conducted with competent statistical analysis, something still as rare as hen's teeth in science in general, because doing statistics correctly in a complex problem is difficult, not easy, and certainly not as easy as the one or two undergraduate stats courses that are probably all the researcher has ever taken), confirmation bias can not only worm its way into the literature, it can come to dominate entire fields, since a significant fraction of the scientists who do the reviewing for both publication and grants are "descended" from one or two original researchers and their papers. It can take decades for this to be discovered and worked out in the wash.
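To make the point concrete, here is a minimal sketch (my own illustrative example, not from any particular study) of why one can "always find exemplars": run enough comparisons on data that is pure noise and some of them will clear the usual p < 0.05 bar just by chance. The function names and the normal approximation to the t-test are mine, chosen to keep the sketch self-contained.

```python
# Hypothetical illustration: test 100 candidate "causal linkages" in data
# that contains no effect at all and count how many look "significant".
import random
import statistics
import math

random.seed(1)

def two_sample_t(a, b):
    """Welch's t statistic and a rough two-sided p-value (normal tail approximation)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    t = (ma - mb) / math.sqrt(va / len(a) + vb / len(b))
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, p

hits = 0
n_tests = 100
for _ in range(n_tests):
    exposed = [random.gauss(0, 1) for _ in range(50)]   # no real effect anywhere
    control = [random.gauss(0, 1) for _ in range(50)]
    _, p = two_sample_t(exposed, control)
    if p < 0.05:
        hits += 1

print(f"{hits} of {n_tests} null comparisons came out 'significant' at p < 0.05")
# Typically a handful do, every run -- and those are the ones that get written up.
```

The point is not that the arithmetic is hard; it is that without replication and preregistered hypotheses, the handful of chance "hits" are exactly the results that survive into the literature.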
Peer review works better in some disciplines than others. In math it works well, because there is literally nothing up a publisher's sleeve -- fraudulent publication is essentially impossible, and even mistaken publication is relatively rare and usually confined to math so difficult that even the reviewers have a hard time following it. Physics and the very hard sciences are also fortunate in that it works decently (although less than perfectly), at least where there is competition and a proper critical/skeptical eye is applied to results new and old. There, a mix of laboratory replication and strong requirements of consistency usually keeps one out of the worst trouble.
A simple rule of thumb is: the more a result relies on population studies, especially ones conducted with any kind of selection process, or worse, a selection process plus actual modification of the data according to some heuristic or correction process, where the study itself is conducted from the beginning to confirm some given hypothesis, the more likely it is that the result (when published) is bullshit that will eventually, possibly decades later, turn out to be completely wrong. If there are enough places for a thumb to be subtly placed on the scales, and the owner of the thumb has any sort of vested or open interest in the outcome, it is even odds or better that a teensy bit of pressure will be applied, quite possibly without the researcher even intending it. Confirmation bias is not necessarily "fraud" -- it is just bad science, science poorly done. A small simulation of such a thumb on the scales is sketched below.
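The sketch below is again my own hypothetical illustration (the heuristic of dropping the two most "inconvenient" observations is invented for the example): an analyst applies a "correction" only when the honest analysis fails to confirm the hypothesis, and the false-positive rate quietly climbs well above the nominal 5%, with no outright fraud anywhere.

```python
# Hypothetical illustration of a thumb on the scales: a data "correction"
# applied only when the honest result isn't significant.
import random
import statistics
import math

random.seed(2)

def p_value(sample, mu0=0.0):
    """One-sample z-test against mean mu0 (normal approximation)."""
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / math.sqrt(n))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def honest(sample):
    return p_value(sample)

def thumb_on_scale(sample):
    # "Correct" the data by discarding the two lowest values (the ones that
    # most oppose a positive effect) -- but only if the honest test failed.
    p = p_value(sample)
    if p >= 0.05:
        trimmed = sorted(sample)[2:]
        p = min(p, p_value(trimmed))
    return p

trials = 2000
honest_hits = sum(honest([random.gauss(0, 1) for _ in range(30)]) < 0.05
                  for _ in range(trials))
biased_hits = sum(thumb_on_scale([random.gauss(0, 1) for _ in range(30)]) < 0.05
                  for _ in range(trials))

print(f"honest analysis: {honest_hits / trials:.1%} false positives")
print(f"thumb on scale:  {biased_hits / trials:.1%} false positives")
# The nominal 5% error rate inflates substantially, without any fabricated data.
```

Every extra degree of freedom in how the data are selected or "corrected" is another place for that thumb to rest, which is exactly why such studies deserve the skepticism described above.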
There is a move afoot to do something about this. We know that it happens. We know why it happens. We know a number of things that we can do to reduce the probability of it happening -- for example, requiring the open publication of all data and methods contemporaneously with any paper produced from them, permitting absolutely anybody to look at them and see if the results hold up.