gourimoko
The peer review process does not prove the validity of a study, else publication would be the end of the inquiry rather than just a step along the way. The peer review process probably can best be described -- if it is actually working the way it is supposed to -- as representing a "not unreasonable" standard, which is nevertheless miles from proving that it is actually true.
Q-Tip, I think you're talking about a few different things here interchangeably, none of which are things I've asserted. I've not asserted that peer review proves anything. Not validity, let alone the study itself (I've actually rejected the latter outright). The peer-review process does, however, speak to the quality of a study submitted for publication. Calling this a "not unreasonable" standard is a pretty odd way of trying to nullify the intent of the process, which is to provide quality controls demonstrating some degree of verification (not necessarily infallible verification) of the methodology of the work submitted.
No one is arguing that peer-review attempts to prove or disprove a study; in fact, that's the entire point of my last post to you about methodology, so I'm not sure why you're reiterating this as though it speaks against my argument?
But even that low bar is a best-case scenario. A core (and, from the perspective of the public, unavoidable) problem is the reliability of that peer-review process itself. That would include not only the competency of the particular peers who approve publication, but also their biases.
How does one measure either the competency or bias of an unknown individual involved in the peer-review process?
Most of us probably remember reading about the pure gibberish papers that were submitted and peer-approved. The only way we found out about them was that the guy who did it fessed up.
Yes, that incident was well-reported. I'm not sure how that incident, however, entails the peer-review process being of little value?
And in the case of bias, it is entirely possible that the particular peers who review a particular article have the same biases as the authors, and the process is therefore of not much value in terms of ferreting out biases or other errors.
This, again, assumes that people are incapable of making rational judgements without falling victim to their own biases.
I don't see why this assumption makes much sense? Particularly with respect to openly published scientific studies, as though there is complete homogeneity of thought within specific fields?
Nor does this make sense if we consider the fact that various scientists and academics, even when they agree with a paper's conclusion, will disagree with the methodology behind its findings, and have an incentive to say as much to raise their own profile as well as that of their institutions and programs?
It seems as though you're using this concept of "bias" as a catch-all to disregard the scientific process as invalid in its totality, due to the potential for bias wherever consensus might exist within science but not within politics? And not to speak to your motive, which I think is sincere, but the reasoning doesn't seem sound; hence my confusion.
I say that because you mentioned the case of Hyung-in Moon; but that incident doesn't actually speak to your argument with respect to bias - or even a real failure of the peer review process itself - but to the specific failures of specific editors at journals who were reprimanded or in one case even fired for their lack of scrutiny and an over-reliance on the software used to submit and review journals (which had a security exploit that both Moon and Chen used).
In fact, this case and Chen's case seem to speak directly against your argument, since both Moon and Chen were caught, both admitted to what they'd done, and the only reason they were caught was the open nature of the process.
If we go over the Moon situation a bit more closely, it doesn't really line up with what you're describing. Moon's publications were not politically motivated or ideologically controversial in any way whatsoever, and it would be unreasonable to think that reviewers or editors would be biased in favor of his journal submissions. There were no ideologically biased reviewers in Moon's case, or in the more widespread incident of Peter Chen's case, which was similar but more complex. Instead, these individuals impersonated other academics, essentially stealing their identities, and posed as them to submit reviews using fake email addresses.
Moon and Chen were responsible for the vast majority of the reviews submitted, and their personal friends were responsible for the remainder. The security flaw was in the ScholarOne software and web portal used for submission, and the lax security of the various journals allowed the studies to be published at first. But the self-policing nature of peer review, as well as Moon's original journal editor, tipped enough people off to get both Moon's and Chen's papers retracted.
So I think your assertions here are incorrect and based on false assumptions about how peer review actually works and what its intended purpose is, and that's evidenced by your own examples of fraud that were "ferreted out" by the people involved in peer review itself. I also don't find any reason to believe that there is a massive underlying bias that prevents rational judgement and critical analysis of scientific study -- at least, none that you've demonstrated here.
Hope this makes sense.
There is a massive gulf between the ability to prove or discover that a study is wrong, and proving that it is right.
Q-Tip, again, for the 5th time: no one is arguing that we should be proving that a study's findings or conclusions are correct by evaluating its methodology. Please re-read the paragraph you're quoting; the first sentence says nothing about the conclusions or findings of studies being right or wrong:
"In the example given on the last page, we do not need to replicate the study in order to invalidate the methodology as being unsound." -gourimoko
This is how one evaluates the methodology of a scientific study. You don't necessarily try to falsify the study's findings or conclusions; instead, you look at the methodology used to derive those findings to test whether or not it is scientifically and logically sound, or whether there are experimental or sampling flaws that are obvious to you but were not obvious to, or controlled for by, the authors. This happens routinely -- more often than not -- and it's why one goes through the process of peer review in the first place.
The former may be within the capability of a sufficiently informed, diligent layperson, if the error, omission, or mistake in the study is apparent on its face. The latter is not, and that's really the point I'm bringing out here -- how do we know that a given study is true or accurate?
We're not trying to establish truth by evaluating a study. Again, no one is asserting that we should be trying to determine whether or not a particular study's conclusions or findings represent the truth, but rather that the methodology used within the study was sound and thus the study itself is reasonably valid.
And again, the inability of a layperson to prove that a particular scientific conclusion is false obviously does not make it true. It is entirely possible that the error either 1) requires a level of expertise that the layperson does not possess, or 2) simply is not discernible on the face of the study. It may be buried somewhere in the data itself, or in how the data was gathered, and very often will also depend on the honesty of the scientist/expert, or assistants, in question. Which, again, we cannot determine.
I will reiterate this again: no one is asserting that we should be trying to determine whether or not a particular study's conclusions or findings represent the truth, but rather that the methodology used within the study was sound and thus the study itself is reasonably valid.
From my last post: "In the example given on the last page, we do not need to replicate the study in order to invalidate the methodology as being unsound." -gourimoko
I would refer you back to my earlier post, post #72 in this thread; the top 4 paragraphs of that post cover exactly this question in detail.
No, I'm not saying we're incapable of understanding -- I'm saying that in 99.99% of the cases, we're incapable of validating/verifying the conclusions,
No one is suggesting you should be doing this... This isn't how peer-review works, and laymen or the public aren't likely to be doing this either.
though we may be capable of refuting them - or at least pointing out flaws that put those conclusions into doubt.
Which... is what you should be trying to do. Pointing out flaws in the methodology to put the study's validity in doubt; not testing the specific conclusions themselves. That's the point!