
Scientific thought. Definitely not social sciences pt 2.

The peer review process does not prove the validity of a study, else publication would be the end of the inquiry rather than just a step along the way. The peer review process probably can best be described -- if it is actually working the way it is supposed to -- as representing a "not unreasonable" standard, which is nevertheless miles from proving that it is actually true.

Q-Tip, I think you're talking about a few different things here, interchangeably, and none of them are things that I've asserted. I've not asserted that peer review proves anything. Not validity, let alone the study itself (I've actually rejected the latter outright). The peer-review process does, however, speak to the quality of a study submitted for publication. Calling this a "not unreasonable" standard is a pretty odd way of trying to nullify the intent of the process, which is to provide quality controls demonstrating some degree of verification (not necessarily infallible verification) of the methodology of the work submitted.

No one is arguing that peer-review attempts to prove or disprove a study; in fact, that's the entire point of my last post to you about methodology, so I'm not sure why you're reiterating this as though it speaks against my argument?

But even that low bar is a best-case scenario. A core (and, from the perspective of the public, unavoidable) problem is the reliability of that peer-review process itself. That would include not only the competency of the particular peers who approve publication, but also their biases.

How does one measure either the competency or bias of an unknown individual involved in the peer-review process?

Most of us probably remember reading about the pure gibberish papers that were submitted and peer-approved. The only way we found out about them was that the guy who did it fessed up.

Yes, that incident was well-reported. I'm not sure, however, how that incident entails that the peer-review process is of little value.

And in the case of bias, it is entirely possible for the particular peers who review a particular article to have the same biases as the authors, and the process is therefore of not much value in terms of ferreting out biases or other errors.

This, again, assumes that people are incapable of making rational judgements without falling victim to their own biases.

I don't see why this assumption makes much sense, particularly with respect to openly published scientific studies; it's as though you're assuming complete homogeneity of thought within specific fields.

Nor does it make sense when we consider that various scientists and academics, even if they agree with a paper's conclusions, will disagree with its methodology, and have an incentive to say so publicly to raise their own profile as well as that of their institutions and programs.

It seems as though you're using this concept of "bias" as a catch-all to disregard the scientific process as invalid in its totality, due to the mere potential for bias, wherever consensus exists within science but not within politics. Not to speak to your motive, which I think is sincere, but the reasoning doesn't seem sound; hence my confusion.

I say that because you mentioned the case of Hyung-in Moon; but that incident doesn't actually speak to your argument with respect to bias -- or even a real failure of the peer-review process itself -- but to the specific failures of specific editors at journals, who were reprimanded or in one case even fired for their lack of scrutiny and an over-reliance on the software used to submit and review manuscripts (which had a security exploit that both Moon and Chen used).

In fact, this case and Chen's case seem to speak directly against your argument, since both Moon and Chen were caught, both admitted to what they'd done, and the only reason they were caught was the open nature of the process.

If we go over the Moon situation a bit more closely, it doesn't really line up with what you're describing. Moon's publications were not politically motivated or ideologically controversial in any way whatsoever, and it would be unreasonable to think that reviewers or editors would be biased in favor of his journal submissions. There were no ideologically biased reviewers in Moon's case, or in the more widespread incident involving Peter Chen, which was similar but more complex. Instead, these individuals impersonated other academics, essentially stealing their identities, and posed as them to submit reviews using fake email addresses.

Moon and Chen were responsible for the vast majority of the reviews submitted, with their personal friends responsible for the remainder. This security flaw was in the ScholarOne software and web portal used for submission, and the lax security of the various journals allowed the studies to be published at first. But the self-policing nature of peer-review, as well as Moon's original journal editor, tipped enough people off to get both Moon's and Chen's papers retracted.

So I think your assertions here are incorrect and based on false assumptions about how peer-review actually works and what its intended purpose is; that's evidenced by your own examples of fraud, which were "ferreted out" by the people involved in peer-review itself. I also don't find any reason to believe that there is a massive underlying bias that prevents rational judgement and critical analysis of scientific studies -- at least, none that you've demonstrated here.

Hope this makes sense.

There is a massive gulf between the ability to prove or discover that a study is wrong, and proving that it is right.

Q-Tip, again, for the fifth time: no one is arguing that we should be proving that a study, its findings, or its conclusions are correct by evaluating its methodology. Please re-read the paragraph you're quoting; the first sentence says nothing about the conclusions or findings of studies being right or wrong:

"In the example given on the last page, we do not need to replicate the study in order to invalidate the methodology as being unsound." -gourimoko


This is how one evaluates the methodology of a scientific study. You don't necessarily try to falsify the study's findings or conclusions; instead, you look at the methodology used to derive those findings to test whether or not it is scientifically and logically sound, or whether there are experimental or sampling flaws that are obvious to you but were not obvious to, or controlled for by, the authors. This happens routinely, and it's why one goes through the process of peer-review in the first place.
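To use a made-up example: if a study surveys only college undergraduates but draws conclusions about the entire adult population, that's a sampling flaw you can spot right in the methods section -- no replication required, and no need to test the conclusions themselves.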

The former may be within the capability of a sufficiently informed, diligent layperson, if the error, omission, or mistake in the study is apparent on its face. The latter is not, and that's really the point I'm bringing out here -- how do we know that a given study is true or accurate?

We're not trying to establish truth by evaluating a study. Again, no one is asserting that we should be trying to determine whether or not a particular study's conclusions or findings represent the truth, but rather that the methodology used within the study was sound and thus the study itself is reasonably valid.

And again, the inability of a layperson to prove that a particular scientific conclusion is false obviously does not make it true. It is entirely possible that 1) the error requires a level of expertise that the layperson does not possess, or 2) the errors simply are not discernible on the face of the study. They may be buried somewhere in the data itself, or in how the data was gathered, and very often will also depend on the honesty of the scientist/expert, or assistants, in question. Which, again, we cannot determine.

I will reiterate: no one is asserting that we should be trying to determine whether or not a particular study's conclusions or findings represent the truth, but rather that the methodology used within the study was sound and thus the study itself is reasonably valid.

From my last post: "In the example given on the last page, we do not need to replicate the study in order to invalidate the methodology as being unsound." -gourimoko

I would refer you back to my earlier post, post #72 in this thread; the top 4 paragraphs of that post cover exactly this question in detail.

No, I'm not saying we're incapable of understanding -- I'm saying that in 99.99% of the cases, we're incapable of validating/verifying the conclusions,

No one is suggesting you should be doing this... This isn't how peer-review works, and laymen or the public aren't likely to be doing this either.

though we may be capable of refuting them - or at least pointing out flaws that put those conclusions into doubt.

Which... is what you should be trying to do. Pointing out flaws in the methodology to put the study's validity in doubt; not testing the specific conclusions themselves. That's the point! :chuckle:
 
I am talking not about bias in favor of a person, but rather bias in favor of certain conclusions/results.

I know what kinds of bias you're talking about.

In essence, an academic/scientific circle-jerk. And my point isn't that all such things are tainted by biases.

I'm glad you pointed this out, because that is exactly what your point sounds like, even though you're saying it isn't.

My point is that as outside laypeople, it is very difficult for us to know if that is going on.

Why is this difficult to know? I don't understand this part of your argument about laypeople, given that neither of us is talking about people who are unable to understand the study itself (which you state we aren't).

You agree we can look at flawed methodology (outside of conclusions, remember; we're not talking about running experiments ourselves); so should it not stand to reason that published studies would attract a great deal of scrutiny precisely because they are public works? Wouldn't people who disagree point out the flaws in these studies to support their own work?

Again, I don't understand your argument here.

Just as one kind of bias, I'd direct you to the article I linked upthread about the problem with medical studies.

K.. Will look into it.

The mere fact that some of what is termed "common-sense" may fit your argument does not mean that all of it does. And I really sort of detest definitional debates unless the definition itself is the debate (as it may be in the transgender context). So I'll simply clarify that in this context, when I refer to common-sense, I'm referring to logical conclusions drawn from personal observations/experiences.

So.. we're rejecting the term "common sense." Okay.

Relying on "logical conclusions" drawn from personal observations and personal experiences is anecdotal reasoning; and you also understand why/how anecdotal reasoning can be and is flawed for articulating concepts between two people? I don't have your personal observation or experience, and there's no way for you to communicate that experience to me as fact without trust, thus we're no longer operating within the scientific method even if we might be acting reasonably or even somewhat rationally.

It doesn't, unless you have a sound basis to trust my judgement on that particular subject.

Trust?

That would mean that conclusions based upon that relationship would depend on my trust of your judgement as a premise to any claim, which is an unscientific proposition by definition.

I trust Einstein's judgement in matters of physics;
Einstein thinks xyz;
Thus, xyz.


That's obviously not rational as an argument; the conclusion rests entirely on trust rather than on evidence.

But just because my personal experiences/observations may not be of value to you doesn't make them wrong or not useful to me.

No one is saying that you can't, as an individual, rely on your own personal observations. But to argue that what you see as an individual speaks to larger, general contexts, without some form of generally verifiable empirical evidence -- well, that kind of thinking is often wrong and very short-sighted. And yes, it is generally an appeal to common sense and anecdote which, as stated above, is irrational.

Depends on the nature of the conclusion, doesn't it?

To an extent.

If the conclusion is that "x" always leads to "y", or that "z" never happens, experiences that are consistent with those conclusions do not prove the validity of the rule itself. However, one example that contradicts that rule does, in fact, disprove it.

Sure.

Just as, if someone says that x is more likely for group a than for group b, and someone in group a argues that it never happened to them, or happens less often, while it is happening for people in group b, or happening more often -- their personal experience does not contradict the assertion. I think this example is closer to what we're discussing than one that assigns absolutes like "never" or "always."
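To put some hypothetical numbers on it: if x happens to 30% of group a and only 10% of group b, then a member of group a who has never experienced x is perfectly consistent with that claim -- 70% of group a hasn't experienced it either. A single anecdote can't contradict a probabilistic assertion; only better data can.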

Ugh... You seem to be arguing that everything we see and observe is fatally tainted by our own subjectivity, and hence not reliable.

That's not what I'm arguing.

I think that argument ends up consuming not only itself, but also scientific inquiry in general.

Individual opinions without any form of empirical verification are not reliable as scientific evidence to others... i.e., to science as a community of people invested in finding truth. That's the point.

Common-sense/personal experience alone may not be sufficient to evaluate every argument, but there are certainly going to be times they are capable of evaluating some. Just as an easy example, if some study would have as its conclusion that "there are no physical differences between males and females", my personal experience alone is enough for me to know that the conclusion is bullshit. If someone publishes a paper saying that it is impossible to ride a bike without using your hands, and I have actually done that, then I know that study is invalid.

I'm glad you mentioned these two examples, because I think they help to illustrate how we've gone off the rails here with respect to testing the scientific rigor and methodology of a study to determine its validity, as opposed to testing the degree to which it is a true representation of reality.

Let's begin by going through both examples and we'll come back to this point.

With respect to the first example, you don't need to rely on common sense or personal experience, since anyone can observe the differences themselves without disagreement. You can and would rely on verifiable empirical evidence and observation in this instance, since there's no need to do otherwise. Hence the unlikelihood of such a study ever being seriously considered: there is no basis upon which to form such an argument without wild assumptions being made in the first place. Pretty unreasonable, no?

With the example of riding a bike without using your hands: again, if you could show others that you've done this, then the study's conclusion would be proven false to the community. However, it's very important to note, and I cannot stress this enough as it speaks directly to this entire conversation -- your demonstration of riding a bike without hands does not mean that the study in question is invalid.

That's why it now seems that there is some confusion here with respect to what I'm talking about and what you're talking about.

I'm talking about the logical and scientific validity and soundness of studies, arguments, and conclusions -- literally, the evaluation of the methodology that goes into a study. You seem to be talking exclusively about whether or not a particular study makes true statements, and seemingly only by testing the conclusions of those studies against some observation, without evaluating the methodology.

You likely know this already, but the validity of an argument is based upon whether or not the conclusion is entailed by the premises. It has nothing to do with the truth of those statements.

The soundness of an argument tests not only the validity of the argument, but the truth of the premises required to draw the conclusion.

An argument can have a completely false conclusion and yet be perfectly valid. But an argument cannot be both valid and sound and yet also false, unless one of the underlying premises we think is true is in fact not.
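A quick textbook illustration of the difference:

All fish can fly;
A salmon is a fish;
Thus, a salmon can fly.

That argument is perfectly valid -- the conclusion follows from the premises -- but it is unsound, because the first premise is false. Notice that evaluating its validity and soundness never required us to go observe a single salmon; we only had to examine the structure and premises of the argument itself.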

This is why and how we evaluate arguments for validity and make qualitative assessments of the soundness of the methodology therein, to whatever extent is within our means. This is exactly what laymen should do, and what someone in the peer-review process would do.

I think this is where the breakdown in this conversation is, because we're talking about two completely different things, with my argument focusing on methodology and yours focusing on testing the truth of the conclusions.

If you want a real-world example, I recall reading studies that claimed there was no evidence that employers were cutting hours because of the ACA. Well, I know that's bullshit -- I was personally involved in some of those decisions and know for a fact that it did happen, and for exactly that reason.

Perfect example; see my remarks about the group a vs b analogy above.

In this case, did the studies claim they found no evidence, or that no such evidence existed anywhere in the United States?

If they claim they found no evidence, then your personal anecdotal experience does not invalidate the study, nor does it entail the study being unsound or false. The study might not be bullshit, but your understanding of their findings might be in error.

Well, I'm not actually talking about global warming, if that's what you mean. I am truly speaking far more generally than that.

I know, we're speaking in generalities, I get it.

My problem lies in your use of "we." It is not us doing that, nor you (singular). It is a "them," which goes right back to the question of why we should believe "them" in the first place. Heck, at the most basic level, how do we know they haven't falsified or otherwise tainted their data?

Which gets back to a question you asked me long ago: why should we trust the scientific process in general? I didn't think you were serious when you first asked, but it's obvious you are. I think that's a different question that requires a much different response in a separate post, considering I'm well above the 20,000 character limit.
 
How does one measure either the competency or bias of an unknown individual involved in the peer-review process?

You can't. That's the entire point of why the peer-review process is pretty much meaningless for laypeople. For the actual experts in that field, and for the publisher of the journal in question, peer-review -- if it is working correctly -- serves as a gate-keeping function as to what is worthy of even being considered in the first place. But that's about it.

It's as though you're using this concept of "bias" as a catch-all to disregard the scientific process as invalid in totality...

I'm not talking about the scientific process at all. Scientists look at papers as just a step in the scientific process, to be built on, tested, accepted/rejected/modified, etc.. I have zero issue with that.

It's lay people who start quoting scientific/academic papers in an effort to prove some point, and it is that non-scientific use of academic papers that I am saying should be disregarded. @David.

Just to give an example, I've seen lots of lay people engage in internet arguments, and cite some academic article for support, pointing out that it has been "peer-reviewed" as if that is probative of the validity of its conclusions. But as you said above, the peer-review process doesn't actually suffice as proof of anything.
 

The only solution is to be an expert in everything. Or to stay in one's lane.
 
Study on boredom:

https://link.springer.com/article/10.1007/s10648-011-9182-7

Much of it is low arousal, purposelessness, lack of mental stimulation, and lack of challenge/complexity; a consequence of repetition and a lack of motivation.

It hits those with ADHD and highly impulsive people.

Really trying hard to figure out how to beat this one. I'm not sure there's much you can do.
 
Travel...

That's what I did.
Not personally into travel... High openness to ideas, not so much to experiences.

Highly creative people apparently tend to be vulnerable to this: novelty seeking, getting bored quickly, commitment issues (not related to relationships). I imagine you have a similar temperament. Congrats on molding your environment to make it work for you. That may ultimately just be the answer.
 

100%...
 
Big 5 and relationships.

High openness plus low conscientiousness: you're novelty seeking, and you're gonna cheat. And get bored.

Agreeableness is the biggest predictor of a successful relationship.

High neuroticism has a strong inverse effect on relationships.

People tend to like people with similar Big 5 ratings. A person with low agreeableness is going to think someone with high agreeableness is a pushover and not worthy of respect, for example. High neuroticism will resent those with low neuroticism. A highly extroverted person and one low in extraversion will obviously not be a good match.
 
Anyone have any books/articles on the effects of population density?
 
