Stop me before I fake again
-
In order to detect fraud, one would have to ask and answer questions about every step of the process, from data collection to coding to case selection to analysis. The process should be described in detail and be "reproducible." The main challenge is that with original data collection it is hard to (re)produce similar conditions, but, in general, the findings should hold up under relatively similar circumstances. If they do, we have more confidence. If they do not, we ask more questions. Isn't that how it is supposed to work?
-
This is overlooking one of the strengths of peer review. Assuming this paper would be sent to 3 people in the same field, someone would likely know "oh hey, this guy was working with Big Neoliberal" and could then look into that organization or their own contacts to see about the data collection. Fake data is easy to make, but as long as we care enough to check people's sources, it isn't going to sink the discipline. Reviewers just need to have the gumption to call up the survey firm or ask the editors to request proof of original data collection from the authors.
In fact, that sounds like a good idea for all original data collection: either be ready to offer some proof of data origin or risk arbitrary rejection. Proof doesn't need to be raw data, just a letter from an advisor or department chair that says "yeah, this s**t is real."
-
"Check people's sources" is a good idea, but not practiced by any journal that I know of. Would NSF even check before funding you? If we want this done we can't leave it to peer reviewers; its gotta be a staff function. Or maybe the system is working. I don't know.
-
Is it crazy to think that an editor or reviewer could ask for proof of data origin, or at least ask whether anyone can vouch for it? I don't think we need staff to do something we should just be doing on our own (namely, being willing to show that we did the s**t we said we did).
-
Sherkat in the comments section:
These days, at the word “experiment” I assume it is a fake, particularly if the findings are interesting.
I think this is one of the problems with these high-profile fraud cases involving experiments. They could increase the skepticism of a certain generation of sociologists, already skeptical of experiments, toward new sources of data.
However, the issues raised by PNC do not apply only to experiments. If somebody wants to fake data, they can always do so with surveys or interviews they collected themselves.
Further, fraudulent behaviors can also affect the analysis of publicly available, nationally representative surveys like the GSS. Sherkat seems to think highly of these data sources, but dedicated survey researchers can also engage in p-hacking, selective inclusion of cases, the garden of forking paths, and so on. In fact, these practices are probably far more common than outright data fabrication; a rough simulation of the point follows below.
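To make the p-hacking point concrete, here is a minimal, hypothetical sketch (not from the original thread, and not tied to any real dataset): it draws pure noise, then tries many arbitrary "specifications" and counts how often at least one comes out "significant" at p < .05 by chance alone.

```python
import random
import statistics
import math

# Minimal sketch, under assumed parameters: both groups are pure noise,
# so any "significant" finding is a false positive produced by trying
# many specifications (subgroups, controls, outcome codings, etc.).

random.seed(42)

def two_sample_p_value(a, b):
    """Rough two-sided p-value for a difference in means (normal approximation)."""
    se = math.sqrt(statistics.variance(a) / len(a) + statistics.variance(b) / len(b))
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

n_respondents = 400      # hypothetical survey size
n_specifications = 20    # how many analytic choices the researcher tries
n_studies = 1000         # simulated "papers"

false_positive_studies = 0
for _ in range(n_studies):
    for _ in range(n_specifications):
        # No real effect exists: "treatment" and "control" are identical noise.
        treatment = [random.gauss(0, 1) for _ in range(n_respondents // 2)]
        control = [random.gauss(0, 1) for _ in range(n_respondents // 2)]
        if two_sample_p_value(treatment, control) < 0.05:
            false_positive_studies += 1
            break  # researcher stops at the first "publishable" result

print(f"Share of studies reporting a significant result from pure noise: "
      f"{false_positive_studies / n_studies:.2f}")
# With 20 tries at alpha = .05, expect roughly 1 - 0.95**20, i.e. about 0.64.
```

Nothing here requires fabricating a single data point, which is the sense in which these practices can be both more common and harder to detect than outright fraud.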
-
Expanding the ideological scope of disciplines, so that they don't cheerlead for overextended or fabricated studies that confirm the field's biases, is one way to solve this problem. The best way to get people to check more closely is to give them an incentive to do so.
-
Wait a second, I bet you think that's the solution to every problem.
Is the solution to every problem in science, from your view, adding a bureaucratic mandate? I could call that ideological as well, but it wouldn't help either of us adjudicate which mechanism promotes less fraud.
-
I think my mechanism is pretty plausible, considering you've dedicated the better part of the last three years to attacking the minutiae of Regnerus' study. It seems pretty straightforward that people will work much harder to disconfirm ideas they are ideologically opposed to than those they're already inclined to believe.