Within months of starting his undergraduate studies at Stanford University, 18-year-old Theo Baker was already on the trail of a story that would make him the youngest George Polk Award winner in the history of American journalism.
> quite good at spotting simple techniques like omitting data or p-hacking
I don’t know about that. Spotting omitted data would only work if a key experiment is missing or if a reviewer suggests a control experiment that was actually done but not shown, or what do you mean?
And how would you spot p-hacking? That would only work if you could see all the underlying raw data. Otherwise, especially in high-impact journals, the p-values are always excellent when they need to be.
Not to mention these peer review processes rely on unpaid labor from professionals who are heavily incentivized to use their time for basically anything else. They skim.
The replication crisis does not at all exclude highly regarded journals, unfortunately.
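A quick way to see why suspiciously good p-values are so easy to manufacture: run enough comparisons on pure noise and some will clear p < 0.05 by chance alone. The sketch below (plain Python; the sample sizes, seed, and number of "experiments" are all made up for illustration) uses a permutation test so it needs no stats library:

```python
import random

def permutation_p_value(a, b, n_perm=1000, rng=None):
    """Two-sided permutation test for a difference in means:
    how often does shuffled (label-free) data show a gap at
    least as large as the one we observed?"""
    rng = rng or random.Random()
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        x, y = pooled[:len(a)], pooled[len(a):]
        if abs(sum(x) / len(x) - sum(y) / len(y)) >= observed:
            hits += 1
    return hits / n_perm

rng = random.Random(42)
n_tests = 100
significant = 0
for _ in range(n_tests):
    # both groups are drawn from the SAME distribution: no real effect
    a = [rng.gauss(0, 1) for _ in range(20)]
    b = [rng.gauss(0, 1) for _ in range(20)]
    if permutation_p_value(a, b, rng=rng) < 0.05:
        significant += 1

print(f"{significant}/{n_tests} 'significant' results from pure noise")
```

Roughly 5% of these null experiments come out "significant" by construction; a p-hacker only needs to report those and quietly drop the rest, which is exactly what you cannot detect without the raw data.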
> That would only work if you’d be able to see all underlying raw data.
A paper without the underlying raw data is like a bicycle without wheels: you know it might’ve been useful at some point, but it isn’t anymore.
Very few papers publish both the raw data and the analysis tools used on it, so that everyone can verify their results.
The rest are no different from a 4th grader writing down an answer and then, when the teacher asks them to “show your work”, coming back with “no, trust me, my peers agree I’m right, do your own work”.
It’s extra sad when you contact a researcher directly for the data and get any of “it got lost in the last lab move”, “I’ll only give it to you if you show me how you’re going to process it first”, or some clearly spotty data backfilled from the paper’s conclusions.
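For contrast, "showing your work" can be as small as shipping the raw data file next to the exact script that produces the headline number, so anyone can recompute it. A minimal sketch (the data, filenames, and reported value here are all hypothetical):

```python
import csv
import io
import statistics

# stand-in for a published raw-data file (would normally live in
# the paper's data repository as e.g. measurements.csv)
raw_csv = """subject,measurement
1,4.8
2,5.1
3,5.3
4,4.9
"""

reported_mean = 5.025  # the value claimed in the (hypothetical) paper

# rerun the published analysis from the raw data
rows = csv.DictReader(io.StringIO(raw_csv))
values = [float(r["measurement"]) for r in rows]
recomputed = statistics.mean(values)

# verification anyone can run; no trust required
assert abs(recomputed - reported_mean) < 1e-9, "does not replicate!"
print(f"reported {reported_mean}, recomputed {recomputed}: matches")
```

The point isn't the arithmetic, it's that the check is mechanical: with data plus code, "trust me" becomes "run it yourself".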