There is a really interesting piece in the New Yorker (Dec 13, 2010). Worth tracking down a copy at your neighbor's or dentist's office, since the free online version is limited to the abstract. In "The Truth Wears Off," Jonah Lehrer describes how initial studies often report large benefits from treatments, or large associations between a disease and a specific risk factor, that then can't be validated in subsequent studies.
There are many potential reasons for this 'decline effect,' including publication bias - only positive findings get published, especially in major, high-impact journals. Dan had a nice post discussing positive outcome bias a couple of weeks ago. Another issue may be selective reporting by investigators desperate to find strong associations so they can get published and then re-funded. Regression to the mean is certainly important - ye olde bell-shaped curve. One thing the piece doesn't mention is confirmation bias, which I think drives both NIH funding and publication decisions and could account for some of the smaller effect sizes seen in later vs. earlier publications.
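You can see how publication bias and regression to the mean conspire to produce a decline effect with a toy simulation. The sketch below (all numbers are made up for illustration) assumes a modest true effect, noisy studies, and a journal that only accepts initial results that clear a significance cutoff: the published initial estimates come out inflated, while unbiased replications of those same findings regress back toward the true effect.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.2      # the (assumed) real effect size
SE = 0.15              # standard error of each study's estimate
CUTOFF = 1.96 * SE     # roughly a two-sided p < 0.05 threshold

published_initial = []
replications = []
for _ in range(10_000):
    # each study's estimate = truth + noise
    initial = random.gauss(TRUE_EFFECT, SE)
    if initial > CUTOFF:  # publication bias: only "significant" positives
        published_initial.append(initial)
        # an unbiased replication of the same true effect
        replications.append(random.gauss(TRUE_EFFECT, SE))

print(f"true effect:            {TRUE_EFFECT:.2f}")
print(f"mean published initial: {statistics.mean(published_initial):.2f}")
print(f"mean replication:       {statistics.mean(replications):.2f}")
```

The published initial estimates average well above the true effect (only the lucky, noisy-high studies cleared the bar), while the replications cluster around the truth - the apparent "decline" between first report and follow-up arises with no misconduct at all, purely from the filter on what gets published.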
This 'decline effect' is troubling given what it says about the scientific process. One wonders whether changes in how science is funded and reported could lessen it.