Abstract
In studies of diagnostic test accuracy, authors sometimes report results only for a range of cutoff points around data-driven "optimal" cutoffs. We assessed selective cutoff reporting in studies of the diagnostic accuracy of the Patient Health Questionnaire-9 (PHQ-9) depression screening tool. We compared conventional meta-analysis of published results only with individual-patient-data meta-analysis of results derived from all cutoff points, using data from 13 of the 16 studies published during 2004–2009 that were included in a published conventional meta-analysis. For the "standard" PHQ-9 cutoff of 10, 11 of the studies had published accuracy results; for all other relevant cutoffs, only 3–6 studies had. For every cutoff examined, specificity estimates from the conventional and individual-patient-data meta-analyses were within 1% of each other. Sensitivity estimates were similar at the cutoff of 10 but differed by 5%–15% at other cutoffs. In samples where the PHQ-9 was poorly sensitive at the standard cutoff, authors tended to report results for lower cutoffs that yielded optimal results; when the PHQ-9 was highly sensitive, authors more often reported results for higher cutoffs. Consequently, in the conventional meta-analysis, sensitivity increased as cutoff severity increased across part of the cutoff range, an impossibility if all data are analyzed. In sum, selective reporting by primary study authors of only those cutoffs that perform well in their own study can bias accuracy estimates in meta-analyses of published results.
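The mechanism described above can be illustrated with a small simulation. The sketch below is not the authors' analysis and uses entirely made-up numbers: each simulated study computes sensitivity at every hypothetical cutoff, but "publishes" only its best-performing (data-driven optimal) cutoff. Pooling only the published values tends to flatten, or even reverse, the decline in sensitivity across increasing cutoffs that appears when all cutoff data are pooled. The cutoff range, study count, and the 0.85 "looks good enough to publish" rule are illustrative assumptions.

```python
# Illustrative simulation of selective cutoff reporting bias.
# All parameters below are hypothetical, not taken from the PHQ-9 studies.
import random

random.seed(1)

CUTOFFS = list(range(8, 13))   # hypothetical cutoffs under consideration
N_STUDIES = 12                 # hypothetical number of primary studies
CASES_PER_STUDY = 60           # hypothetical depressed participants per study


def simulate_study():
    """Return sensitivity at each cutoff for one simulated study.

    Between-study variation in the latent symptom-score distribution makes
    the data-driven 'optimal' cutoff differ from study to study.
    """
    shift = random.gauss(0, 2)  # study-level shift in symptom severity
    scores = [random.gauss(12 + shift, 4) for _ in range(CASES_PER_STUDY)]
    return {c: sum(s >= c for s in scores) / CASES_PER_STUDY for c in CUTOFFS}


def published_cutoff(study):
    """Illustrative selective-reporting rule: publish the highest cutoff whose
    sensitivity still looks 'good' (>= 0.85); otherwise fall back to the
    lowest cutoff, mimicking authors who report a lower 'optimal' cutoff
    when the test is poorly sensitive at the standard one."""
    good = [c for c in CUTOFFS if study[c] >= 0.85]
    return max(good) if good else min(CUTOFFS)


studies = [simulate_study() for _ in range(N_STUDIES)]

# Individual-patient-data-style pooling: every study contributes at every cutoff.
full = {c: sum(st[c] for st in studies) / N_STUDIES for c in CUTOFFS}

# Conventional pooling of published results only: each study contributes
# just the cutoff it chose to report.
selective = {c: [] for c in CUTOFFS}
for st in studies:
    c = published_cutoff(st)
    selective[c].append(st[c])

for c in CUTOFFS:
    pub = selective[c]
    pooled_pub = sum(pub) / len(pub) if pub else float("nan")
    print(f"cutoff {c}: all-data sensitivity {full[c]:.2f}, "
          f"published-only sensitivity {pooled_pub:.2f} (k={len(pub)})")
```

With all data pooled, sensitivity falls steadily as the cutoff rises; pooling only the selectively published values keeps the estimates near the "good" threshold at every cutoff, reproducing the apparent rise in sensitivity with cutoff severity that the abstract flags as impossible under complete reporting.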