Posters
Welcome to the AHS 2020 ePoster Session.
P002: REPRODUCING STATISTICS PERFORMED IN RANDOMIZED CONTROLLED TRIALS INVOLVING HERNIAS
Naila H Dhanani, MD; Oscar A Olavarria, MD, MS; Karla Bernardi, MD, MS; Nicole B Lyons, BS; Cynthia Bell, MS, PhD; Julie L Holihan, MD, MS; Tien C Ko, MD; Mike K Liang, MD; McGovern Medical School at the University of Texas Health Science Center
Introduction: Clinicians rely on randomized controlled trials (RCTs) to guide decision-making because RCTs are the study design with the lowest potential risk of bias. Given their careful and balanced design, the statistical tests performed, such as the chi-square test or t-test, should be straightforward to reproduce. Issues such as statistical discordance, that is, reporting statistical results that cannot be reproduced, should therefore be uncommon. We hypothesized that fewer than 5% of hernia-related RCTs would report a discordant p-value that crosses the “p=0.05” threshold.
Methods: This was a pilot study. A total of 100 RCTs pertaining to hernias were identified in PubMed using the search terms “hernia” and “randomized controlled trial.” Studies were selected sequentially by date published, starting with the most recent. Studies were included if the primary outcome was categorical and could be reproduced using the data and statistical test reported in the manuscript. Discordance between the p-value obtained from our reproduction analysis and the published p-value was assessed. Our primary outcome was the number of studies that reported p-values that differed in statistical significance (crossed p-value=0.05) from the reproduction analysis. A binomial test was used to compare the observed rates of discordance against the hypothesized 5% rate.
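As an illustration of the reproduction step described above, the sketch below recomputes a chi-square p-value from a hypothetical 2x2 table for a categorical primary outcome and flags whether it crosses the p=0.05 threshold relative to the published value. The counts, the published p-value, and the use of scipy's chi2_contingency (with its default continuity correction for 2x2 tables) are illustrative assumptions, not data or methods taken from the study.

```python
# Minimal sketch (hypothetical data) of reproducing a published chi-square result
# and checking for discordance that crosses the p = 0.05 threshold.
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table for a categorical primary outcome (e.g., recurrence
# yes/no in each arm); these counts are not taken from any reviewed trial.
table = [[12, 88],   # arm A: events, non-events
         [24, 76]]   # arm B: events, non-events

published_p = 0.030  # hypothetical p-value as reported in a manuscript

# Note: scipy applies Yates' continuity correction to 2x2 tables by default,
# which may or may not match the original authors' analysis.
chi2, reproduced_p, dof, expected = chi2_contingency(table)

# Discordance of interest: published and reproduced p-values fall on opposite
# sides of the 0.05 significance threshold.
crosses_threshold = (published_p < 0.05) != (reproduced_p < 0.05)
print(f"reproduced p = {reproduced_p:.3f}; crosses 0.05 threshold: {crosses_threshold}")
```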
Results: Of the 100 trials analyzed, 14 reported p-values that differed from our reproduction analysis (p<0.001). Seven studies reported p-values that differed in statistical significance from the reproduction analysis, i.e., crossed the p=0.05 threshold (p=0.234). All 7 of these studies reported p-values <0.05, whereas the reproduction analysis yielded p>0.05.
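For context on the reported binomial comparisons, the sketch below tests the observed discordance counts (14/100 and 7/100) against the hypothesized 5% rate using scipy's exact binomial test. A one-sided alternative is assumed here because it produces p-values consistent with those reported; the abstract does not state the exact test settings, so this choice is an assumption.

```python
# Sketch of the binomial comparison of observed discordance rates against the
# hypothesized 5% rate. The one-sided alternative is an assumption; the abstract
# does not specify the test settings.
from scipy.stats import binomtest

n_trials = 100
hypothesized_rate = 0.05

any_discordance = binomtest(14, n=n_trials, p=hypothesized_rate, alternative='greater')
crossed_threshold = binomtest(7, n=n_trials, p=hypothesized_rate, alternative='greater')

print(f"14/100 discordant vs 5%:    p = {any_discordance.pvalue:.4f}")
print(f"7/100 crossed p=0.05 vs 5%: p = {crossed_threshold.pvalue:.4f}")
```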
Conclusion: Fourteen percent of the RCTs analyzed in this study reported p-values that our research team could not reproduce, and 7% reported “statistically significant” results that were not reproducible. Although all RCTs, even “negative” trials, provide important information and lessons learned, publication bias persists: studies with positive results are published more often and in higher-impact journals than those with neutral or negative findings. Future studies should seek to identify why some RCTs report discordant statistics and how to prevent this from happening.
