“Seldom right, never in doubt”

Dr. Weeks’ Comment:    A.O. Smith, my 9th-grade English teacher who opened for me the magnificent doors to Shakespeare* (by forcing us to memorize copious amounts of the Bard, hence learning him “by heart”), would use a phrase to goad us toward excellence. He would refer to someone as “seldom right, but never in doubt.” No one sought that damning label in class, yet in college I was exposed to the description of the opposite characteristic: “paralysis of analysis.” Somewhere between these two caricatures of the thinker and the man of action lies an optimum stance. Yet in the application of science to clinical medicine (see article below), alas, Prof. A.O. Smith would have a lot to ridicule.

Today, the patient wants to be treated only after full and careful consideration of the “cloud of relevant data which surrounds him” (thanks for that very helpful concept, Dr. Lee Hood, visionary founder of P4 Medicine; see www.p4mi.org).

Therefore, today’s doctor must welcome personalized data from new, unfamiliar, and even uncomfortable sources: 23andMe for a road map of gene SNPs, Spectracell for a biochemical inventory, RCGG for chemosensitivity testing – the list of helpful resources goes on.

We are smarter now as doctors, and the time is past for treating patients statistically when we can treat them as individual people!

(*PS – On another but fabulous note, don’t be the last to appreciate the great story and eternal lessons behind Edward de Vere, Earl of Oxford, actually having written under the name of Shakespeare.)

Most Large Treatment Effects of Medical Interventions Come from Small Studies, Report Finds

“… Clinicians should be humble about the ability to prevent or treat most health problems, although a large range of effective interventions are available, some with large effects, most have modest (albeit important) effects and the effects of many are uncertain.

… Acknowledging uncertainty is the first, essential step to reducing important uncertainties through well-designed evaluations.

… Collaboration is essential to reduce those uncertainties by identifying and agreeing to priorities for evaluation and making sure not to continue to waste scarce resources for research on unimportant questions or poorly designed evaluations.

… Clinicians need to collaborate on needed evaluations and synthesizing and making the results of evaluations readily available to inform decisions about how best to improve the health of individuals and populations.”


ScienceDaily (Oct. 23, 2012) – An examination of the characteristics of studies that yield large treatment effects from medical interventions found that such studies were more likely to be small, often with limited evidence, and that when additional trials were performed, the effect sizes typically became much smaller, according to a study in the October 24/31 issue of JAMA.

“Most effective interventions in health care confer modest, incremental benefits,” according to background information in the article. “Large effects are important to document reliably because in a relative scale they represent potentially the cases in which interventions can have the most impressive effect on health outcomes and because they are more likely to be adopted rapidly and with less evidence. Consequently, it is important to know whether, when observed, very large effects are reliable and in what sort of experimental outcomes they are commonly observed. … Some large treatment effects may represent entirely spurious observations. It is unknown how often studies with seemingly very large effects are repeated.”

Tiago V. Pereira, Ph.D., of the Health Technology Assessment Unit, German Hospital Oswaldo Cruz, Sao Paulo, Brazil, and colleagues conducted a study to evaluate the frequency and features of very large treatment effects of medical interventions that are first recorded in a clinical trial. For the study, the researchers used data from the Cochrane Database of Systematic Reviews (CDSR) and assessed the types of treatments and outcomes in trials with very large effects, examined how often large-effect trials were followed up by other trials on the same topic, and how these effects compared against the effects of the respective meta-analyses.

Among 3,545 available reviews, 3,082 contributed usable information on 85,002 forest plots (a graphical display designed to illustrate the relative strength of treatment effects in multiple studies). Overall, 8,239 forest plots (9.7 percent) had a nominally statistically significant very large effect in the first published trial, group A; 5,158 (6.1 percent) had a nominally statistically significant very large effect found only after the first published trial, group B; and 71,605 (84.2 percent) had no trials with significant very large effects, group C. The researchers found that nominally significant very large effects arose mostly from small trials with few events. For the index trials, the median [midpoint] number of events was only 18 in group A and 15 in group B. The median number of events in the group C index trials was 14.

The authors also observed that 90 percent and 98 percent of the very large effects observed in first and subsequently published trials, respectively, became smaller in meta-analyses that included other trials; the median odds ratio decreased from approximately 12 to 4 for first trials, and from 10 to 2.5 for subsequent trials.
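To make the shrinkage described above concrete, here is a minimal sketch of how an odds ratio is computed from a 2x2 trial table, and how a small trial with few events can yield an extreme odds ratio that looks far more modest once much more data is pooled. The event counts below are entirely made up for illustration; they are not drawn from the JAMA study.

```python
# Illustrative only: odds ratio from a 2x2 table (treatment vs. control),
# showing how few events can produce an extreme estimate.

def odds_ratio(events_a, n_a, events_b, n_b):
    """Odds ratio of an outcome in group A (treatment) vs. group B (control)."""
    odds_a = events_a / (n_a - events_a)
    odds_b = events_b / (n_b - events_b)
    return odds_a / odds_b

# A hypothetical small index trial, with event counts in the same ballpark
# as the medians the study reports (around 15-18 events):
small_trial_or = odds_ratio(events_a=12, n_a=20, events_b=3, n_b=20)

# A hypothetical much larger body of evidence on the same question:
pooled_or = odds_ratio(events_a=300, n_a=1000, events_b=150, n_b=1000)

print(round(small_trial_or, 1))  # extreme effect estimate from few events
print(round(pooled_or, 1))       # far more modest estimate from many events
```

With these invented numbers the small trial suggests an odds ratio of 8.5, while the larger dataset suggests about 2.4 – the same qualitative pattern the authors report (medians dropping from roughly 12 to 4 as meta-analyses accumulate trials).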

Topics with very large effects were less likely than other topics to address mortality. Across the whole CDSR, there was only 1 intervention with large beneficial effects on mortality and no major concerns about the quality of the evidence (for a trial on extracorporeal oxygenation for severe respiratory failure in newborns).

“… this empirical evaluation suggests that very large effect estimates are encountered commonly in single trials. Conversely, genuine very large effects with extensive support from substantial evidence appear to be rare in medicine and large benefits for mortality are almost entirely nonexistent. As additional evidence accumulates, caution may still be needed, especially if there is repetitive testing of accumulating trials. Patients, clinicians, investigators, regulators, and the industry should consider this in evaluating very large treatment effects when the evidence is still early and weak,” the researchers write.

Editorial: Improving the Health of Patients and Populations Requires Humility, Uncertainty, and Collaboration

Andrew D. Oxman, M.D., of the Norwegian Knowledge Centre for the Health Services, Oslo, Norway, comments on the findings of this study in an accompanying editorial.

“Clinicians should be humble about the ability to prevent or treat most health problems, although a large range of effective interventions are available, some with large effects, most have modest (albeit important) effects and the effects of many are uncertain. Acknowledging uncertainty is the first, essential step to reducing important uncertainties through well-designed evaluations. Collaboration is essential to reduce those uncertainties by identifying and agreeing to priorities for evaluation and making sure not to continue to waste scarce resources for research on unimportant questions or poorly designed evaluations. Clinicians need to collaborate on needed evaluations and synthesizing and making the results of evaluations readily available to inform decisions about how best to improve the health of individuals and populations.”


Journal References:

  1. Pereira TV, Horwitz RI, Ioannidis JA. Empirical Evaluation of Very Large Treatment Effects of Medical Interventions. JAMA. 2012;308(16):1676-1684. doi:10.1001/jama.2012.13444
  2. Oxman AD. Improving the Health of Patients and Populations Requires Humility, Uncertainty, and Collaboration. JAMA. 2012;308(16):1691-1692. doi:10.1001/jama.2012.14477
