Say Goodbye to Clinical Trials That Don’t Teach

The idea that colchicine—a poison drawn from the seeds and corms of the lovely autumn crocus—could be used to prevent complications in patients undergoing heart surgery was an intoxicating one. Known since antiquity, colchicine was one of those ancient discoveries that found new life in a host of scientific and medical applications in the modern era. Some six decades ago, it was used in the study of human chromosomes: the chemical, which can stop mitosis in its tracks, made it easier to spy the dividing chromosomal strands in metaphase, where they could be viewed clearly under a light microscope. And colchicine's unusual anti-inflammatory properties, long exploited as a treatment for gout, eventually made it a first-line therapy for pericarditis.

At the turn of the most recent millennium, when researchers recruited patients for one large clinical trial to test whether colchicine could prevent post-pericardiotomy syndrome—a common complication following cardiac surgery—something marvelous was discovered: colchicine seemed to prevent post-operative atrial fibrillation (AF) as well. In a follow-up trial, the results of which were published in 2011, patients given the drug had nearly half the incidence of AF of those given a placebo.

But when the same lead investigators repeated the trial with 360 new heart surgery patients at 11 hospitals across Italy, testing the age-old poison against a placebo, "there were no significant differences between the colchicine and placebo groups" when it came to AF. Both studies were randomized and double-blinded. Both, naturally, dotted all their i's and crossed all their t's. And yet the two came to entirely different conclusions.

So began a cycle that is all too familiar in the clinical trials arena today: a round-robin of studies that never quite concludes and rarely produces a clear victor. In 2016, when researchers pooled the data from 10 randomized controlled trials in a meta-analysis (aggregating the results from 1,981 patients in all), they found: "Colchicine therapy was not associated with a significantly lower risk of post-operative AF." A year later, another meta-analysis—this one rounding up six randomized controlled trials with 1,257 patients—found the opposite: "Colchicine significantly reduced the odds" of post-operative AF.
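How can two meta-analyses of the same question point in opposite directions? Largely because they pool different sets of trials, and the pooled estimate is sensitive to which trials (and how many events) go in. For the statistically curious, here is a minimal sketch of the standard fixed-effect, inverse-variance pooling of odds ratios that such analyses typically build on. The trial counts below are hypothetical, invented purely for illustration; they are not figures from the colchicine studies.

```python
import math

def pooled_odds_ratio(trials):
    """Fixed-effect (inverse-variance) pooling of per-trial odds ratios.

    Each trial is a tuple: (events_treated, n_treated, events_control, n_control).
    Returns the pooled odds ratio with a 95% confidence interval.
    """
    weighted_sum, weight_total = 0.0, 0.0
    for a, n1, c, n2 in trials:
        b, d = n1 - a, n2 - c                 # non-events in each arm
        log_or = math.log((a * d) / (b * c))  # per-trial log odds ratio
        var = 1/a + 1/b + 1/c + 1/d           # Woolf's variance estimate
        weight = 1 / var                      # inverse-variance weight
        weighted_sum += weight * log_or
        weight_total += weight
    pooled = weighted_sum / weight_total
    se = math.sqrt(1 / weight_total)
    return (math.exp(pooled),                 # pooled odds ratio
            math.exp(pooled - 1.96 * se),     # lower 95% bound
            math.exp(pooled + 1.96 * se))     # upper 95% bound

# Hypothetical counts, for illustration only -- not data from the actual trials.
trials = [
    (12, 100, 22, 100),   # a trial where treatment roughly halves the event rate
    (30, 180, 32, 180),   # a near-null trial
]
print(pooled_odds_ratio(trials))
```

Add or drop a single trial from the list and the confidence interval can slide across 1.0, flipping the verdict from "significant benefit" to "no significant difference"—which is one plain-arithmetic reason two careful teams can publish opposite conclusions a year apart.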

You can find the same types of long-running disagreements across the therapeutic board in clinical trials, as I wrote years ago in an essay for the New York Times titled "Do Clinical Trials Work?" The striking example I focused on then was the cancer drug Avastin, developed by Genentech (now part of Roche), which at the time had been studied in at least 400 completed human clinical trials for various cancers. In spite of all that testing—at a financial cost of untold billions of dollars and an immeasurable cost in time for the patients who volunteered—no one could say with any certainty whether the drug would work in any given patient. As a Genentech spokesperson told me at the time: "Despite looking at hundreds of potential predictive biomarkers, we do not currently have a way to predict who is most likely to respond to Avastin and who is not." (FYI, there are now 767 completed interventional clinical trials studying Avastin. And we still don't know the answer.)