Why RCTs? Recent study on stents is one example

This past week, we’ve been avidly watching reactions to a new study, published in The Lancet, about the efficacy of using stents to help patients with chest pain. The New York Times ran an article on the study; so did The Atlantic.

If you haven’t been following this (potential) bombshell of an RCT, the study found no value in using stents to combat heart pain. Why is this such big news? Partially because using stents for cardiac pain is big business. According to the study’s authors, more than 500,000 patients receive the procedure annually for chest discomfort.

It’s also big news because it goes against intuition, even the sort that medical laypeople possess. Without evidence to the contrary, it might seem logical that opening blocked arteries with a stent would reduce chest pain. No wonder doctors adopted the practice with vigor! Now there are data that don’t back up that perception. Even in medicine, a field long conditioned to accepting the validity of empirical research, studies will bump up against the fallacy of conventional wisdom.

That fact doesn’t surprise us at the A2J Lab. What did grab our attention is that the authors received permission to run the study at all. As we mentioned in a recent post, all RCTs in the U.S. must receive institutional approval before human subjects can enroll. Based on our experience, it would be fairly startling if a study like this one, which flies so baldly in the face of “conventional wisdom,” were to receive approval in the United States. An ethical review committee could have objected that the evaluation would deprive some participants of a “benefit,” namely the treatment they “need.” The more deeply held the belief, the harder it is to accept, or even allow, the introduction of contrary evidence. That’s why we need to test interventions rigorously, particularly when resources are scarce and lives are at stake.

One final note on the study’s design. Some medical researchers have critiqued the study as vulnerable to “Type II error”: in short, they contend that the sample size (here, about 200 patients) is too small to rule out a false negative, i.e., a real effect the study simply failed to detect. Ensuring a sufficient sample size is an important component of any RCT. The Lab, for example, uses power analysis to maximize the chance that a study will have enough observations to detect an effect, should that effect really exist. But sample size isn’t the only factor that determines a study’s validity; it also matters how generalizable the results are, regardless of their statistical significance.
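To make the sample size point concrete, here is a back-of-the-envelope power calculation. This is a simplified normal-approximation sketch, not the Lab’s actual methodology or the stent study’s design: it estimates how many participants per arm a two-group RCT needs to detect a given standardized effect size (Cohen’s d) at the conventional 5% significance level with 80% power.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate participants needed per arm to detect a standardized
    effect of `effect_size` (Cohen's d) in a two-group comparison,
    using the standard normal approximation."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for a two-sided test
    z_beta = z(power)            # quantile corresponding to desired power
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A medium effect (d = 0.5) needs roughly 63 participants per arm...
print(n_per_group(0.5))   # → 63
# ...but a small effect (d = 0.2) needs roughly 393 per arm.
print(n_per_group(0.2))   # → 393
```

The smaller the true effect, the larger the trial must be to distinguish it from chance. Under these (illustrative) assumptions, a trial with about 100 participants per arm can rule out medium-to-large effects but not necessarily small ones, which is exactly the substance of the Type II critique above.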

This is just one more example of why RCTs are important. Have you seen others recently? Share them with us in the comments or on social media.
