The concept of statistical significance, a line drawn at less than 5% probability that the effects of a new approach could be due to chance alone rather than the intervention itself, implies that there's a clear point of demarcation where we consider a result positive. In truth, however, science and medicine are far messier than that. We see new tests and new treatments adopted in a pattern more reminiscent of deciding whether to upgrade your mobile phone or television: there are bleeding-edge people eager to pursue the latest approach at the first hint of potential value (called "innovators" and "early adopters" in technology; in medicine, they may be considered "cowboys"), a much larger pool of people who need more evidence and comfort before accepting something as a new standard of care, and a minority of "laggards" (the people for whom a phone upgrade today means deciding whether to finally replace their rotary phone).
In truth, there are several variables that affect how eagerly or reluctantly members of the health care community adopt a new test or treatment. Here are the top 8 factors as I see them:
1) Doctors fall along a spectrum of how much evidence they need to change their practice. Some always seem ready to change treatment practice based on the results of the latest presentation of a few dozen patients they heard about at ASCO, some are moved primarily when national guidelines change, while others seem to want 3 different phase III trials to come out positive before they're willing to change what they've been doing for years. But even if individual doctors have a greater or lesser predisposition to be purists or very liberal about evidence, you see these variables in play in their own behavior from case to case.
2) The level of "unmet need" is an important factor. Some current evidence-based standards are so unloved that many docs need very little pretext to try a less well established approach. On the other hand, treatments that are considered "good enough" are very hard to displace, even when there is evidence that something new may be a little better. In practice, we see the desperation of acquired resistance in EGFR mutation-positive NSCLC lead to afatinib being used despite pretty significant evidence showing it's really just ineffective in this setting, while I think the fact that docs are happy with the median progression-free survival (PFS) of 10-12 months with various EGFR inhibitors for first line EGFR mutation-positive NSCLC makes it hard for the combination of Avastin (bevacizumab) with Tarceva (erlotinib) to drive change in practice, even with a 6 month improvement in PFS.
3) Magnitude of benefit affects our judgment of how much evidence is needed. Seeing nearly all patients with a ROS-1 rearrangement respond dramatically to XALKORI (crizotinib) led most oncologists to be convinced that it's worth trying to get XALKORI for any patient with a ROS-1 rearrangement, even if the data set includes fewer than 20 patients. Other treatments that improve survival by at most 6-8 weeks and have a response rate barely higher than the standard treatment may leave us struggling to decide whether that is a benefit worth declaring a new standard, even if the results were statistically significant in a trial of 1000 patients. This issue is essentially the distinction between "statistical significance" and "clinical significance".
4) Cost can affect decisions about adoption of a new treatment in many ways. Doctors have the unenviable conflicting roles of both trying to advocate for their patients and trying to be responsible stewards of societal resources. We tend to see the most aggressively priced treatments face more resistance to adoption as not clearly being "worth it". And like nearly all humans on the planet, doctors tend to favor an approach that is superior (or even just defensible) and provides a greater financial incentive.
5) Toxicity/side effects require a judgment of the balance of these negative effects against the benefit in efficacy. This is largely in the eye of the beholder, and frankly it may be an issue of whether the challenging side effects are "worth it" for the patient as well as for the doctor who would be required to take on the burden of managing them. Toxicity also gets used as a soft, subjective measure to justify resistance that is actually based on factors like the cost of a treatment, when people don't want to openly acknowledge that cost is a factor.
6) Treatments are subject to trendiness. Immunotherapy in cancer was greeted with incredible skepticism for decades, and oncologists tended to really minimize the strategy. Then, two years ago, immunotherapy became white hot, and all of a sudden oncologists (and the media, and patient communities) began treating immunotherapy as if it's going to cure everyone of every cancer forever. Right now, many oncologists have a very low threshold to recommend immunotherapy because it's the hot new thing, while boring old chemotherapy is last year's model. I believe part of the appeal of trendiness is greater "hope" for the approaches seen as more novel.
7) Is everyone else also doing it? Peer pressure didn't end in high school. Doctors often don't want to be way out of line with the prevailing standards, so along with weighing the evidence for ourselves, we're prone to taking comfort in following the current best practices, which may mean waiting until new ideas catch on with the masses before pursuing them. There are far more followers than true leaders in this world.
8) Patient motivation, education, and "sympathy factors" are also things I see affect clinical decisions all the time. Sometimes a patient actively campaigns for a treatment, which may well lead to them getting it, whether it's the right treatment or not. This may be surgery for someone with metastatic disease, or it may be fourth line chemo for advanced lung cancer. It's certainly a factor when doctors are paid for delivering more therapy and doing more interventions, but it also stems from wanting to do more for a patient rather than acknowledge that there isn't evidence to support that treatment, or often any other approach. Patients who are well educated about the risks and benefits of an intervention may make a compelling case that their consent is informed enough to justify treating somewhat beyond the current standards.
And like other people, many doctors are particularly sympathetic toward a young cancer patient (we may introduce a patient as a 52-year-old mother with two small kids at home if we're trying to plead a case for more aggressive treatment), for whom there may be a willingness to abandon all pretense of evidence or even biological principles in the name of doing anything we can think of. I don't mean to imply that this is a good thing or a bad thing, but the reality is that different standards of evidence are applied for different patients.
In the end, when a doctor is deciding to adopt a new approach, it’s not that different from trying to decide whether it’s worth upgrading your phone — with apologies if that seems humbling. How convincing is it that the incremental value of changing is enough to justify overcoming the weight of inertia/current standards? To lead to change, something needs to be better, and not just very minimally better, but better enough to motivate change. That can be a combination of actual evidence of improvement and the hope and promise (perhaps even hype) of a new trend, and it may be offset/challenged by the cost, just as if you were weighing the factors of whether to buy the latest iPhone model.
I think there needs to be better study of the factors that shape how doctors, and perhaps especially oncologists, approach decisions on treatment recommendations. It's a complex question with many factors, as you can see, with no cut and dried rules for how something new penetrates and develops "market share" of the treatment approach for a given cancer care setting.