Interesting paper this month in JAMA about post-marketing studies of adverse drug effects. Randomized controlled trials are obviously the gold standard for detecting common treatment-related adverse events. The problem is that a rare adverse event is unlikely to be detected by a typical RCT, which is simply too small and too short to capture it. As a result, there has been a move recently towards conducting post-marketing studies of commonly used drugs to identify rare adverse effects. One such effect mentioned in the study is the association between bisphosphonate use and atypical femoral fractures.
The other commonly cited recent example is the association between PPI use and community-acquired pneumonia, which has been noted in multiple studies. The putative mechanism is a reduction in gastric acidity (a rise in gastric pH) that permits bacterial overgrowth. The problem is the question of residual confounding: is there an alternative reason that these patients have more pneumonia? Are these patients simply sicker overall? Are PCPs who prescribe PPIs more likely to diagnose pneumonia? Just because there is a plausible mechanism doesn't make it true.
One potential solution is to perform a falsification analysis. Once you have specified the primary outcome of the study (in this case pneumonia), for which there is a plausible mechanism, you then perform a series of prespecified analyses on other outcomes with no plausible link to the drug. If all of these outcomes also turn out to be associated with the use of PPIs, it suggests that the primary association is more likely due to residual confounding than a real effect.
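To see why confounding produces this pattern, here is a minimal simulation (my own sketch, not from the article): a hidden "sickness" variable drives both PPI prescription and every outcome, so the drug shows an elevated crude odds ratio for all of them even though it causes none. All numbers (prevalences, risks) are made up for illustration.

```python
import random

random.seed(0)
N = 20000

def odds_ratio(exposure, outcome):
    # Crude odds ratio from the 2x2 table of exposure vs outcome.
    a = sum(1 for e, o in zip(exposure, outcome) if e and o)
    b = sum(1 for e, o in zip(exposure, outcome) if e and not o)
    c = sum(1 for e, o in zip(exposure, outcome) if not e and o)
    d = sum(1 for e, o in zip(exposure, outcome) if not e and not o)
    return (a * d) / (b * c)

# Hidden confounder: overall sickness. Sicker patients are both more
# likely to be prescribed a PPI and more likely to have ANY outcome.
sick = [random.random() < 0.3 for _ in range(N)]
ppi = [random.random() < (0.6 if s else 0.2) for s in sick]

# Outcomes with NO true drug effect - risk depends only on sickness.
outcomes = {}
for name in ["pneumonia", "UTI", "skin infection", "DVT"]:
    outcomes[name] = [random.random() < (0.10 if s else 0.03) for s in sick]

for name, out in outcomes.items():
    print(f"{name}: crude OR = {odds_ratio(ppi, out):.2f}")
```

Every outcome, plausible or not, shows a crude OR well above 1, which is exactly the across-the-board pattern a falsification analysis is designed to flag.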
In the study referred to in the JAMA article, the authors, working from registry data, found an association between PPI use and not only pneumonia but also osteoarthritis, urinary tract infections, rheumatoid arthritis, chest pain, DVTs and skin infections. Thus, they suggested that the association with pneumonia was more likely to be confounded, given the lack of a plausible relationship with these other adverse events. One criticism I would have is that I can think of perfectly reasonable hypotheses for why PPI use could be associated with OA and RA (use of NSAIDs) and chest pain (GERD). Another important point is that if this is not done properly, with the adverse events prespecified, testing enough outcomes will eventually turn up an association between the drug and some adverse event by chance alone, and that spurious finding could be used to wrongly refute a real association between a drug and a problem.
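The multiple-testing point is just arithmetic. A quick sketch of the familiar calculation: with each null outcome tested at alpha = 0.05, the chance of at least one spurious "significant" association grows rapidly with the number of independent outcomes tested.

```python
# Chance of at least one false positive among k independent null
# outcomes, each tested at significance level alpha.
alpha = 0.05

def p_any_false_positive(k, alpha=0.05):
    return 1 - (1 - alpha) ** k

for k in (1, 5, 10, 20):
    print(f"{k:2d} outcomes tested: P(>=1 false positive) = "
          f"{p_any_false_positive(k, alpha):.2f}")
# With 20 outcomes, the chance of at least one spurious hit is about 64%.
```

This is why the falsification outcomes must be prespecified and few: otherwise you can always fish up an implausible "association" and use it to dismiss a genuine safety signal.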
Still, the whole article is a fascinating insight into the problems with post-marketing studies of drugs in the wider population.