For severe or refractory rejection, thymoglobulin/ATG is commonly employed in combination with steroids and, when indicated, B-cell-targeted therapies. However, alemtuzumab could be more effective than thymoglobulin since it depletes not only T cells but also memory B cells, NK cells and some dendritic cells (see figure demonstrating CD52 expression on different immune cells; Rao et al. PLOS One 2012).
Despite the absence of randomized trials, one small study reported the use of alemtuzumab for steroid-resistant cellular rejection (van den Hoogen et al. AJT 2013). The authors compared adult patients with steroid-resistant renal allograft rejection who were treated with either alemtuzumab (15-30 mg s.c. on 2 consecutive days; n = 11) or rabbit ATG (2.5-4.0 mg/kg bodyweight i.v. for 10-14 days; n = 20). Treatment failure occurred in 27% of alemtuzumab-treated patients versus 40% of the ATG group, a difference that was not statistically significant (p = 0.70). The incidence of infection was similar, while ATG caused more significant infusion-related reactions.
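As a back-of-the-envelope illustration of why a difference of that size is not significant in arms this small, here is a minimal sketch of the comparison in Python; the counts (roughly 3/11 vs 8/20) are back-calculated from the reported percentages and are my assumption, not the paper's raw data.

```python
# Minimal sketch: comparing treatment-failure proportions in two small arms.
# Counts are back-calculated from the reported percentages (27% of 11 ~ 3,
# 40% of 20 = 8) and are an assumption, not the trial's published 2x2 table.
from scipy.stats import fisher_exact

failures_alem, n_alem = 3, 11
failures_atg, n_atg = 8, 20

table = [
    [failures_alem, n_alem - failures_alem],  # alemtuzumab: failure / no failure
    [failures_atg, n_atg - failures_atg],     # rabbit ATG: failure / no failure
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio ~ {odds_ratio:.2f}, two-sided p ~ {p_value:.2f}")
# With arms this small, a ~13-percentage-point difference falls well short of significance.
```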
A few other reports of success in treating severe/refractory rejection in kidney transplants (case reports 1 and 2) and lung transplants (n = 22; Reams et al. AJT 2007) are referenced here.
One of the advantages of alemtuzumab is that it can be given subcutaneously in only one or two doses, and it is cheaper than ATG. We have had a few patients with plasma-cell-rich or refractory cellular rejections in whom we stained for CD52; it was diffusely positive, and we decided to administer alemtuzumab. Given the heterogeneity of this group, it is difficult to conclude at this time how effective it was, though a few patients had excellent responses.
In sum, alemtuzumab should be considered as an alternative therapy for severe/refractory rejection. Whether staining for CD52 on biopsies would help guide the choice of alemtuzumab over thymoglobulin requires further investigation.
Tuesday, September 29, 2015
Infections and intravenous iron
An interesting study appeared in my email this morning examining the association between the use of iv iron and outcomes in hemodialysis patients with systemic infections. Current guidelines from all of the major societies suggest that if patients are admitted with bacterial infections in particular, iv iron should be stopped. There are both basic and clinical data supporting this recommendation although it should be said that the evidence is weaker than one would imagine.
The biologic plausibility argument rests on the fact that high iron concentrations have two negative effects in the context of infection. First, iron appears to impair the function of both neutrophils and T-cells. Patients on dialysis with iron overload have been shown to have reduced neutrophil function and phagocytosis. Similarly, impaired PMN function has been seen when neutrophils from healthy controls are incubated with ferric compounds. However, it should be pointed out that iron is vital for normal neutrophil function and individuals with iron deficiency also have impaired function. Clearly there is a sweet spot for neutrophil function but we are uncertain what it might be.
The clinical evidence that iron infusions increase infection frequency and severity is based on two separate strands (well summarized here). First, multiple observational studies have shown that high ferritin levels (particularly >1000 ng/mL) are associated with an increased frequency of bacterial infections. There are clear limitations here. First, this finding is not consistent across all studies (4 of 14 studies did not find this association). Second, ferritin is itself an acute phase reactant and is not necessarily always elevated due to iron overload. Third, most of these studies were done in the pre-ESA era, when much of this iron overload may have been due to transfusions. Finally, all of these studies were observational. As such, it is impossible to exclude the possibility that the risk of infection was due to other factors that also lead to higher iron requirements/ferritin levels.
The second strand of evidence is observational studies showing that higher doses or frequency of iron infusions are associated with a higher risk of future infection. Again, all of these studies are observational, so the same criticism as above applies. Similarly, the results are not consistent across all studies, although it should be said that the largest studies with the most patients did find an effect. However, in the most recent large study of ~120,000 Medicare patients, while there was an increase in the risk of infection with higher doses of iron, the HR for infection for high vs. low dose was only 1.05. None of these studies examined the effect of ongoing iron administration on the severity of infection in dialysis patients. One could argue that they are a case against using iron in dialysis patients at all (which is not realistic) rather than a case for stopping it during an infection.
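To put a hazard ratio of 1.05 in perspective, a rough conversion to absolute risk under proportional hazards (where the infection-free probability in the high-dose group is the low-dose probability raised to the power of the HR) shows how small the difference is; the baseline infection risk below is purely an illustrative assumption, not a figure from the study.

```python
# Back-of-the-envelope: translate a hazard ratio into an absolute risk difference
# under proportional hazards, where S_high(t) = S_low(t) ** HR.
# The baseline (low-dose) infection risk is an illustrative assumption only.
hr = 1.05
baseline_risk = 0.30                 # assumed 1-year infection risk, low-dose group
surv_low = 1 - baseline_risk         # infection-free probability, low dose
surv_high = surv_low ** hr           # implied infection-free probability, high dose
risk_high = 1 - surv_high

print(f"low dose:  {baseline_risk:.1%}")
print(f"high dose: {risk_high:.1%}")
print(f"absolute difference: {risk_high - baseline_risk:.2%}")
# Roughly a 1-percentage-point increase on a 30% baseline: detectable in
# ~120,000 patients, but clinically marginal.
```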
Today's email is thus especially welcome. This is a study published in CJASN which examines the relationship between ongoing iron use and severity of infection in Medicare patients. In total, 22,820 individuals who had received iv iron in the 14 days prior to admission with a bacterial infection were included. 10% of these patients also received iv iron while they were inpatients or shortly afterwards and were classified as the cases. The controls were individuals who did not receive iron (i.e., the iron was stopped per guidelines). Receipt of iv iron was not associated with increased short- or long-term mortality, length of stay, or likelihood of readmission within the next 30 days.
What is the explanation for this finding? One important question is why iv iron was continued after admission in those 10% of patients despite the clear consensus that it should be stopped. Looking at the table of demographic and clinical characteristics, nothing stood out as a reason for this difference, and there may have been unmeasured characteristics that explained it. The outcomes were hard (as they would have to be given the source of the data) but may miss some subtle differences in outcomes related to the infection itself. That said, mortality and readmission are probably the most important outcomes. Finally, this study was again observational and needs to be interpreted as such.
All that said, this is an excellent addition to the literature which raises many more questions than it answers. These are important questions - too much of what we do in nephrology is based on consensus rather than clinical trials. Perhaps this might stimulate someone to do the necessary trial to answer this question once and for all.
Gearoid McMahon
Monday, September 21, 2015
KDPI score and donor urinary biomarkers: poor predictors of graft outcome
KDPI (kidney donor profile index) is a numerical measure that combines ten donor variables, including clinical parameters and demographics, to express the quality of a donor kidney relative to other donors. It is reported by UNOS in an attempt to classify the quality spectrum of deceased-donor kidneys and allow better matching with recipient characteristics (previously reviewed by Andrew here).
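As a rough sketch of the arithmetic behind the score (with made-up coefficients and a made-up reference cohort, not the official OPTN calculator): the donor variables are combined into a Kidney Donor Risk Index (KDRI) as an exponentiated linear predictor from a Cox model, and the KDPI is then the percentile of that KDRI within a reference donor population.

```python
# Rough sketch of the KDPI arithmetic, NOT the official OPTN calculator:
# coefficients and the reference cohort below are placeholders for illustration.
import numpy as np

def kdri(donor, betas):
    """Hypothetical KDRI: exponentiated linear predictor over donor covariates."""
    return float(np.exp(sum(betas[k] * v for k, v in donor.items())))

def kdpi(donor_kdri, reference_kdris):
    """KDPI = percentage of reference donors with a lower (better) KDRI."""
    return 100.0 * np.mean(np.asarray(reference_kdris) < donor_kdri)

# Placeholder coefficients and a simulated reference cohort.
betas = {"age": 0.012, "creatinine": 0.22, "hypertension": 0.13}
reference_kdris = np.exp(np.random.default_rng(0).normal(0.8, 0.3, size=5000))

donor = {"age": 58, "creatinine": 1.4, "hypertension": 1}
d = kdri(donor, betas)
print(f"KDRI ~ {d:.2f}, KDPI ~ {kdpi(d, reference_kdris):.0f}%")
```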
One of the major limitations of the KDPI score is its poor predictive power, with a C-statistic of 0.6 (0.5 would be a coin flip) (Reese et al. JASN 2015). The score relies heavily on age (renal function varies widely within the same age group) and donor terminal creatinine (which may reflect acute kidney injury), and it does not take into account HLA matching or other immunologic risk factors. Therefore, the KDPI score is not a good predictor of graft outcome.
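For those less familiar with the C-statistic, it is the probability that, for a randomly chosen pair of patients with different outcomes, the score ranks the patient with the worse outcome higher; 0.5 is a coin flip. A minimal sketch with toy numbers (not real KDPI data) shows what a value around 0.6 looks like when the score distributions of the two outcome groups overlap heavily.

```python
# Minimal illustration of the C-statistic: the probability that, for a random
# pair with different outcomes, the score ranks the worse outcome higher.
# Toy data only; scores and outcomes are invented for illustration.
from itertools import product

def c_statistic(scores, outcomes):
    """Fraction of (event, non-event) pairs ranked concordantly; ties count 0.5."""
    events = [s for s, y in zip(scores, outcomes) if y == 1]
    non_events = [s for s, y in zip(scores, outcomes) if y == 0]
    concordant = sum(
        1.0 if e > n else 0.5 if e == n else 0.0
        for e, n in product(events, non_events)
    )
    return concordant / (len(events) * len(non_events))

# Toy KDPI-like scores (higher = worse expected outcome) and outcomes (1 = graft failure).
scores = [60, 70, 52, 50, 80, 30, 90, 55]
outcomes = [1, 0, 1, 0, 1, 0, 0, 0]
print(f"C-statistic = {c_statistic(scores, outcomes):.2f}")  # ~0.60: barely better than a coin flip
```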
In an attempt to improve this prediction, Reese et al. (JASN 2015) conducted a nice prospective study of deceased kidney donors and their respective recipients to assess associations between urinary biomarkers (NGAL, KIM-1, IL-18, L-FABP) in deceased-donor urine and three outcomes: donor AKI, recipient delayed graft function, and recipient graft function at 6 months post-transplant. Although donor urinary injury biomarkers were strongly associated with donor AKI, they provided limited value in predicting delayed graft function or early allograft function after transplant. The search for better predictive tools continues...
Figure above from Kidney Transplant iBook