Tuesday, October 26, 2010

Miracle drug?

A 72-year-old man who received a kidney transplant 5 years ago comes for a routine follow-up. He is quite active and recently traveled to Asia. His only major issue since transplant has been recurrent skin cancers requiring frequent resections. Today he complains of increased dyspnea. With the history of travel to Asia, I started to think of all the possible atypical bugs he might have caught in his immunosuppressed state. However, one aspect of his history was interesting: he had recently been switched to sirolimus in an attempt to decrease his skin cancers. Sirolimus is a macrolide compound produced by Streptomyces hygroscopicus. Its discovery is quite interesting, so let me digress here for a second.

Let’s head to the Pacific Ocean, more specifically to Easter Island, or Rapa Nui, a volcanic island and World Heritage Site. This island became famous for its 887 extant monumental statues, called moai (Figure), created by the early Rapanui people. Its original population was devastated by diseases introduced by European colonizers in the 1860s. A Canadian medical expedition in 1964 collected soil samples from which a compound with antifungal properties was isolated. They brought the compound back home, excited by its potential antimicrobial properties. The compound failed as an antibiotic; however, it demonstrated potent immunosuppressant properties. They called it rapamycin, after Rapa Nui.

When it was first introduced to the market, rapamycin promised to decrease CNI toxicity and improve allograft outcomes. However, side effects limited its use, and the promised improvement in graft survival did not materialize in clinical trials.

The main side effects of sirolimus therapy are thrombocytopenia, hyperlipidemia, edema, and rash. Rapamycin may also cause lung toxicity in up to 5% of patients, and this does not appear to be dose-dependent. The exact mechanism is unknown, but it usually resolves completely after stopping the drug. Risk factors include a late switch from tacrolimus, poor renal function and older age; onset typically occurs 3-14 months after the switch. Finally, rapamycin can also increase proteinuria in some patients, possibly related to podocyte toxicity via inhibition of VEGF and/or decreased expression of nephrin.

Going back to the case, the infectious workup was negative. We stopped his sirolimus with a presumed diagnosis of lung toxicity from rapamycin. Repeat imaging 2 weeks later showed complete resolution, and his symptoms abated. In general, sirolimus should not be used if the creatinine is above 2 mg/dL or proteinuria is above 500 mg/day. After a switch, it is necessary to monitor the creatinine carefully in transplant recipients, since sirolimus is a weaker immunosuppressive drug than tacrolimus, and it has shown an increased incidence of angioedema when combined with ACE inhibitors.

Despite all that, rapamycin has shown great promise in multiple fields, including reducing malignancies, decreasing vascular smooth muscle proliferation after stenting, decreasing LVH in patients with CKD, decreasing cysts in PKD and prolonging life!!!

As with the Rapa Nui people when the colonizers arrived, progress comes at a cost, and sometimes that cost is greater than we can tolerate.

Backflow

Recently we had a patient who, despite good adherence to dietary restrictions and 4-hour dialysis runs, had a high pre-dialysis potassium and an inadequate URR. She had an AVF with a history of multiple central stenoses and we suspected a recurrence. Her venous pressure wasn’t very high, but we asked the nurses to check for recirculation.

Recirculation occurs when blood from the venous side of the circuit re-enters the arterial side, reducing the efficiency of dialysis. It is usually the result of poor flow in the access, and the commonest cause is a venous stenosis leading to reduced outflow. It can also be caused by inappropriate needle placement, and some recirculation is inevitable with the use of dialysis catheters, particularly when the lines are reversed. The calculation of recirculation is based on the idea that, if none is occurring, the BUN in the peripheral circulation should be the same as the BUN in the arterial line. If recirculation is present, already-dialyzed blood from the venous port mixes with blood entering the arterial port, lowering the arterial-line BUN. The percentage recirculation is therefore calculated from the formula:

R = ([P - A] / [P – V]) x 100

where P= BUN periphery, A= BUN arterial line, V= BUN venous line and R= % recirculation

The issue that arises is from where to take the peripheral sample. Previously, the sample was taken from the contralateral arm. The problem with this is that mixing of blood returning from the AVF with returning systemic venous blood lowers the BUN in the heart relative to the periphery, leading to an overestimation of the BUN of the blood entering the AVF and therefore an overestimation of the percentage recirculation. This problem could be fixed by taking the sample from an artery but this is obviously impractical. Nowadays, most units use a protocol that involves stopping or slowing the blood flow through the dialyzer temporarily in order to take a sample from the arterial line which closely approximates the BUN of blood entering the AVF.

In our unit, the protocol involves taking arterial and venous samples, slowing the blood flow to 100mls/min for 30 seconds, withdrawing 10mls of blood and discarding it and then drawing the ‘peripheral’ sample from the arterial port. Although this technique probably underestimates recirculation slightly, it is sufficient to make the diagnosis. Recirculation of >15% is considered significant.
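As a sanity check, the formula and the >15% threshold can be wrapped in a few lines of Python; the BUN values below are hypothetical, purely for illustration:

```python
def recirculation_pct(p_bun: float, a_bun: float, v_bun: float) -> float:
    """Percent access recirculation: R = (P - A) / (P - V) x 100,
    where P, A and V are the peripheral, arterial-line and
    venous-line BUN (all in the same units, e.g. mg/dL)."""
    if p_bun == v_bun:
        raise ValueError("peripheral and venous BUN must differ")
    return (p_bun - a_bun) / (p_bun - v_bun) * 100

# Hypothetical samples: peripheral 70, arterial 60, venous 20 mg/dL
r = recirculation_pct(70, 60, 20)
print(f"Recirculation: {r:.1f}%")                 # 20.0%
print("significant" if r > 15 else "not significant")
```

Note that the result is only as good as the peripheral sample; as discussed above, a contralateral-arm "peripheral" BUN overestimates recirculation, while the slow-flow technique slightly underestimates it.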

In this case, the % recirculation approached 50% and the patient had a fistulogram which showed a stenosis in the venous limb of the AVF and her clearances improved following angioplasty.

Monday, October 25, 2010

Mild hypophosphatemia: Does it really cause muscle weakness?

I recently saw a patient with proximal muscle weakness and mild hypophosphatemia, and I wondered whether a serum phosphorus level of 2.2 mg/dL could be the cause. We know that moderate hypophosphatemia is associated with skeletal and smooth muscle dysfunction. Acute hypophosphatemia can lead to rhabdomyolysis, especially in those with chronic phosphate depletion (e.g. alcoholics). Severe hypophosphatemia is associated with metabolic encephalopathy and symptoms of irritability, paresthesias, and even coma.

A PubMed search specifically looking for muscle effects of mild hypophosphatemia returned one paper, which examined respiratory muscle function in an inpatient population. This small study evaluated 23 hospitalized patients with serum phosphate levels less than 2.5 mg/dL and compared them to a control group of 11 normophosphatemic inpatients. Mean age, length of stay, and number of predisposing factors for hypophosphatemia were not statistically different between the groups. The mean initial serum phosphate for the hypophosphatemic group was 1.9 +/- 0.4 mg/dL compared to 3.6 +/- 0.5 mg/dL for the normophosphatemic group, with a corresponding statistically significant difference in measures of respiratory muscle strength. After about 2 days of repletion, the mean serum phosphate in the hypophosphatemic group rose to 3.5 +/- 0.8 mg/dL, and there was no longer any difference in respiratory muscle strength between the two groups. Breaking the hypophosphatemic group down by phosphate level:

  • 43% (3/7) of those with phosphate levels 2.1-2.5 mg/dL demonstrated muscle weakness,
  • 75% (9/12) of those with phosphate levels 1.5-2.0 mg/dL demonstrated muscle weakness, and
  • all 4 of those with phosphate levels less than 1.5 mg/dL demonstrated muscle weakness.
Based on my brief literature search, it appears that even mild hypophosphatemia is associated with skeletal muscle weakness, and that this weakness is quickly reversible with repletion to normal levels.

Mary Lieu M.D.

Friday, October 22, 2010

The Secrets of Tassin

No, this is not the title of the next Dan Brown novel (although the quality of the writing may sometimes compare, you will not find phrases like “the famous man looked at the red cup” here). I’m referring to Tassin, France, a location famous in Nephrology circles for the fact that 95% of dialysis patients there achieve normotension without antihypertensive medication. Nephrologists in Tassin firmly believe in the importance of scrupulous attainment of dry weight (DW) using increased dialysis times. Maybe they have something to teach us.


The publication of the National Cooperative Dialysis Study was a seminal moment in the history of Nephrology. It was on the basis of this RCT that dialysis time was deemed not to be an important predictor of outcomes (based on a p-value of 0.056), and the love affair with Kt/V(urea) effectively began. Nowadays, although Kt/V(urea) targets are slavishly met, hemodialysis patients continue to experience high rates of complications such as hypertension, LVH, CHF, hyperphosphatemia, malnutrition and death. Set this against the superior outcomes seen with longer treatments such as nocturnal HD, and you begin to wonder if they may be onto something in Tassin. There, longer dialysis times aren’t just instituted for their own sake; they permit the attainment of target dry weights that are almost impossible to reach in a shorter session i.e. it’s not the time that’s important, it’s what you do with it that matters. Here I’ll present some of their clinical pearls for achieving DW based on several review articles they have written on the subject:


First, a clinically meaningful definition of DW: "that body weight at the end of dialysis at which the patient can remain normotensive without antihypertensive medication, despite fluid accumulation, until the next dialysis."


  1. At DW, a patient’s BP should remain in the normal range during the entire interdialytic period. If BP remains high after dialysis or is elevated before the next session, they are, by definition, above their DW.
  2. Dialysis session times of 5-6 hours are usually required, particularly when determining the DW for the first time. Trying to achieve the necessary ultrafiltration over a shorter time will cause hypotension and cramping, and lead to treatment failure.
  3. Go slowly! It takes 2-3 months to achieve DW in a new dialysis patient. During this time carefully controlled persistent UF and a strict low salt diet are used, while antihypertensive medications are weaned off entirely.
  4. It is essential that all BP medications be tapered down and stopped early in the process. Otherwise it will be impossible to achieve DW.
  5. Hypotension and cramping will often occur when nearing DW, and are a common cause of treatment failure. These symptoms do not indicate that a patient has reached DW; rather, the patient has hit their maximum refill capacity (Crit-Line monitors predict hypotensive episodes but do not assess dry weight, for the same reason). If a patient remains hypertensive while experiencing such symptoms, longer dialysis times are indicated to achieve UF goals.
  6. Be aware of the “lag phenomenon”. BP does not immediately change in response to changes in volume. Blood pressure may only normalize a few weeks after ECV has returned to normal.
  7. Do not wait for obvious signs of volume overload (oedema, hypertension, etc.). Pay attention to subtle signs such as headache or a slight increase in BP at the end of a session.
  8. Weight falls rapidly after initiating dialysis due to saline removal. However, as a rule of thumb, weight should return to pre-initiation levels after 1 year on dialysis due to muscle and fat build up, with BP now under control (see figure).
  9. In difficult cases, ambulatory blood pressure monitoring is an invaluable tool, as it gives the best estimate of the 'true' interdialytic BP (see point 1).
The road to an accurate determination of DW is hard and long. Expect some lightheadedness, cramping and bouts of intense frustration. For the patient, it can be even worse. However, as a great man once said, “Everything is possible. The impossible just takes longer…”

Thursday, October 21, 2010

Pancreas transplant pearls

Pancreas transplantation is considered the treatment of choice for patients with refractory type 1 diabetes mellitus. The first pancreas transplant was described in 1967 by Kelly et al. (pictured) and was performed with a simultaneous kidney transplant in a 28-year-old woman with type 1 diabetes. Early efforts were associated with a high complication rate, but this has improved over time. Pancreas transplantation now usually occurs in three settings:

1. Simultaneous Pancreatic and Kidney transplant (SPK) from the same deceased donor

2. Pancreas after Kidney transplant (PAK): generally from two different deceased donors at two different time points

3. Pancreas transplant alone: for patients with severe Type I DM, but relatively spared kidney function.

Recurrent severe hypoglycaemic events are the most common indication. See Melissa’s review on outcomes here. Some of the variations in surgical technique and acute management strategies are discussed below.

Bladder vs enteric duct drainage
Bladder – advantages include the easy availability of urine amylase measurements, which can be used as a marker of graft function, and the fact that biopsies can be obtained relatively easily across the bladder wall at cystoscopy. The major complication of the bladder-drainage technique is loss of bicarbonate-rich fluid causing metabolic acidosis and volume depletion. Additional problems include bladder leak, reflux pancreatitis, chemical cystitis/urethritis, bladder infections, bladder tumours, bladder calculi, urethral stricture, epididymitis, prostatitis, and prostatic abscesses.

Enteric – because of the relatively high risk of complications with bladder drainage, improvements in immunosuppression, and less need for frequent monitoring, enteric drainage has become the favoured method. Indeed, approximately 35% of bladder-drained grafts ultimately require enteric conversion due to complications. Currently, around 80% of SPKs in the United States are performed with enteric drainage.

Systemic vs Portal drainage of pancreatic venous effluent
The Meeting Place, St. Pancras Station, London
Historically, systemic venous drainage has been the more common procedure, primarily for technical reasons. Systemic venous drainage results in higher overall insulin levels (two to three times higher than with portal drainage), as the secreted insulin does not undergo first-pass metabolism in the liver, in contrast to drainage into the portal system. There is limited, and conflicting, evidence as to which method has the better outcome.

Immediate outcomes
In most cases of pancreatic transplantation glucose concentrations normalize immediately following implantation of the pancreas graft. However, delayed onset of normoglycaemia can result from size mismatch of the graft, arterial or venous graft thrombosis, graft injury during retrieval or transport, pancreatitis or acute rejection.
HbA1c is usually normal by one month after the operation.

C peptide levels can be measured as a surrogate of insulin levels. Also, if for some reason exogenous insulin reintroduction is required, the C peptide level can be used to monitor for recovery of endogenous insulin production from the graft.

In acute pancreatic rejection, inflammation tends to be directed towards the acinar tissue, rather than the islets in the initial stages of disease – therefore, loss of glycaemic control is a relatively late marker of acute rejection. In SPK patients, it is unusual (less than 15%) for pancreatic rejection to occur in the absence of concomitant kidney rejection; therefore a rising creatinine should ring alarm bells for both organs.

In those with bladder drainage, a fall in urinary amylase of 25% from baseline on two consecutive measurements more than 12 hours apart can signal underlying rejection. The fall in urinary amylase generally occurs 24-48 hours before the development of hyperglycaemia.


Hopefully some of these pearls will prove useful when next managing a new pancreatic transplant recipient on your service.

Finnian Mc Causland MD

Wednesday, October 20, 2010

Disequilibrium syndrome

I had a moment of panic a while back when one of my patients had a seizure after his second ever hemodialysis treatment. I was concerned that perhaps the dialysis disequilibrium syndrome (DDS) might have contributed to the event.

The DDS is an acute neurologic complication of hemodialysis. Symptoms can range from mild (headache, nausea, vomiting, muscle cramps) to severe (altered mental status, seizure, coma and rarely death). These tend to occur during dialysis or shortly after. Minor symptoms can occur in chronic patients but the major forms manifest in previously undialyzed individuals.

Patients at risk for the syndrome include those with a very high blood urea nitrogen (>130 mg/dL), those with CKD (as opposed to AKI) and those with high urea clearance rates during their initial hemodialysis session.

DDS is thought to be caused by cerebral edema due to a lag in osmolar shifts between the blood and brain during dialysis, but changes in cerebral pH may also contribute. Preventative strategies include:
1) Low initial urea clearance
2) High initial dialysate sodium and glucose levels
3) Administering osmotically active substances

Clearance of urea can be lowered by slowing the blood and dialysate flow rates, shortening the dialysis session and reducing the size of the dialyzer. The Daugirdas Handbook recommends an initial urea reduction ratio of 40%. Using a higher dialysate sodium or glucose allows movement of sodium and glucose into the serum, raising serum osmolarity and attenuating the blood-brain gradient. Giving mannitol has a similar effect.
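For reference, the urea reduction ratio is just one minus the post/pre BUN ratio; a quick sketch with hypothetical numbers:

```python
def urr_pct(pre_bun: float, post_bun: float) -> float:
    """Urea reduction ratio: URR = (1 - post/pre) x 100, in percent."""
    return (1 - post_bun / pre_bun) * 100

# Hypothetical gentle first run for a high-BUN patient, aiming near 40%
print(f"URR: {urr_pct(150, 90):.0f}%")   # 40%
```

A full session in a chronic patient typically achieves a much higher URR; the point here is deliberately limiting the first treatments.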

In our patient, the initial BUN was in the 150 mg/dL range. Our urea reduction ratios were 30% and 40% on the first and second runs respectively, with mannitol and a dialysate sodium of 145 mEq/L used for both. The seizure occurred twelve hours after the second dialysis session.

Further workup did not reveal any clear toxic, cerebrovascular or metabolic cause. I’m not sure what role dialysis played in the seizure but we did do our best to try and minimize the risk of DDS. A final pearl I picked up from the case on seizures post initial HD: the rapid correction of acidosis in undialyzed uremic patients can drop the ionized calcium concentration and precipitate seizures. Our patient’s ionized calcium was also normal.

Graham Abra, MD

Tuesday, October 19, 2010

Appropriate or inappropriate: stop guessing

In the evaluation of patients with hyponatremia (serum Na less than 135 mEq/L), differentiating hypovolemia from euvolemia is often challenging, particularly when the history and physical findings are unrevealing, and this frequently leads to misdiagnosis. The conundrum exists even for nephrologists and often leads to guesswork. Measuring antidiuretic hormone (AVP) levels is not helpful, as most cases of hyponatremia involve either an appropriate or an inappropriate elevation of AVP. And despite frequent reliance on central venous pressure (CVP) to determine volume status, CVP is rarely measured in hypovolemic or euvolemic states and its accuracy is debatable.

Reliance on urine biochemical parameters therefore becomes necessary. Urine electrolytes, particularly a low urine sodium (less than 30 mEq/L), are often used to distinguish the two conditions but can be misleading: up to 30% of patients with hypovolemia may have an elevated urine sodium and, on the flip side, up to 40% of SIADH patients may have a low urine sodium (from low salt intake). A recent CJASN review of SIADH covers this nicely. Urine osmolality is often elevated (greater than 100 mosm/kg) in both and does not distinguish hypovolemia from euvolemia. Recent studies have therefore suggested that the combined use of urine sodium (UNa), fractional excretion of sodium (FENa) and fractional excretion of urea (FEUrea) best differentiates these two states (diuretic use, renal failure, hypocortisolism and hypothyroidism excluded).

1) In patients with adequate urine flow (urine/plasma creatinine ratio less than 140), a UNa less than 30 mEq/L, FENa less than 0.5% and FEUrea less than 55% indicate hypovolemia.

2) In patients with low urine flow (urine/plasma creatinine ratio greater than 140), an even lower FENa (less than 0.15%) and FEUrea (less than 45%) are recommended to capture all cases of hypovolemia.
Low serum uric acid levels (less than 4 mg/dL) and an increased fractional excretion of uric acid (greater than 12%) are also useful in differentiating SIADH from hypovolemia, the major exception being salt-wasting syndromes (click here for a review or here for a discussion of CSW vs. RSW). In these patients, unlike in SIADH, correction of the hyponatremia does not improve the hypouricemia and uricosuria, likely due to a persistent proximal tubular defect impairing uric acid reabsorption. Phosphaturia (FEPO4 greater than 20%), for the same reason, also favors salt wasting, at least initially. Moreover, plasma renin activity and plasma aldosterone levels are elevated in salt wasting but low in SIADH.
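All the fractional excretions above share one formula, FEx = (Ux x PCr) / (Px x UCr) x 100. A sketch of the proposed decision rule follows; the cutoffs are those quoted above, while the function names and the lab values in the example are my own hypothetical choices:

```python
def fe_pct(u_x: float, p_x: float, u_cr: float, p_cr: float) -> float:
    """Fractional excretion of solute x (%): (Ux * PCr) / (Px * UCr) * 100.
    Urine and plasma values of x must share units, as must the creatinines."""
    return (u_x * p_cr) / (p_x * u_cr) * 100

def suggests_hypovolemia(u_na, p_na, u_urea, p_urea, u_cr, p_cr):
    """Apply the cutoffs quoted above (diuretic use, renal failure,
    hypocortisolism and hypothyroidism excluded)."""
    fena = fe_pct(u_na, p_na, u_cr, p_cr)
    feurea = fe_pct(u_urea, p_urea, u_cr, p_cr)
    if u_cr / p_cr < 140:                      # adequate urine flow
        return u_na < 30 and fena < 0.5 and feurea < 55
    return fena < 0.15 and feurea < 45         # low urine flow

# Hypothetical spot sample: UNa 20, PNa 140 mEq/L; Uurea 300, Purea 20 mg/dL;
# UCr 100, PCr 1.0 mg/dL  ->  FENa ~0.14%, FEUrea 15%
print(suggests_hypovolemia(20, 140, 300, 20, 100, 1.0))  # True
```

A result of True only suggests hypovolemia under the studies' assumptions; it does not replace clinical assessment.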

Although the interesting findings in these studies need further validation, they offer a different perspective and help us move away from complete reliance on assessment of volume status, an exercise that often involves guesswork, in making a correct diagnosis of hyponatremia. Hopefully, the next time we encounter hyponatremia, this new approach will help us stop guessing.


Viresh Mohanlal, MD.

Monday, October 18, 2010

Does high dietary fructose cause hypertension?

Average fructose intake in the U.S. is staggering: Americans currently consume approx. 150 pounds of sugar a year, which is 20 times the amount their European ancestors would have ingested from fruit (Data from USDA). This intake is mainly due to added sugar, high-fructose corn syrup (HFCS) in particular, in sugary soft drinks and bakery products. HFCS was introduced in the 1960s and, since then, annual per capita consumption has increased from 0 to 29 kilograms in 2001, while sucrose intake has decreased from 44 to 30 kg.


This increase in fructose consumption correlates with a rise in the prevalence of hypertension, and evidence from animal models suggests fructose can raise BP. In a cross-sectional study of 4,528 participants from NHANES without a history of hypertension, dietary fructose ≥ 74 g/d (corresponding to 2.5 sugary soft drinks) was independently associated with elevated SBP after adjusting for demographics, comorbidities, physical activity, total kilocalorie intake, and dietary confounders such as total carbohydrate, alcohol, salt, and potassium intake. Furthermore, in a prospective analysis of 810 adults by Chen et al., cutting out one sugar-sweetened beverage per day was associated with a 1.8/1.1-mmHg reduction in BP over 18 months after controlling for potential confounders.


How fructose might raise BP is not known, but an interesting hypothesis implicates uric acid metabolism. Unlike other sugars, fructose causes an increase in the production of uric acid in animal models. Hyperuricemia increases juxtaglomerular renin production and decreases macula densa neuronal NO synthase expression, causing renal vasoconstriction, sodium retention and increased BP. Persistent renal vasoconstriction and preglomerular microvascular disease leads to the development of salt-sensitive HTN, even after hyperuricemia is corrected. In addition, uric acid may induce vascular smooth muscle cell proliferation and preglomerular arteriopathy following its uptake via probenecid-sensitive urate-transport channels (URAT1) in vascular smooth muscle cells.


Although the investigators in the NHANES study above adjusted for diet, residual confounding may also explain the association between HFCS and HTN i.e. soft drinks go hand in hand with a burger and fries, leading to salt-loading. So, along with a low-salt diet or DASH, is it time to start recommending low-fructose diets to delay pre-hypertension, incident hypertension and metabolic syndrome?


Boonsong Kiangkitiwan M.D.

Thursday, October 14, 2010

MMF and diarrhea

About 40% of patients treated with mycophenolate after transplantation develop GI side-effects, the commonest of which is diarrhea. A number of strategies can be employed to reduce the severity of these complaints. First, the total daily dose can be divided into 3 or 4 doses instead of the usual two in order to reduce the peak drug level. If this doesn’t work, the dose can be reduced, although this may increase the risk of acute rejection.

Myfortic, an enteric-coated mycophenolate was introduced in an attempt to reduce GI side-effects associated with the drug. Initial studies suggested that although it was as effective in preventing rejection, it was not associated with any reduction in GI problems. However, more recent studies have demonstrated that in the subset of patients with severe GI side-effects associated with the drug, switching from mycophenolate to myfortic led to a significant reduction in symptoms. One thing that must be considered prior to switching is the difference in cost. Mycophenolate is available as a generic and the monthly cost is around $100 while myfortic costs around $1000 monthly which puts it beyond the reach of some patients.

Histologically, the appearance is that of an ulcerating colitis and it usually resolves completely on stopping the drug. However, there is a subgroup of patients in whom the colitis persists even after cessation.

So why do some patients treated with mycophenolate get diarrhea? Mycophenolic acid is metabolized in the liver to an acyl glucuronide, which induces TNF-α production in mononuclear cells. TNF-α disrupts the epithelium, activates endothelial cells and promotes inflammation which, combined with reduced intestinal cell regeneration, may be the mechanism of the colitis. A recent case report in NDT lends credence to this theory. A renal transplant patient developed colitis which persisted 8 weeks after stopping mycophenolate. All other potential causes of colitis were ruled out, and the patient was then treated with a single infusion of infliximab, a chimeric IgG1 monoclonal antibody that neutralizes TNF-α. Within 72 hours the diarrhea had resolved, and the patient was eventually discharged on azathioprine.

This is only a single case report and does not necessarily prove the hypothesis but it gives an interesting insight into a difficult and common condition.

Tuesday, October 12, 2010

Is CKD a bubble?

KDOQI guidelines emerged in 2002, classifying CKD into 5 stages based on the presence of kidney damage or a GFR below 60 ml/min, irrespective of cause. This classification increased awareness of CKD in the general population and placed the kidney in the spotlight of many health policy discussions. However, one concern is that it overestimates CKD, particularly in the elderly.

When this classification is applied to the general population, more than 25 million people in the US are estimated to have CKD, with more than 40% of CKD in those over 70 years old. The question then becomes: does the lower GFR in the elderly truly reflect disease, or is it just a consequence of aging? Does a lower GFR in the elderly without other comorbidities lead to an increased risk of complications, like CV disease? Is there a way to better classify these patients, taking into account their prognosis? Finally, if we compare two patients with an eGFR of 45 ml/min and different degrees of proteinuria (below 30 mg/day vs. 700 mg/day), do they carry the same risks?

To address these concerns, a revised classification has been proposed. The two major changes are:

- subdivision of CKD stage III into IIIA (GFR 45-59 ml/min) and IIIB (30-44 ml/min).

- addition of albuminuria categories to every CKD stage: normal (less than 30), 30-299, 300-1,999 and 2,000 mg/day or greater.

These changes are hoped to better risk-stratify patients, help guide medical care and improve global outcomes. By the way, the British published a similar guideline in 2008.
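A minimal sketch of the proposed two-axis staging, using the thresholds listed above (the function names are my own, and for simplicity the sketch ignores the requirement for evidence of kidney damage in stages I-II):

```python
def gfr_stage(egfr: float) -> str:
    """CKD stage from eGFR (ml/min/1.73 m2), with the proposed IIIA/IIIB split."""
    if egfr >= 90: return "I"
    if egfr >= 60: return "II"
    if egfr >= 45: return "IIIA"
    if egfr >= 30: return "IIIB"
    if egfr >= 15: return "IV"
    return "V"

def albuminuria_category(mg_per_day: float) -> str:
    """Proposed albuminuria bands (mg/day)."""
    if mg_per_day < 30: return "normal"
    if mg_per_day < 300: return "30-299"
    if mg_per_day < 2000: return "300-1,999"
    return ">=2,000"

# The two patients from the question above: same eGFR, different risk bands
print(gfr_stage(45), albuminuria_category(20))    # IIIA normal
print(gfr_stage(45), albuminuria_category(700))   # IIIA 300-1,999
```

The point of the second axis is exactly this: two patients who share a GFR stage can land in very different albuminuria (and hence risk) categories.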

In general, I like the new classification, especially since I truly believe that proteinuria is a helpful marker of increased risk and I don’t believe most elderly patients should be classified as having progressive chronic kidney disease, with subsequent renal consultation. However, I am not sure how it is going to affect my daily practice. I will continue to add my ACEI for those with proteinuria… And hope the readers of this blog will discover some new intervention in the near future that will improve my patients' renal outcomes… I have faith in you guys!

Monday, October 11, 2010

Tumor lysis syndrome and acute kidney injury

The May issue of AJKD (Vol.55, No.5) had a nice supplement on TLS and AKI. Here is the link.

Tumor lysis syndrome (TLS) describes a condition with significant clinical and laboratory abnormalities caused by rapid and massive tumor cell death. Occurring either spontaneously or after chemotherapy, TLS is a medical emergency and is associated with significant morbidity and, if untreated, mortality.

The risk of developing TLS varies with the type of malignancy:

1. High risk: Burkitt Lymphoma, Lymphoblastic Lymphoma, B-ALL, acute ALL (WBC > 100K), acute AML (>50K monoblastic WBCs)

2. Intermediate risk: Diffuse large B-cell lymphoma, acute ALL (WBC 10-50K), acute AML (WBC 10-50K), CLL (WBC 10-100K, treated with fludarabine), malignancies with rapid proliferation with expected rapid response to therapy

3. Low risk: indolent NHL, acute ALL with WBC less than 10K

TLS is defined as follows:

1. Definition of Laboratory TLS (any 2 or more criteria within 3 days before or 7 days after chemotherapy):

- Uric Acid > 8mg/dl or 25% increase from baseline
- Potassium > 5 mEq/L or 25% increase from baseline
- Phosphorus > 6.5 mg/dl (children) or > 4.5mg/dl (adults) or 25% increase from baseline
- Calcium less than 7mg/dl or 25% decrease from baseline

2. Definition of clinical TLS (laboratory TLS plus at least one of the below criteria):
- Creatinine greater than 1.5 times the upper limit of the age-adjusted normal range
- Cardiac arrhythmias or sudden death
- Seizure
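The definitions above lend themselves to a simple checklist. Here is a sketch using only the absolute thresholds (the 25%-change-from-baseline arms are omitted for brevity, and the function names are my own):

```python
def laboratory_tls(uric_acid, potassium, phosphorus, calcium, adult=True):
    """Laboratory TLS: any 2 or more criteria met (absolute thresholds only;
    uric acid, phosphorus and calcium in mg/dL; potassium in mEq/L)."""
    criteria = [
        uric_acid > 8,
        potassium > 5,
        phosphorus > (4.5 if adult else 6.5),
        calcium < 7,
    ]
    return sum(criteria) >= 2

def clinical_tls(lab_tls, creatinine, uln_creatinine,
                 arrhythmia=False, seizure=False):
    """Clinical TLS: laboratory TLS plus at least one clinical criterion."""
    return lab_tls and (creatinine > 1.5 * uln_creatinine
                        or arrhythmia or seizure)

# Hypothetical post-chemotherapy labs: uric acid 9.2, K 5.6, phos 5.0, Ca 6.8
lab = laboratory_tls(9.2, 5.6, 5.0, 6.8)
print(lab, clinical_tls(lab, creatinine=2.1, uln_creatinine=1.2))  # True True
```

Remember the timing window in the full definition: the laboratory criteria must occur within 3 days before or 7 days after chemotherapy.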


Acute Kidney Injury in TLS occurs mainly through crystal deposition including
a) uric acid
b) calcium phosphate

Other aggravating factors include:
c) volume depletion / hypotension / CHF
d) extrinsic urinary obstruction
e) pre-existing CKD
f) nephrotoxic medications such as NSAID
g) radiocontrast exposure
h) sepsis

Prevention and treatment of the AKI caused by TLS focus on reversing the above factors, aggressive volume repletion, and treatment of electrolyte abnormalities (hyperkalemia, hyperphosphatemia). The primary goal is to minimize or treat hyperuricemia by either:

Decreasing production of uric acid with allopurinol, which blocks xanthine oxidase,
and/or
Converting uric acid into water-soluble allantoin with rasburicase (urate oxidase)

Of these approved treatments for hyperuricemia, allopurinol can lead to uric acid-independent crystal deposition through accumulation of xanthine products. In addition, elevated urinary uric acid levels may lead to crystal-independent AKI by altering renal autoregulation and causing vasoconstriction. Routine urinary alkalinization to prevent uric acid crystals is controversial, since it may increase the risk of renal calcium phosphate deposition. Dialysis should be initiated when indicated, using standard criteria for AKI.

Acetaminophen & the kidney

As a new renal fellow I’ve felt fairly comfortable with the list of NSAID-associated renal conditions. But after taking care of a patient this past month with fulminant hepatic failure due to a Tylenol overdose, it’s been interesting to find that acetaminophen has a bit of a list of its own…

1) Acetaminophen induced ATN – Our patient presented after intentionally taking 40 grams of Tylenol in a suicide attempt. On presentation he had sediment and urine chemistries supportive of ATN and subsequently developed fulminant hepatic failure.

Acute tubular necrosis has been reported to occur both in the presence and absence of hepatotoxicity. The mechanism of renal injury has not been well defined. Mouse models suggest a possible role of acetaminophen induced endoplasmic reticulum stress with subsequent renal epithelial cell apoptosis.

Unlike liver injury, there is no evidence that N-Acetylcysteine attenuates renal injury.

2) Analgesic nephropathy – One of the many causes of chronic interstitial nephritis.

Analgesic nephropathy appears to result from chronic exposure to at least two anti-pyretic analgesics (one of which is sometimes acetaminophen) along with caffeine or codeine. On CT the kidneys often have a characteristic shrunken bumpy appearance with accompanying papillary calcifications.

3) Metabolic gap acidosis secondary to 5-Oxoproline accumulation – 5-Oxoproline is an intermediate metabolite in the gamma-glutamyl cycle, shown below in a figure from a nice review in CJASN.

The gamma-glutamyl cycle produces glutathione, which is important in the conjugation and urinary excretion of the acetaminophen metabolite NAPQI.

5-Oxoproline accumulation is hypothesized to occur through a variety of mechanisms. Chronic acetaminophen ingestion may lead to a depletion of intracellular glutathione stores. This leads to lack of feedback inhibition of gamma-glutamylcysteine synthetase, which in turn leads to a rise in gamma-glutamylcysteine which is partially converted to 5-Oxoproline.

Malnourishment may play a role by depleting hepatic glutathione stores. A possible difference between male and female enzyme activity in the gamma-glutamyl cycle may account for the female predominance of reported cases, and renal dysfunction may decrease 5-Oxoproline excretion.

Recognition of this rare entity is key, as stopping acetaminophen often leads to resolution of the acidosis. N-acetylcysteine has been used successfully in a couple of cases; it theoretically helps by replenishing intracellular glutathione stores.

Graham Abra, MD

Friday, October 8, 2010

Key to angiotensin formation solved? Preeclampsia linked to oxidative stress

Researchers from the UK published a new and exciting article exploring one of the fundamental mechanisms of blood pressure regulation by the renin-angiotensin system (RAS). This research was, per the authors, 20 years in the making and was fittingly published this week in Nature. The RAS is a multi-enzymatic system with multiple layers of control (click here for a review of the RAS from Nate). The first step of this process is the cleavage of angiotensinogen by the enzyme renin to form angiotensin I (Ang I). Precisely how renin acts upon angiotensinogen has not, until now, been completely elucidated.

The researchers were able to almost completely resolve the structure of angiotensinogen (a large 452-amino-acid protein) by x-ray crystallography to a resolution of 2.1 Å. This showed that the angiotensin cleavage site was inaccessibly buried in the N-terminal tail of this large protein.
They went on to show that when angiotensinogen is oxidized, this region changes shape to permit ready access of this site to renin, which in turn cleaves off a 10-amino-acid portion termed Ang I. The supplemental data shows a nice 3D movie of this interaction. The oxidized form of angiotensinogen is able to bind to the pro-renin receptor in tissues and has a 4-fold higher catalytic activity for Ang I formation than the reduced form. The authors hypothesize that under conditions of oxidative stress, angiotensinogen is found more often in the oxidized form, leading to hypertension.

Lastly, as proof of principle, they showed that patients with preeclampsia had a higher amount of the oxidized form of angiotensinogen in their plasma. Typically the ratio of reduced to oxidized angiotensinogen is 40:60 in normal individuals; in preeclampsia this ratio is 30:70. They speculate that oxidative stress may be sufficient to cause the hypertension associated with preeclampsia. Interestingly, a mutation in the angiotensinogen gene has been reported to cause increased catalytic activity of this protein.

In conclusion, this report opens up many new investigative avenues in the field of hypertension research. Not only does this report hint at a possible role for anti-oxidants in hypertension therapy, but a new target for drug modulation of this complex system now exists. However, more research will be needed to confirm how changes in angiotensinogen structure lead to alterations in blood pressure homeostasis.

Thursday, October 7, 2010

Is this normal?

When seeing consults on the maternity floor, figuring out the normal values at the various stages of pregnancy can be a frustrating experience. There are well-recognized hemodynamic and biochemical changes which occur from the earliest stages of a normal pregnancy, and failure to recognize them could lead to misdiagnosis. Until now, determining what was normal or abnormal could be difficult, but last year a group from Texas published a meta-analysis which pulled together various sources to provide a comprehensive list of normal values for commonly used lab tests during the different trimesters.

See this previous post concerning some of the renal adaptations associated with a normal pregnancy.

Board question: Transplant answer


The correct answer is D.
The criteria for expanded-criteria donors (ECD) are:
  • age greater than 60 years old
OR
  • age 50 to 59 years old and at least two of the following:
  1. History of hypertension
  2. Terminal creatinine > 1.5 mg/dL
  3. Death from a cerebral vascular accident
This new category of renal transplantation donors was introduced in 2002 to help expand the pool of potential kidney donors. ECD kidneys are associated with a 70% higher risk of graft failure compared with non-ECD transplants. About 20% of deceased-donor transplants in the US in 2008 were ECD kidneys. The 3-year graft survival for ECD transplants was 67% in 2003; by comparison, 3-year graft survival for non-ECD and living-donor transplants was 80% and 88%, respectively. Since the supply of suitable donor kidneys fails to match the increasing demand, nephrologists will need to determine whether ECD transplants are appropriate for their patients. The decision must be individualized and patient-centered, since we have not yet identified the cohort that benefits most from ECD transplants. Expect to see a question regarding ECD transplants on the renal boards!
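The ECD definition above is essentially a small decision rule, and can be sketched in a few lines. This is an illustrative sketch only (function and parameter names are mine), and it assumes the conventional cutoff of age 60 or older for the first arm:

```python
def is_ecd(age, hypertension, terminal_creatinine_mg_dl, death_from_cva):
    """Sketch of the expanded-criteria donor (ECD) definition listed above."""
    if age >= 60:
        return True
    if 50 <= age <= 59:
        # At least two of the three risk factors are required in this age band
        risk_factors = sum([
            hypertension,                       # history of hypertension
            terminal_creatinine_mg_dl > 1.5,    # terminal creatinine > 1.5 mg/dL
            death_from_cva,                     # death from cerebrovascular accident
        ])
        return risk_factors >= 2
    return False
```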

Michael Lattanzio DO

Tuesday, October 5, 2010

Renal sympathetic nerve ablation for refractory hypertension

Hypertension continues to be a major public health concern. Efforts to manage this sometimes difficult-to-treat condition have only started to positively impact patient health. In some instances hypertension can be refractory to intense medical therapy, and it is difficult to ascribe exactly why medical therapy is ineffective. Some have postulated that this failure to achieve adequate BP control is due to ineffective physician prescribing patterns and/or patient non-adherence to lifelong meds for an asymptomatic illness. However, a certain subset of patients clearly have hypertension that is not amenable to pharmacological intervention (termed resistant hypertension). A group in Australia published an interesting article in Lancet (April 2009) about a novel catheter-based technique for renal sympathetic denervation as a new therapeutic avenue for resistant hypertension.

Blood pressure homeostasis is achieved by the coordinated action of several bodily systems and the kidney plays a prominent role. The renal sympathetic efferent nerves contribute to volume and BP homeostasis as they innervate the renal tubules, vasculature, and juxtaglomerular apparatus, all of which can impact BP. Historically, surgical lumbar sympathectomy was used for reduction of “resistant hypertension” before effective antihypertensive medications were available. This approach was complicated by significant side effects, such as postural hypotension, syncope, and impotence. Selective renal denervation may offer help for patients with resistant hypertension. With the emergence of interventional techniques for selective ablation of efferent nerves, enter this intriguing study.
The study was performed in Australia and Europe as a proof-of-principle study. This was NOT a randomized clinical trial.
It showed that this novel catheter-based device produced renal denervation and had a substantial decrease in BP in a select group of 45 patients with resistant hypertension.
  1. Mean baseline office SBP and DBP were 177 ± 20 and 101 ± 15 mm Hg
  2. eGFR was 81 ± 23 mL/min/1.73 m2
  3. Patients were on an average of 4.7 BP meds.
The catheter-based radiofrequency sympathetic nerve ablation resulted in
  1. Renal denervation with a 47% reduction in renal noradrenaline spillover (a marker of sympathetic efferent activity)
  2. 43/45 had no adverse events. 1 patient had renal artery dissection treated with stent. 1 patient had pseudoaneurysm of the femoral artery.
  3. Office SBP and DBP after the procedure (while maintaining patients on their usual meds) were decreased by 27/17 mm Hg at 12 months
  4. eGFR was reported to be stable from baseline (79 ± 21 mL/min/1.73 m2) to 6 months' follow-up (83 ± 25 mL/min/1.73 m2), with 6 of 25 patients having an increase > 20% in eGFR and only 1 patient with a decrease in eGFR.
  5. Data related to the mechanism of the hypotensive response, such as natriuresis or suppression of renin, angiotensin II, and plasma catecholamines, were not reported.
Catheter-based ablation of the renal artery sympathetic nerves offers a novel approach to resistant hypertension. Several limitations are immediately apparent. First, as a proof-of-principle study, a control group was lacking. Secondly, identifying which patients would benefit from such an intervention is not clear. This study was performed in centers with substantial experience in this procedure; adverse event rates would likely be much more significant in less experienced centers. I can imagine that damage to renal parenchyma could occur from a variety of mechanisms using this technique (contrast, atheroemboli, bleeding, etc). Lastly, it is not known how long the BP-lowering benefit of catheter-based ablation would last. I will be curious to see the results of a randomized controlled trial (RCT) of catheter-induced renal sympathetic denervation.

Friday, October 1, 2010

Journals as ranked by nephrology fellows


RFN wanted to know which journals were most helpful for nephrology fellows in training. As a busy nephrology fellow it can be difficult to find time to sift through the plethora of nephrology-related journals. Interestingly, CJASN was chosen by 53% of the respondents as being most helpful. This is quite a feat in that CJASN's inception date was January of 2006!! A recent podcast by ASN Kidney News discusses the transition of editors from founding editor-in-chief William Bennett to new editor-in-chief Gary Curhan. NEJM came in a close second at 40%. This was followed by JASN, KI, AJKD and NDT.

Screening tests for multiple myeloma

I recently saw a biopsy for a patient who was suspected of having myeloma-related kidney disease, whose biopsy slides were sent to the pathology department at BWH. Along with them came results from a slew of screening tests, including serum free light chains (SFLCs). Since at BWH we usually screen for monoclonal immune deposition diseases (MIDD) with SPEP and UPEP only, I decided to review the indications and utility of available myeloma screening methods, including SFLCs.

In SPEP, serum is aliquoted onto an agarose gel through which an electrical current is run. Serum proteins, which carry a net negative charge at the running pH, separate on the gel, with the more negatively charged ones migrating faster toward the anode. Monoclonal gammopathies are identified when an M spike, a large quantity of a single globulin (represented by a dense, narrow band on the gel), is found. A positive SPEP is followed by a serum immunofixation (SIFE) assay, in which soluble antibodies against the immunoglobulin are added to the serum, precipitating out the monoclonal protein and allowing better characterization. SPEP is sensitive for identifying production of intact monoclonal immunoglobulins (light chain + heavy chain), which comprise about 80% of multiple myelomas. It is much less sensitive when only a light chain is being produced, which happens in about 15% of myelomas.

In contrast, UPEP is much more sensitive for picking up light chain-producing myelomas. Free urine light chains, also called Bence-Jones proteins, are more readily picked up due to the lower protein concentration in urine, which allows for greater sample concentration. Urine proteins can also be fixed by soluble antibodies for further analysis (UIFE).

How are myelomas that do not secrete light or heavy chains (nonsecretory myelomas, about 3% of total cases) detected? Recently, it has been shown that almost 70% produce detectable levels of serum free light chains (SFLCs). SFLCs are detected using an antibody raised against a unique epitope on kappa and lambda light chains that is exposed only on light chains not bound to immunoglobulin heavy chains. SFLCs have also been shown to be more sensitive than UPEP in detecting light chain-secretory myelomas: in one series, 25% of patients with light chain MM had positive M spikes on UPEP, versus 54% with positive SFLC tests. The increased sensitivity of SFLCs may be attributable to renal proximal tubular absorption of light chains, lowering their concentration in the urine (and on UPEP).

Are there any disadvantages to the SFLC assay? Two identified by the International Myeloma Working Group include variability of 10-20% between lots of the antibody, and falsely low SFLC levels in states of antigen excess. To my surprise, SFLC tests are not prohibitively expensive: Katzmann et al estimated that SFLCs are about half the cost of UPEP/UIFE assays.

It seems there is a strong case for ordering SFLCs in our patients with renal disease suspected of having a monoclonal gammopathy. I would be curious to know whether other institutions are using it for screening, and their experiences. Renal function needs to be taken into account when interpreting SFLCs (kappa light chains are cleared more readily than lambda, resulting in a kappa:lambda ratio of about 0.6 at baseline). This was reviewed by Finnian here.
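To make the renal adjustment concrete, interpretation of the kappa:lambda ratio can be sketched as below. The function name is mine, and the reference ranges (roughly 0.26–1.65 with normal renal function, and a wider "renal" range of roughly 0.37–3.1 in significant CKD, where reduced clearance raises the ratio) are illustrative assumptions, not values from the original post; always use your own lab's ranges:

```python
def flc_ratio_category(kappa_mg_l, lambda_mg_l, renal_impairment=False):
    """Classify a serum free kappa:lambda ratio against an (assumed)
    reference range, widened when renal function is impaired."""
    ratio = kappa_mg_l / lambda_mg_l
    # Assumed illustrative ranges; substitute local laboratory values.
    low, high = (0.37, 3.1) if renal_impairment else (0.26, 1.65)
    if ratio > high:
        return "suggests monoclonal kappa"
    if ratio < low:
        return "suggests monoclonal lambda"
    return "within reference range"
```

For example, the baseline ratio of about 0.6 mentioned above falls within the normal-renal-function range, while the same kappa excess that is borderline with normal kidneys may still be "within range" once the wider renal interval is applied.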