Lecture topics: Malignancy



Malignancy


Dr. Hussein Mohammed Juma
Specialist in Internal Medicine
Arab Board
Mosul College of Medicine
2011

May 5, 2011 —

Radical prostatectomy appears to be a wise choice for men with early-stage prostate cancer who are younger than 65 years, according to new data from a Swedish randomized clinical trial that compares surgery with "watchful waiting."
The study shows that, at 15 years, the cumulative incidence of death from prostate cancer was 14.6% among 347 men randomized to prostatectomy and 20.7% among 348 men being observed without treatment.
'Very, Very Important Paper' on Prostate Cancer Surveillance: Longest-Running Trial of Watchful Waiting

However, the survival benefit was confined to men younger than 65 years of age, according to the study authors, led by Anna Bill-Axelson, MD, PhD, from the University Hospital in Uppsala, Sweden.
For men older than 65 years, survival was highly similar in the 2 groups.


The trial (SPCG-4) was conducted in men with predominantly symptom-detected early prostate cancer. All the men had clinical stage T1 or T2 disease, well or moderately well differentiated histologic findings, and a prostate-specific antigen (PSA) level below 50 ng/mL.

"This is the best information we have to date on the extent to which treatment will influence outcomes in men with early prostate cancer," said H. Ballentine Carter, MD, from Johns Hopkins University in Baltimore, Maryland. Dr. Carter was not involved with the study and was approached for comment by Medscape Medical News.

However, the results do not rule out the value of watchful waiting or active surveillance, said Dr. Carter.
"The study strongly supports the role of treatment and the role of surveillance," he said.
The results from this study indicate that a patient's age and related life expectancy are apparently pivotal to receiving benefit from watchful waiting in early prostate cancer, Dr. Carter explained. He interpreted the study findings for clinicians and patients.

"If you are young and have a long life expectancy — 15 years or more — then you need to be treated for prostate cancer," he said, adding that this even applies to men with very-low- or low-risk prostate cancer.
"If you are an older man — 65 to 70 years old — and you have low- or very-low-risk disease, your first consideration should be whether or not treatment is necessary," he pointed out. Such men should "consider being monitored."

Dr. Carter described the randomized controlled trial as a "very, very important paper," saying it was "very carefully done" research.
The fact that Dr. Carter advocates for possible surveillance among certain older men is in keeping with what he knows from his own experience.
He is the senior author of a recently published study of 769 men enrolled initially in active surveillance in which there have been no known prostate-cancer-specific deaths after an average follow-up of about 3 years.

"This study offers the most conclusive evidence to date that active surveillance may be the preferred option for the vast majority of older men diagnosed with a very low-grade or small-volume form of prostate cancer," he said about his study.

Men With Low-Risk Disease Also Benefited From Treatment

SPCG-4 enrolled men from 1989 to 1999; they now have a median follow-up of 12.8 years, which allowed the authors to make 15-year estimates.
The study authors had previously shown that radical prostatectomy provided a survival benefit as well as a reduction in the risk for metastases (J Natl Cancer Inst. 2008;100:1144-1154). The updated data continue to show these benefits, but over a longer period of time.


The "most important new finding" from SPCG-4 is that a subgroup of men with low-risk disease received a survival benefit from radical prostatectomy, said Matthew Smith, MD, PhD, in an editorial accompanying the study. Dr. Smith is a radiation oncologist at the Massachusetts General Hospital Cancer Center in Boston.

Low risk was defined as a PSA level of less than 10 ng/mL and a tumor with a Gleason score of less than 7 or a World Health Organization grade of 1 in the preoperative biopsy specimens. There were a total of 124 men in the radical-prostatectomy group and 139 in the watchful-waiting group who qualified as low risk.
With respect to death from prostate cancer among these low-risk men, the absolute between-group difference at 15 years was 4.2 percentage points (6.8% for the radical-prostatectomy group vs 11.0% for the watchful-waiting group).

This corresponds to a relative risk of 0.53 (95% confidence interval, 0.24 to 1.14; P = .14), according to the authors. This survival benefit for the low-risk men who received surgery might, however, not be "relevant" for many men who have low-risk prostate cancer detected today, Dr. Smith points out.

That is because most of the men in SPCG-4 had cancers detected on the basis of symptoms rather than by elevated PSA levels.
To illustrate what differences can arise out of these varying methods of detection, Dr. Smith notes that, in SPCG-4, the number needed to treat with prostatectomy to prevent 1 death at 15 years was 15. "The predicted number needed to treat is substantially greater for contemporary men with low-risk prostate cancers detected by PSA screening because the rates of death from prostate cancer are lower in this group," he writes.
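For readers who want to see where a number needed to treat comes from, the short sketch below reproduces the arithmetic from the cumulative-incidence figures quoted above (14.6% vs 20.7% overall; 6.8% vs 11.0% in the low-risk subgroup). The rounded inputs land close to, but not exactly on, the published NNT of 15, which is based on the trial's unrounded estimates.

```python
def number_needed_to_treat(risk_control: float, risk_treated: float) -> float:
    """NNT = 1 / absolute risk reduction (both risks expressed as fractions)."""
    arr = risk_control - risk_treated
    if arr <= 0:
        raise ValueError("No absolute risk reduction; NNT is undefined.")
    return 1.0 / arr

# 15-year cumulative incidence of death from prostate cancer, as quoted in the article.
overall = number_needed_to_treat(risk_control=0.207, risk_treated=0.146)   # ~16
low_risk = number_needed_to_treat(risk_control=0.110, risk_treated=0.068)  # ~24

print(f"Overall cohort:    NNT ~ {overall:.0f}")
print(f"Low-risk subgroup: NNT ~ {low_risk:.0f}")
```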

Surgery Benefit Only in Younger Men: Novelty Questioned

The study authors say that their finding that only younger men benefited from surgery is novel in the literature. "The finding that the effect of radical prostatectomy is modified by age has not been confirmed in other studies of radical prostatectomy or external-beam radiation," they point out.

They suspect that, contrary to their findings to date, surgery might have some survival benefit for some older men.
"The apparent lack of effect in men older than 65 years of age should be interpreted with caution because, owing to a lack of power, the subgroup analyses may falsely dismiss differences," they write.
The data have hints that surgery has a positive effect in at least some older men, they say.

"At 15 years, there was a trend toward a difference between the 2 groups in the development of metastases," they write about the watchful-waiting and surgery groups.
The study stipulated that men treated with surgery who progressed should receive hormonal therapy (as opposed to observed men who progressed — they received surgery). That might have allowed some men to die from other diseases, say the authors. "Therefore, competing risks of death may blur the long-term effects of treatment," they write.

Two years of rituximab maintenance therapy after immunochemotherapy as first-line treatment for follicular lymphoma significantly improves progression-free survival (PFS).
The Lancet 21 December 2010


The Science and Art of Prostate Cancer Screening
Despite the clinical availability of prostate-specific antigen (PSA) screening for nearly a quarter of a century, there are still differences of opinion as to whether such screening is worthwhile. In an attempt to directly address this controversy, early results from two large randomized trials were published in 2009.

The European Randomized Study of Screening for Prostate Cancer (ERSPC) was a combined analysis of prospective randomized European trials consisting of a total of 162,243 subjects aged 55–69 years who were screened or observed at intervals up to 4 years and recommended for biopsies when the PSA levels were elevated (primarily ≥3.0 ng/mL) (1).

The Prostate, Lung, Colorectal, and Ovarian Cancer Screening (PLCO) study enrolled 76,693 men aged 55–74 years but screened them on an annual basis and recommended biopsies for PSA levels greater than 4.0 ng/mL (2). The ERSPC study showed a 20% relative reduction in prostate cancer mortality (rate ratio = 0.80, 95% confidence interval = 0.65 to 0.98, P = .04) (1) in the screened group, whereas the PLCO study did not show any statistically significant change in prostate cancer mortality (rate ratio = 1.13, 95% confidence interval = 0.75 to 1.70).

In this issue of the Journal, Zeliadt et al. (3) examine the use of PSA testing in the Veterans Health Administration system following publication of the ERSPC and PLCO studies. They found that PSA testing declined by 5.5–9.1% (2.2–3.0 absolute percentage points) depending on the age group studied. Although the results were statistically significant, they were relatively small in comparison to changes following the release of information from other large clinical trials (4). For example, prescriptions for estrogen use during the 9 months following publication of the negative results from the Women’s Health Initiative declined by 32% (4).

It should come as no surprise that the amount of change in health-care provider and patient behavior following publication of the ERSPC (1) and PLCO (2) studies was more modest than that observed following the Women’s Health Initiative (4). Overdiagnosis and overtreatment are much more common in prostate cancer screening than in screening for breast, colorectal, or cervical cancer (5). For example, nearly one-third to one-half of PSA screened patients may be overdiagnosed [ie, cancer is found in men who would not have clinical symptoms during their lifetime (6,7)], and most of these men with low-risk prostate cancer proceed to aggressive local therapy (eg, surgery or radiation) (8).

The net effect is that many men (one-third to one-half of the 82/1000 men diagnosed by screening) are needlessly treated to realize a moderate benefit (absolute decrease of 0.71 deaths/1000 men), and some may consider this degree of overtreatment too high. Others, however, have interpreted the data as more equivocal (9) and, in the case of some guidelines (10,11), even supportive of screening. When faced with data that could be interpreted as neither strongly supportive nor decidedly unfavorable, it is natural that health-care providers and their patients might not substantially alter their practices in regard to PSA screening and, therefore, the results of Zeliadt et al. (3) should not be unexpected.
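As a back-of-the-envelope restatement of the figures quoted in this paragraph (82 screen-detected cancers and 0.71 prostate-cancer deaths averted per 1000 men, with one-third to one-half of detected cancers overdiagnosed), the sketch below works out the number needed to screen and the number of overdiagnosed men per death averted. It is an illustration of the quoted numbers only, not an independent estimate.

```python
# Figures quoted in the editorial, all per 1000 men screened.
cancers_detected_per_1000 = 82.0
deaths_averted_per_1000 = 0.71
overdiagnosis_fractions = (1 / 3, 1 / 2)  # one-third to one-half of detected cancers

# Number needed to screen to avert one prostate-cancer death.
nns = 1000.0 / deaths_averted_per_1000
print(f"Number needed to screen per death averted: ~{nns:.0f}")

# Overdiagnosed (needlessly treated) men per death averted, at both ends of the range.
for frac in overdiagnosis_fractions:
    overdiagnosed = cancers_detected_per_1000 * frac
    print(f"Overdiagnosis fraction {frac:.2f}: "
          f"~{overdiagnosed:.0f} overdiagnosed per 1000 screened, "
          f"~{overdiagnosed / deaths_averted_per_1000:.0f} per death averted")
```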

The uncertainty and limitations of PSA screening have long been recognized, and the anticipated benefits, if present, have always been thought to be moderate. These assumptions have been the basis for the requirement for the large sample sizes and extended follow-up in the design of PSA screening trials such as PLCO and ERSPC.

Many have tried to improve on the usefulness of PSA testing by adding other variables (eg, race, family history, or history of prostatic disease) or measures of PSA (eg, free PSA, PSA isoforms, or the rate of rise of PSA levels [PSA velocity]) into the decision-making process, but despite extensive research, no magic formula that incorporates such PSA values or calculations, along with the results of several other variables, has emerged to substantially improve the accuracy of screening.

As a consequence, organizations such as the National Comprehensive Cancer Network (NCCN) have recommended that other variables and derivations of PSA testing be considered but have not provided explicit instructions on how they should actually be used.

Using the control arm of the Prostate Cancer Prevention Trial that randomly assigned healthy men to finasteride or placebo, Vickers et al. (14) in this issue of the Journal assessed whether information about PSA velocity (change in PSA over an 18- to 24-month period) increased the accuracy of screening when added to standard PSA values, digital rectal examination results, family history of prostate cancer, or a history of a prostate biopsy.


The authors found that triggering biopsies based on the commonly recommended PSA velocity threshold of greater than 0.35 ng/mL per year found in several guidelines would lead to a large number of additional biopsies, with close to one in seven men ultimately receiving a biopsy, compared with one in 20 men when 4.0 ng/mL is used as the cutoff. Because PSA velocity did not enhance outcomes or improve the detection of more aggressive cancers, the authors conclude that PSA velocity did not add predictive accuracy beyond PSA values alone and note that one would be better off lowering the threshold for biopsy rather than adding PSA velocity as a criterion for biopsy.
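The two biopsy-triggering rules being contrasted here can be made explicit. The sketch below estimates PSA velocity as the slope of a least-squares line through serial PSA values (one common way of computing it; the literature uses several variants) and compares a velocity trigger of 0.35 ng/mL per year with a simple 4.0 ng/mL threshold. The function names and the example values are ours, chosen to show how the velocity rule can flag a patient whose absolute PSA is still well below 4.0 ng/mL.

```python
from datetime import date

def psa_velocity(samples: list[tuple[date, float]]) -> float:
    """Least-squares slope of PSA (ng/mL) versus time in years: ng/mL per year."""
    t0 = samples[0][0]
    xs = [(d - t0).days / 365.25 for d, _ in samples]
    ys = [psa for _, psa in samples]
    n = len(xs)
    x_mean, y_mean = sum(xs) / n, sum(ys) / n
    num = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys))
    den = sum((x - x_mean) ** 2 for x in xs)
    return num / den

# Hypothetical serial PSA values over roughly 24 months.
history = [(date(2009, 1, 15), 2.1), (date(2010, 1, 20), 2.5), (date(2011, 1, 18), 2.9)]

velocity = psa_velocity(history)
latest_psa = history[-1][1]

# The two triggers contrasted in the editorial.
velocity_trigger = velocity > 0.35      # ng/mL per year
threshold_trigger = latest_psa > 4.0    # ng/mL

print(f"PSA velocity ~ {velocity:.2f} ng/mL/yr -> biopsy by velocity rule: {velocity_trigger}")
print(f"Latest PSA = {latest_psa} ng/mL -> biopsy by 4.0 ng/mL rule: {threshold_trigger}")
```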

So how do these two studies influence our clinical practices? PSA testing in the Veterans Health Administration system is less frequent than in the general US population (16) and may also differ in other ways compared with the general US health-care system, so we must be careful not to overgeneralize the results of Zeliadt et al. (3). Nonetheless, the data from Zeliadt et al. (3) suggest that a possible interpretation of the results of the PLCO and ERSPC studies may be that the net results did not clearly argue for or against screening.

Under these circumstances, it would be reasonable to continue to adhere to a shared decision-making model between patient and physician, as recommended by most current guidelines, when determining whether to proceed with PSA screening.

The results from Vickers et al. suggest that using PSA velocity may not provide more information to either physician or patient as we try to come to a decision about interpreting the results of any screening. In addition, PSA velocity measurements take time to acquire, and recognizing that such data add relatively little information may help prevent inappropriate postponement of follow-up in affected patients. Avoiding the wait to acquire subsequent PSA values may also help reduce some of the anxiety associated with testing.

The studies by Zeliadt et al. and Vickers et al. help us refine and focus our clinical approach, but they also remind us that the use of PSA as a screening tool still leaves much to be desired. Indeed, after more than 20 years of PSA screening, it has been estimated that approximately 1 million men may have been unnecessarily treated for clinically insignificant prostate cancer.
© The Author 2011. Published by Oxford University Press.

The shortcomings of PSA testing also remind us that there is still much art to the diagnosis and treatment of prostate cancer and that we, like the medieval physician Maimonides, must rely not only on our scientific skills but also on a combination of clear vision, kindness, and sympathy, as we see our patients through this often challenging disease.

Treatment of Localized Prostate Cancer in the Era of PSA Screening

Watchful waiting is appropriate in older men with localized cancers.
Many experts are concerned about overly aggressive treatment of clinically insignificant prostate cancer that is detected by prostate-specific antigen (PSA) screening. Researchers used linked Medicare and cancer registry databases to assess mortality in 14,516 older men (age, >65; median age, 78) with diagnoses of localized (stage T1 or T2) prostate cancer during the early era of PSA testing (1992–2002) who did not die or receive prostatectomy or radiation during the first year after their diagnoses (median follow-up, about 8 years).

For patients with highly, moderately, and poorly differentiated cancers, 10-year prostate cancer–specific mortality was 8%, 9%, and 26%, respectively; mortality from all other causes was about 60% in all three groups. Most men received androgen-deprivation therapy, but only about 2% received chemotherapy, and about 1% underwent spinal surgery or radiation therapy during follow-up.

Comment: Ten-year prostate cancer–specific mortality in older men with localized disease was roughly 60% lower than that in historical controls from the 1970s and 1980s (before PSA screening was used widely). This finding could be due to several factors, including lead-time bias (apparent reduced mortality simply because cancer was diagnosed earlier but followed the same course), changes over time in how prostate cancers are graded, detection by PSA testing of less-aggressive disease than is detected by prostate palpation, or improved medical care. In any case, after diagnosis of highly or moderately differentiated cancer in older men, watchful waiting appears to be appropriate.
Published in Journal Watch General Medicine October 15, 2009



Malignancy

Early Detection of Apoptosis: easy differentiation of apoptosis and necrosis



Annexin V kits allow for the rapid, specific, and quantitative identification of apoptosis in individual cells. Annexin V is a calcium-dependent phospholipid-binding protein. During apoptosis, an early and ubiquitous event is the exposure of phosphatidylserine at the cell surface. Trevigen's Annexin V kits have been cited in over 300 peer-reviewed research articles. Trevigen offers Annexin V conjugated to either FITC or to biotin for the detection of cell surface changes during apoptosis. The Annexin V conjugates are supplied with an optimized binding buffer and propidium iodide. Propidium iodide may be used on unfixed samples to determine the population of cells that have lost membrane integrity, an indication of late apoptosis or necrosis.

Apoptosis: Trevigen's TUNEL-based assays are available in several formats with multiple options for labeling and counterstaining. The TACS•XL® kit is based on incorporation of bromodeoxyuridine (BrdU) at the 3' OH ends of the DNA fragments that are formed during apoptosis. The TACS® 2 TdT kits utilize Trevigen's unique cation optimization system as well as a number of proprietary reagents to enhance labeling within particular tissue types. Kits are available with several different label options, enabling a system to be configured for a given laboratory's experimental conditions.


Diagnostic Criteria, Diagnostic Evaluation, and Staging System for Multiple Myeloma.

Diagnostic criteria
• At least 10% clonal bone marrow plasma cells
• Serum or urinary monoclonal protein
• Myeloma-related organ dysfunction (CRAB criteria)
• Hypercalcemia (serum calcium >11.5 mg/dl [2.88 mmol/liter])
• Renal insufficiency (serum creatinine >2 mg/dl [177 μmol/liter])
• Anemia (hemoglobin <10 g/dl or >2 g/dl below the lower limit of the normal range)
• Bone disease (lytic lesions, severe osteopenia, or pathologic fracture)
nejm.org march 17, 2011
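As a rough illustration of how the criteria above combine, the sketch below encodes them as a simple check. The thresholds are taken directly from the list (with the anemia criterion simplified to hemoglobin <10 g/dl); the function and field names are ours, and real diagnostic work-up is considerably more nuanced.

```python
from dataclasses import dataclass

@dataclass
class MyelomaWorkup:
    clonal_plasma_cell_pct: float      # % clonal plasma cells in bone marrow
    monoclonal_protein_present: bool   # serum or urinary M-protein
    serum_calcium_mg_dl: float
    serum_creatinine_mg_dl: float
    hemoglobin_g_dl: float
    lytic_bone_disease: bool

def crab_features(w: MyelomaWorkup) -> list[str]:
    """Myeloma-related organ dysfunction, per the CRAB thresholds listed above."""
    features = []
    if w.serum_calcium_mg_dl > 11.5:
        features.append("hypercalcemia")
    if w.serum_creatinine_mg_dl > 2.0:
        features.append("renal insufficiency")
    if w.hemoglobin_g_dl < 10.0:
        features.append("anemia")
    if w.lytic_bone_disease:
        features.append("bone disease")
    return features

def meets_diagnostic_criteria(w: MyelomaWorkup) -> bool:
    """All three elements of the listed criteria must be present."""
    return (w.clonal_plasma_cell_pct >= 10.0
            and w.monoclonal_protein_present
            and len(crab_features(w)) > 0)

patient = MyelomaWorkup(clonal_plasma_cell_pct=30, monoclonal_protein_present=True,
                        serum_calcium_mg_dl=12.1, serum_creatinine_mg_dl=1.1,
                        hemoglobin_g_dl=9.2, lytic_bone_disease=True)
print(crab_features(patient))              # ['hypercalcemia', 'anemia', 'bone disease']
print(meets_diagnostic_criteria(patient))  # True
```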


Diagnostic evaluation
• Medical history and physical examination
• Routine testing: complete blood count; chemical analysis with calcium and creatinine; serum and urine protein electrophoresis with immunofixation; quantification of serum and urine monoclonal protein; measurement of free light chains
• Bone marrow testing: trephine biopsy and aspirate of bone-marrow cells for morphologic features; cytogenetic analysis and fluorescence in situ hybridization for chromosomal abnormalities
• Imaging: skeletal survey; magnetic resonance imaging if the skeletal survey is negative

Prognosis
• Routine testing: serum albumin, β2-microglobulin, lactate dehydrogenase.

Staging

International Staging System
Stage I: serum β2-microglobulin <3.5 mg/liter, serum albumin ≥3.5 g/dl
Stage II: serum β2-microglobulin <3.5 mg/liter plus serum albumin <3.5 g/dl; or serum β2-microglobulin 3.5 to <5.5 mg/liter regardless of serum albumin level
Stage III: serum β2-microglobulin ≥5.5 mg/liter
Chromosomal abnormalities
High-risk: presence of t(4;14) or deletion 17p13 detected by fluorescence in situ hybridization
Standard-risk: t(11;14) detected by fluorescence in situ hybridization.
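A minimal sketch of the International Staging System logic described above; the cut-offs are those in the list, the function is ours, and it deliberately ignores the chromosomal risk stratification and later revisions of the ISS.

```python
def iss_stage(beta2_microglobulin_mg_l: float, albumin_g_dl: float) -> int:
    """International Staging System for multiple myeloma (stages I-III)."""
    if beta2_microglobulin_mg_l >= 5.5:
        return 3
    if beta2_microglobulin_mg_l < 3.5 and albumin_g_dl >= 3.5:
        return 1
    # Everything else: beta2-microglobulin < 3.5 mg/liter with albumin < 3.5 g/dl,
    # or beta2-microglobulin 3.5 to < 5.5 mg/liter regardless of albumin.
    return 2

print(iss_stage(2.8, 4.1))  # 1
print(iss_stage(2.8, 3.0))  # 2
print(iss_stage(4.2, 4.0))  # 2
print(iss_stage(6.0, 3.8))  # 3
```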


Malignancy

Suggested Approach to the Treatment of Newly Diagnosed Multiple Myeloma.

Several of the listed drug regimens are currently being evaluated in investigational trials. These include combination induction therapy
with bortezomib and dexamethasone plus cyclophosphamide or lenalidomide, maintenance therapy with thalidomide or lenalidomide in
younger patients, and melphalan–prednisone–lenalidomide followed by maintenance therapy with lenalidomide in elderly patients.


If autologous stem-cell transplantation is delayed until the time of relapse, bortezomib-based regimens should be continued for eight cycles,
whereas lenalidomide-based regimens should be continued until disease progression or the development of intolerable side effects.
The New England Journal of Medicine.

In Western countries, the frequency of myeloma is likely to increase in the near future as the population ages. The recent introduction of thalidomide, lenalidomide, and bortezomib has changed the treatment paradigm and prolonged survival of patients with myeloma.
nejm.org march 17, 2011

At diagnosis, regimens that are based on bortezomib or lenalidomide, followed by autologous transplantation, are recommended in transplantation-eligible patients.
Combination therapy with melphalan and
prednisone plus either thalidomide or bortezomib
is suggested in patients who are not eligible
for transplantation.

Maintenance therapy with thalidomide or lenalidomide improves progression-free survival, but longer follow-up is needed to assess the effect on overall survival. At relapse, combination therapies with dexamethasone plus bortezomib, lenalidomide, or thalidomide or with
bortezomib plus liposomal doxorubicin are widely used. In the case of cost restrictions, combinations including glucocorticoids, alkylating agents, or thalidomide should be the minimal requirement for treatment.


The Tumor Lysis Syndrome
The tumor lysis syndrome is the most common disease-related emergency encountered by physicians caring for children or adults with hematologic cancers. Although it develops most often in patients with non-Hodgkin’s lymphoma or acute leukemia, its frequency is increasing among
patients who have tumors that used to be only rarely associated with this complication.

The tumor lysis syndrome occurs when tumor cells release their contents into the bloodstream, either spontaneously or in response to therapy, leading to the characteristic findings of
hyperuricemia, hyperkalemia, hyperphosphatemia,
and hypocalcemia.
These electrolyte and metabolic disturbances can progress to clinical toxic effects, including renal insufficiency, cardiac arrhythmias, seizures,and death due to multiorgan failure.

Pathophysiology

When cancer cells lyse, they release potassium, phosphorus, and nucleic acids, which are metabolized into hypoxanthine, then xanthine, and finally uric acid, an end product in humans (Fig. 1). Hyperkalemia can cause serious, and occasionally fatal, dysrhythmias. Hyperphosphatemia can cause secondary hypocalcemia, leading to neuromuscular irritability (tetany), dysrhythmia, and seizure, and can also precipitate as calcium phosphate crystals in various organs (e.g., the kidneys, where these crystals can cause acute kidney injury).


Uric acid can induce acute kidney injury not
only by intrarenal crystallization but also by crystal-independent mechanisms, such as renal vasoconstriction,
impaired autoregulation, decreased
renal blood flow, oxidation, and inflammation.
Tumor lysis also releases cytokines that cause a
systemic inflammatory response syndrome and
often multiorgan failure.

The tumor lysis syndrome occurs when more potassium, phosphorus, nucleic acids, and cytokines are released during cell lysis than the body's homeostatic mechanisms can deal with. Renal excretion is the primary means of clearing urate, xanthine, and phosphate, which can precipitate in any part of the renal collecting system. The ability of the kidneys to excrete these solutes makes clinical tumor lysis syndrome unlikely without the previous development of nephropathy and a consequent inability to excrete solutes quickly enough to cope with the metabolic load.

Crystal-induced tissue injury occurs in the tumor lysis syndrome when calcium phosphate, uric acid, and xanthine precipitate in renal tubules and cause inflammation and obstruction. A high level of solutes, low solubility, slow urine flow, and high levels of cocrystallizing substances favor crystal formation and increase the severity of the tumor lysis syndrome.


High levels of both uric acid and phosphate render patients with the tumor lysis syndrome at particularly high risk for crystal-associated acute kidney injury,
because uric acid precipitates readily in the presence
of calcium phosphate, and calcium phosphate
precipitates readily in the presence of uric acid.
Also, higher urine pH increases the solubility of
uric acid but decreases that of calcium phosphate.
In patients treated with allopurinol, the accumulation
of xanthine, which is a precursor of uric acid and has low solubility regardless of urine pH, can lead to xanthine nephropathy or urolithiasis.

Calcium phosphate can precipitate throughout the body. The risk of ectopic calcification is particularly high among patients who receive intravenous calcium. When calcium phosphate precipitates in the cardiac conducting system, serious, possibly fatal, dysrhythmias can occur.

Acute kidney injury developed in our patient as a result of the precipitation of uric acid crystals and calcium phosphate crystals and was exacerbated by dehydration and acidosis that developed because the tumor lysis syndrome had not been suspected and no supportive care was provided.



Malignancy



Figure 1. Lysis of Tumor Cells and the Release of DNA, Phosphate, Potassium, and Cytokines.
The graduated cylinders shown in Panel A contain leukemic cells removed by leukapheresis from a patient with T-cell
acute lymphoblastic leukemia and hyperleukocytosis (white-cell count, 365,000 per cubic millimeter). Each cylinder contains
straw-colored clear plasma at the top, a thick layer of white leukemic cells in the middle, and a thin layer of red
cells at the bottom. The highly cellular nature of Burkitt’s lymphoma is evident in Panel B (Burkitt’s lymphoma of the
appendix, hematoxylin and eosin). Lysis of cancer cells (Panel C) releases DNA, phosphate, potassium, and cytokines.
DNA released from the lysed cells is metabolized into adenosine and guanosine, both of which are converted into xanthine.
nejm.org may 12, 2011

Xanthine is then oxidized by xanthine oxidase, leading to the production of uric acid, which is excreted by the kidneys.When the accumulation of phosphate, potassium, xanthine, or uric acid is more rapid than excretion, the tumor lysis syndrome develops. Cytokines cause hypotension, inflammation, and acute kidney injury, which increase the risk for the tumor lysis syndrome. The bidirectional dashed line between acute kidney injury and tumor lysis syndrome indicates that acute kidney injury increases the risk of the tumor lysis syndrome by reducing the ability of the kidneys to excrete uric acid, xanthine, phosphate, and potassium.

By the same token, development of the tumor lysis syndrome can cause acute kidney injury by renal precipitation of uric acid, xanthine, and calcium phosphate crystals and by crystal-independent mechanisms. Allopurinol inhibits xanthine oxidase (Panel D) and prevents the conversion of hypoxanthine and xanthine into uric acid but does not remove existing uric acid.
In contrast, rasburicase removes uric acid by enzymatically degrading it into allantoin, a highly soluble product that has no known adverse effects on health.

Management

Optimal management of the tumor lysis syndrome should involve preservation of renal function. Management should also include prevention of dysrhythmias and neuromuscular irritability.


Prevention of acute kidney injury
All patients who are at risk for the tumor lysis
syndrome should receive intravenous hydration
to rapidly improve renal perfusion and glomerular
filtration and to minimize acidosis (which lowers urine pH and promotes the precipitation of uric acid crystals) and oliguria (an ominous sign). This is usually accomplished with hyperhydration by means of intravenous fluids (2500 to 3000 ml per square meter per day in the patients at highest risk). Hydration is the preferred method of increasing urine output, but diuretics may also be necessary.
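To put the hyperhydration figure into per-patient terms, the sketch below converts the 2500 to 3000 ml per square meter per day range into a daily volume for a given body-surface area. The Mosteller BSA formula used here is our assumption for illustration (any standard BSA estimate would do); it is not specified in the article.

```python
import math

def bsa_mosteller(height_cm: float, weight_kg: float) -> float:
    """Body-surface area (m^2) by the Mosteller formula -- our assumption."""
    return math.sqrt(height_cm * weight_kg / 3600.0)

def daily_hydration_ml(bsa_m2: float, ml_per_m2_low=2500, ml_per_m2_high=3000):
    """Daily IV fluid range quoted for patients at highest risk of tumor lysis."""
    return bsa_m2 * ml_per_m2_low, bsa_m2 * ml_per_m2_high

bsa = bsa_mosteller(height_cm=175, weight_kg=70)   # ~1.84 m^2 for this example patient
low, high = daily_hydration_ml(bsa)
print(f"BSA ~ {bsa:.2f} m^2 -> {low:.0f} to {high:.0f} ml/day of IV fluid")
```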

In a study involving a rat model of urate nephropathy with elevated serum uric acid levels induced by continuous intravenous infusion of high doses of uric acid, high urine output due to treatment with high-dose furosemide or to congenital diabetes insipidus (in the group of rats with this genetic modification) protected the kidneys equally well, whereas acetazolamide (mild diuresis) and bicarbonate provided only moderate renal protection (no more than a low dose of furosemide without bicarbonate).

Hence, in patients whose urine output remains low after achieving an optimal state of hydration, we recommend the use of a loop diuretic agent (e.g., furosemide) to promote diuresis, with a target urine output of at least 2 ml per kilogram per hour.
Reducing the level of uric acid, with the use of allopurinol and particularly with the use of rasburicase, can preserve or improve renal function and reduce serum phosphorus levels as a secondary beneficial effect. Although allopurinol prevents the formation of uric acid, existing uric acid must still be excreted.

The level of uric acid may take 2 days or more to decrease, a delay that allows urate nephropathy to develop. Moreover, despite treatment with allopurinol, xanthine may accumulate, resulting in xanthine nephropathy.
Since the serum xanthine level is not routinely
measured, its effect on the development of acute
kidney injury is uncertain. By preventing xanthine
accumulation and by directly breaking down uric
acid, rasburicase is more effective than allopurinol
for the prevention and treatment of the tumor
lysis syndrome.


In a randomized study of the use of allopurinol versus rasburicase for patients at risk for the tumor lysis syndrome, the mean serum phosphorus level peaked at 7.1 mg per deciliter (2.3 mmol per liter) in the rasburicase group (and mean uric acid levels decreased by 86%, to 1 mg per deciliter [59.5 μmol per liter] at 4 hours) as compared with 10.3 mg per deciliter (3.3 mmol per liter) in the allopurinol group (and mean uric acid levels decreased by 12%, to 5.7 mg per deciliter [339.0 μmol per liter] at 48 hours).

The serum creatinine level improved (decreased) by 31% in the rasburicase group but worsened (increased) by 12% in the allopurinol group. Pui and colleagues documented no increases in phosphorus levels and decreases in creatinine levels among 131 patients who were at high risk for the tumor lysis syndrome and were treated with rasburicase.

Finally, in a multicenter study involving pediatric patients with advanced-stage Burkitt's lymphoma, in which all patients received identical treatment with chemotherapy and aggressive hydration, the tumor lysis syndrome occurred in 9% of 98 patients in France (who received rasburicase) as compared with 26% of 101 patients in the United States (who received allopurinol) (P = 0.002). Dialysis was required in only 3% of the French patients but 15% of the patients in the United States (P = 0.004). At the time of the study, rasburicase was not available in the United States.

Urinary alkalinization increases uric acid solubility but decreases calcium phosphate solubility (Fig. 1a in the Supplementary Appendix). Because it is more difficult to correct hyperphosphatemia than hyperuricemia, urinary alkalinization should be avoided in patients with the tumor lysis syndrome, especially when rasburicase is available.

Whether urine alkalinization prevents or reduces the risk of acute kidney injury in patients without access to rasburicase is unknown, but the animal model of urate nephropathy suggested no benefit. If alkalinization is used, it should be discontinued when hyperphosphatemia develops.


In patients treated with rasburicase, blood samples for the measurement of the uric acid level must be placed on ice to prevent ex vivo breakdown of uric acid by rasburicase and thus a spuriously low level. Patients
with glucose-6-phosphate dehydrogenase deficiency
should avoid rasburicase because hydrogen peroxide,
a breakdown product of uric acid, can cause
methemoglobinemia and, in severe cases, hemolytic
anemia. Rasburicase is recommended as
first-line treatment for patients who are at high
risk for clinical tumor lysis syndrome.

Because of cost considerations and pending pharmacoeconomic studies, no consensus has been reached on rasburicase use in patients who are at intermediate risk for the tumor lysis syndrome; some have advocated use of a small dose of rasburicase in such patients. Patients who are at low risk can usually
be treated with intravenous fluids with or
without allopurinol, but they should be monitored
daily for signs of the tumor lysis syndrome.

Prevention of cardiac dysrhythmias and neuromuscular irritability
Hyperkalemia remains the most dangerous component
of the tumor lysis syndrome because it can cause sudden death due to cardiac dysrhythmia.
Patients should limit potassium and phosphorus
intake during the risk period for the tumor lysis syndrome. Frequent measurement of potassium
levels (every 4 to 6 hours), continuous cardiac
monitoring, and the administration of oral sodium
polystyrene sulfonate are recommended in
patients with the tumor lysis syndrome and acute
kidney injury.


Hemodialysis and hemofiltration effectively remove potassium. Glucose plus insulin or beta-agonists can be used as temporizing measures, and calcium gluconate may be used to reduce the risk of dysrhythmia while awaiting hemodialysis.
Hypocalcemia can also lead to life-threatening
dysrhythmias and neuromuscular irritability; controlling the serum phosphorus level may prevent hypocalcemia.

Symptomatic hypocalcemia should be treated with calcium at the lowest dose required to relieve symptoms, since the administration of excessive calcium increases the calcium–phosphate product and the rate of calcium phosphate crystallization, particularly if the product is greater than 60 mg²/dl².
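The calcium–phosphate product mentioned above is simple arithmetic; the sketch below computes it from serum values in mg/dl and flags the >60 mg²/dl² level cited here. The example values are hypothetical.

```python
def calcium_phosphate_product(calcium_mg_dl: float, phosphate_mg_dl: float) -> float:
    """Serum calcium x phosphate product, in mg^2/dl^2."""
    return calcium_mg_dl * phosphate_mg_dl

ca, phos = 8.0, 9.5  # hypothetical serum values (mg/dl) in severe hyperphosphatemia
product = calcium_phosphate_product(ca, phos)
print(f"Ca x P product = {product:.0f} mg^2/dl^2 "
      f"({'above' if product > 60 else 'below'} the 60 mg^2/dl^2 level cited above)")
```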

Hypocalcemia not accompanied by signs or symptoms does not require treatment.

Despite the lack of studies that show the efficacy
of phosphate binders in patients with the tumor
lysis syndrome, this treatment is typically given.

Management of severe acute kidney injury

Despite optimal care, severe acute kidney injury develops in some patients and requires renal-replacement therapy. Although the indications for renal-replacement therapy in patients with the tumor lysis syndrome are similar to those in patients with other causes of acute kidney injury, somewhat lower thresholds are used for patients with the tumor lysis syndrome because of potentially rapid potassium release and accumulation, particularly in patients with oliguria.


In patients with the tumor lysis syndrome, hyperphosphatemia-induced symptomatic hypocalcemia may also warrant dialysis.
Phosphate removal increases as treatment
time increases, which has led some to advocate
the use of continuous renal-replacement therapies in patients with the tumor lysis syndrome, including continuous venovenous hemofiltration, continuous venovenous hemodialysis, or continuous venovenous hemodiafiltration.

These methods of dialysis use filters with a larger pore size, which allows more rapid clearance of molecules that are not efficiently removed by conventional hemodialysis.
One study that compared phosphate levels among
adults who had acute kidney injury that was treated
with either conventional hemodialysis or continuous
venovenous hemodiafiltration showed that
continuous venovenous hemodiafiltration more effectively reduced phosphate.

Much less is known about the dialytic clearance of uric acid, but in countries where rasburicase is available, hyperuricemia is seldom an indication for dialysis.
In our patient, once the tumor lysis syndrome was identified, treatment with intravenous fluids,
phosphate binders, and rasburicase prevented
the need for dialysis. Despite a potassium level
of 5.9 mmol per liter, he had no dysrhythmia or
changes on electrocardiography, but had he presented 1 day later, the tumor lysis syndrome may have proved fatal.

Monitoring

Urine output is the key factor to monitor in patients
who are at risk for the tumor lysis syndrome
and in those in whom the syndrome has developed.
In patients whose risk of clinical tumor lysis
syndrome is non-negligible, urine output and fluid
balance should be recorded and assessed frequently.
Patients at high risk should also receive intensive
nursing care with continuous cardiac monitoring
and the measurement of electrolytes, creatinine,
and uric acid every 4 to 6 hours after the start of
therapy.


Those at intermediate risk should undergo
laboratory monitoring every 8 to 12 hours,
and those at low risk should undergo such monitoring
daily. Monitoring should continue over the
entire period during which the patient is at risk
for the tumor lysis syndrome, which depends on
the therapeutic regimen.
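The monitoring intervals described above can be summarized as a simple lookup; a minimal sketch, with the risk categories and intervals taken from the text (the structure and names are ours).

```python
# Laboratory monitoring intervals (electrolytes, creatinine, and uric acid) by
# tumor lysis syndrome risk, as described above; high-risk patients also need
# continuous cardiac monitoring and intensive nursing care.
MONITORING_INTERVAL_HOURS = {
    "high": (4, 6),
    "intermediate": (8, 12),
    "low": (24, 24),  # daily
}

def monitoring_interval(risk: str) -> str:
    low, high = MONITORING_INTERVAL_HOURS[risk]
    return f"every {low} hours" if low == high else f"every {low} to {high} hours"

for risk in ("high", "intermediate", "low"):
    print(f"{risk:12s}: labs {monitoring_interval(risk)}")
```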

In a protocol for acute lymphoblastic leukemia, which featured up-front, single-agent methotrexate treatment, new-onset
tumor lysis syndrome developed in some patients
at day 6 or day 7 of remission-induction therapy
(after the initiation of combination chemotherapy
with prednisone, vincristine, and daunorubicin
on day 5 and asparaginase on day 6).

Decreasing the rate of tumor lysis with a treatment prephase
Patients at high risk for the tumor lysis syndrome
may also receive low-intensity initial therapy. Slower
lysis of the cancer cells allows renal homeostatic
mechanisms to clear metabolites before they
accumulate and cause organ damage. This strategy,
in cases of advanced B-cell non-Hodgkin’s lymphoma
or Burkitt’s leukemia, has involved treatment
with low-dose cyclophosphamide, vincristine,
and prednisone for a week before the start of intensive
chemotherapy.
nejm.org may 12, 2011


Better Evidence about Screening for Lung Cancer
nejm.org august 4, 2011

In October 2010, the National Cancer Institute (NCI) announced that patients who were randomly assigned to screening with low-dose computed tomography (CT) had fewer deaths from lung cancer than did patients randomly assigned to screening with chest radiography.
The first report of
the NCI-sponsored National Lung Screening Trial
(NLST) in a peer-reviewed medical journal appears
in this issue of the Journal.

Eligible participants were between 55 and 74 years of age and had a history of heavy smoking.
They were screened once a year for 3 years and
were then followed for 3.5 additional years with
no screening. At each round of screening, results
suggestive of lung cancer were nearly three times
as common in participants assigned to low-dose
CT as in those assigned to radiography, but only
2 to 7% of these suspicious results proved to be
lung cancer.


Invasive diagnostic procedures were few, suggesting that diagnostic CT and comparison with prior images usually sufficed to rule out lung cancer in participants with suspicious screening findings.

Diagnoses of lung cancer after the screening period had ended were more common among participants who had been assigned to screening with chest radiography than among those who had been assigned to screening with low-dose CT, suggesting that radiography missed cancers during the screening period.

Cancers discovered after a positive low-dose CT screening test were more likely to be early stage and less likely to be late stage than were those discovered after chest radiography. There were 247 deaths from lung cancer per 100,000 person-years of follow-up after screening with low-dose CT and 309 per 100,000 person-years after screening with chest radiography.
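The mortality figures just quoted translate directly into the relative reduction usually cited for the NLST; the sketch below does the arithmetic. The published trial reports a 20% relative reduction, and small differences here reflect rounding of the per-100,000 rates.

```python
# Lung-cancer deaths per 100,000 person-years of follow-up, as quoted above.
rate_low_dose_ct = 247.0
rate_chest_xray = 309.0

rate_ratio = rate_low_dose_ct / rate_chest_xray
relative_reduction = 1.0 - rate_ratio
absolute_reduction = rate_chest_xray - rate_low_dose_ct

print(f"Rate ratio: {rate_ratio:.2f}")                                            # ~0.80
print(f"Relative reduction in lung-cancer mortality: {relative_reduction:.0%}")   # ~20%
print(f"Absolute reduction: {absolute_reduction:.0f} deaths per 100,000 person-years")
```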

The conduct of the study left a little room for concern that systematic differences between the two study groups could have affected the results (internal validity). The groups had similar characteristics at baseline, and only 3% of the participants in the low-dose CT group and 4% in the radiography group were lost to follow-up. However, there were two systematic differences in adherence to the study protocol.

First, as shown in Figure 1 of the article, although adherence to each screening was 90% or greater in each group, it was 3 percentage points lower for the second and third radiography screenings than for the corresponding low-dose CT screenings. Because more participants in the radiography group missed one or two screenings, the radiography group had more time in which a lung cancer could metastasize before it was detected.

Second, participants in the low-dose CT group were much less likely than those in the radiography group to have a diagnostic workup after a positive result in the second and third round of screening (Table 3 of the article), which might have led to fewer screening-related diagnoses of early-stage lung cancer after low-dose CT. The potential effect of these two differences in study conduct seems to be too small to nullify the large effect of low-dose CT screening on lung-cancer mortality.

The applicability of the results to typical practice (external validity) is mixed. Diagnostic workup and treatment did take place in the community. However, the images were interpreted by radiologists at the screening center, who had extra training in the interpretation of low-dose CT scans and presumably a heavy low-dose CT workload.
Moreover, trial participants were younger and had a higher level of education than a random sample of smokers 55 to 74 years of age, which might have increased adherence to the study protocol.
Overdiagnosis is a concern in screening for cancer. Overdiagnosis occurs when a test detects a cancer that would otherwise have remained occult, either because it regressed or did not grow or because the patient died before it was diagnosed.

In a large, randomized trial comparing two screening tests, the proportion of patients in whom cancer ultimately develops should be the same in the two study groups.
A difference that persists suggests that one test is detecting cancers that would never grow large enough to be detected by the other test.

Overdiagnosis is a problem because the ability to predict which early-stage cancers will not progress is itself at an early stage of development, so that everyone with screen-detected cancer receives treatment that some do not need. Overdiagnosis biases case-based measures (e.g., case fatality rate) but not the population-based measures used in the NLST.

Overdiagnosis probably occurred in the NLST.

After 6 years of observation, there were 1060 lung
cancers in the low-dose CT group and 941 in the
radiography group. Presumably, some cancers in
the radiography group would have been detectable
by low-dose CT but grew too slowly to be
detected by radiography during the 6.5 years of
observation. The report of the Mayo Lung Project
provides strong evidence that radiographic screening
causes overdiagnosis of lung cancer.


At the end of the follow-up phase in the Mayo study,
more lung cancers were diagnosed in the group
screened with radiography and sputum cytologic
analysis than in the unscreened group. This gap
did not close, as would be expected if undetected
cancers in the unscreened group continued to
grow; the gap grew and then leveled off at 69
additional lung cancers in the screened group at
12 and 16 years.

The Mayo study shows that 10 to 15 additional years of follow-up will be necessary to test the hypothesis that low-dose CT in the NLST led to overdiagnosis. If the difference in the number of cancers in the two groups of the NLST persists, overdiagnosis in the low-dose CT group is the likely explanation.
The incidence of lung cancer was similar at the three low-dose CT screenings (Table 3 of the article), which implies that a negative result of low-dose CT screening did not substantially reduce the probability that the next round would detect cancer. Lung cancer was also diagnosed frequently during the 3 years of follow-up after the third low-dose CT screening. Apparently, every year, there are many lung cancers that first become detectable that year. This observation, together with the overall NLST results, suggests that
continuing to screen high-risk individuals annually
will provide a net benefit, at least until deaths
from coexisting chronic diseases limit the gains
in life expectancy from screening.

The NLST results show that three annual rounds of low-dose CT screening reduce mortality from lung cancer, and that the rate of death associated with diagnostic procedures is low.
How should policy makers (those responsible
for screening guidelines, practice measures, and
insurance coverage) respond to this important
result? According to the authors, 7 million U.S.
adults meet the entry criteria for the NLST, and
an estimated 94 million U.S. adults are current
or former smokers.

With either target population, a national screening program of annual low-dose CT would be very expensive, which is why I agree with the authors that policy makers should wait for more information before endorsing lung-cancer screening programs.
Policymakers should wait for cost-effectiveness
analyses of the NLST data, further follow-up data
to determine the amount of overdiagnosis in the
NLST, and, perhaps, identification of biologic
markers of cancers that do not progress.


Modeling should provide estimates of the effect of
longer periods of annual screening and the effect
of better adherence to screening and diagnostic
evaluation. Systematic reviews that include
other, smaller lung-cancer screening trials will
provide an overview of the entire body of evidence.
Finally, it may be possible to define subgroups
of smokers who are at higher or lower risk
for lung cancer and tailor the screening strategy
accordingly.

Individual patients at high risk for lung cancer who seek low-dose CT screening and their primary care physicians should inform themselves fully, and current smokers should also receive redoubled assistance in their attempts to quit smoking.
They should know the number of patients
needed to screen to avoid one lung-cancer death,
the limited amount of information that can be
gained from one screening test, the potential for overdiagnosis and other harms, and the reduction
in the risk of lung cancer after smoking cessation.


The NLST investigators report newly proven
benefits to balance against harms and costs,
so that physicians and patients can now have
much better information than before on which
to base their discussions about lung-cancer
screening.
The findings of the NLST regarding lung-cancer
mortality signal the beginning of the end of
one era of research on lung-cancer screening and
the start of another. The focus will shift to informing
the difficult patient-centered and policy
decisions that are yet to come.
nejm.org august 4, 2011

Cancer Cachexia and Fat–Muscle Physiology

nejm.org august 11, 2011

Cachexia affects the majority of patients with advanced cancer and is associated with a reduction
in treatment tolerance, response to therapy, quality
of life, and duration of survival.
It is a multifactorial
syndrome caused by a variable combination
of reduced food intake and abnormal
metabolism that results in negative balances of
energy and protein.


Cachexia is defined by an ongoing loss of skeletal-muscle mass and leads to progressive functional impairment.
Although appetite stimulants or nutritional support can help reverse the loss of fat, the reversal of muscle wasting is much more difficult and remains a challenge in patient care.

The loss of skeletal muscle in cachexia is the result of an imbalance between protein synthesis and degradation. Much recent work has focused on the ubiquitin–proteasome pathway, the regulation of satellite cells in skeletal muscle, and the importance of related receptors and signaling pathways that are probably influenced by tumor-induced systemic inflammation.

Similarly, the loss of adipose tissue results from an imbalance in lipogenesis and lipolysis, with enhanced lipolysis driven by neuroendocrine activation and tumor-related lipolytic factors, including proinflammatory cytokines and zinc-α2-glycoprotein.

The study of integrative physiology in obesity and diabetes has long emphasized the importance of chronic inflammation, increased adipocyte lipolysis, and increased levels of circulating free fatty acids in the adipose–muscle cross-talk that contributes to lipotoxicity and insulin resistance in muscle. Similarly, studies in exercise physiology have focused on the molecular cross-talk between adipose tissue and muscle that occurs through adipokines and myokines and on the role these molecules may play in chronic diseases.


Although cachexia in patients with cancer
is characterized by systemic inflammation, increased
lipolysis, insulin resistance, and reduced
physical activity, there has been little effort to
manipulate the integrative physiology of adipose
tissue and muscle tissue for therapeutic gain.
To this end, Das and colleagues recently reported
the results of experiments involving two
mouse models in which the metabolic end of the
cachexia–anorexia spectrum was investigated.

In these mice, during the early and intermediate phases of tumor growth and cachexia, food intake remained normal while plasma levels of proinflammatory cytokines and zinc-α2-glycoprotein rose. The investigators found that genetic ablation of adipose triglyceride lipase prevented the increase in lipolysis and the net mobilization of adipose tissue associated with tumor growth.

Unexpectedly, they also observed that skeletal-muscle mass was preserved and that activation of proteasomal-degradation and apoptotic
pathways in muscle was averted. Ablation of
hormone-sensitive lipase had similar but weaker
effects. This study opens up the possibility that
hitherto unrecognized, physiologically important
cross-talk between adipose tissue and skeletal
muscle exists in the context of cancer cachexia.


What is the translational relevance of these
findings? Given the current epidemic of obesity
in Western society in general and in patients
with cancer in particular, the inhibition of fat
loss is probably not a priority in itself. The key
problem remains low muscle mass, with up to
50% of persons with advanced cancer having
frank sarcopenia. Moreover, the shortest survival
times among patients with advanced cancer may
be among obese patients with sarcopenia.

In such patients, any muscle-preserving therapy that also increases fat mass might not be advantageous. It should also be considered that the metabolic response to cancer is heterogeneous, and a therapy that is tailored to a specific metabolic abnormality may require specific, individualized characterization of patients. Moreover, cachexia has a spectrum of phases (precachexia, cachexia, and refractory cachexia) and degrees of severity.


Das and colleagues tested the effect of ablation of lipolysis at the onset of tumor growth. Thus, their model does not address the scenario that frequently occurs in clinical practice, in which both cancer and cachexia are well established.
Finally, patients generally receive systemic antineoplastic therapy until a late stage
in their disease trajectory, and the interaction of
this treatment with the development of cachexia
(some treatments may induce muscle wasting) is unknown.


Malignancy

Model of Cachexia and Lipolysis in Tumor-Bearing Mice with Wild-Type Atgl or Atgl−/−.

In tumor-bearing mice with the wild-type gene for adipose triglyceride lipase (Atgl, also known as Pnpla2), a variety of circulating mediators, including cytokines (tumor necrosis factor α and interleukin-6) and zinc-α2-glycoprotein,
activate Atgl, which triggers lipolysis, resulting in net mobilization of white adipose tissue and an increase in plasma levels of free fatty acids. Concomitantly, cachexia — the process of protein catabolism, apoptosis, and muscle atrophy — begins and may be modulated by cross-talk between muscle and adipose tissue mediated by free fatty acids or by various adipokines or myokines. In tumor-bearing mice in which the Atgl −/− gene has been ablated, the same pattern of mediator release fails to activate lipolysis, plasma levels of free fatty acids remain normal, and both white adipose tissue mass and skeletal-muscle mass are maintained. The mechanism through which skeletal-muscle mass is maintained in the presence of the systemic mediators is unknown but may involve muscle-adipose cross-talk
through free fatty acids, myokines, or adipokines. Alternatively, the maintenance of skeletal-muscle mass may be a direct consequence of autonomous lipolysis in defective tissue.

Taken together, these issues point to the importance of understanding the precise mechanism underlying the findings of Das and colleagues.
Traditionally, controlling the advance of cancer
has been viewed as the best way to contain
cachexia. However, symptom management alone
can improve survival in patients with advanced
cancer, and a multifaceted approach to the management of cachexia has already proved to be
partially effective.


The growing understanding of the mechanisms underpinning cachexia has prompted an increasing number of studies, now in phase 1 or phase 2, that use highly specific, potent therapies targeted at either upstream mediators or downstream end-organ hypoanabolism and hypercatabolism. The study by Das and colleagues suggests that achieving a better understanding of the integrative physiology of this
complex syndrome may yield yet further novel
therapeutic approaches.
nejm.org august 11, 2011

Management of BPH and Prostate Cancer Reviewed

MedscapeCME Clinical Briefs © 2010

November 23, 2010 — Watchful waiting or active surveillance are options in selected patients with benign prostatic hyperplasia (BPH) and prostate cancer, according to a review reported in the December issue of the International Journal of Clinical Practice.
"...BPH and prostate cancer (CaP) are major sources of morbidity in older men," write J. Sausville and M. Naslund, from the University of Maryland School of Medicine in Baltimore. "Management of these disorders has evolved considerably in recent years. This article provides a focused overview of BPH and CaP management aimed at primary care physicians."

BPH may give rise to troublesome lower urinary tract symptoms and/or acute urinary retention. Acute urinary retention may be associated with an increased risk for recurrent urinary tract infections; bladder calculi; and, occasionally, renal insufficiency.
BPH may be managed with medications, minimally invasive therapies, and prostate surgery.

First-line treatment in men presenting with lower urinary tract symptoms from BPH is typically pharmacotherapy with
alpha-blockers or 5-alpha-reductase inhibitors. Alpha-blockers generally work within a few days by relaxing smooth muscle, whereas 5-alpha-reductase inhibitors may take 6 to 12 months to relieve urinary symptoms. The latter drug class blocks the conversion of testosterone to dihydrotestosterone, thereby shrinking hyperplastic prostate tissue.

Malignant disease in men older than 50 years with lower urinary tract symptoms can largely be excluded by normal results on digital rectal examination, prostate-specific antigen (PSA) blood testing, and urinalysis.
However, elevated PSA levels and/or a nodular prostate may be red flags for prostate cancer, and microscopic hematuria with urinary symptoms may suggest bladder cancer or prostate cancer.

Prostate cancer is a highly prevalent condition, and outcomes may be better with early detection. Although 2 large clinical trials examining screening for prostate cancer have recently been published,
mass screening is still considered controversial.


"The ageing of the population of the developed world means that primary care physicians will see an increasing number of men with BPH and CaP," the review authors write. "Close collaboration between primary care physicians and urologists offers the key to successful management of these disorders."
On the basis of a review of current literature regarding BPH and prostate cancer, the study authors note that despite increasing use of effective medical treatments, surgical intervention is still a valid option for many men. New technologies have emerged for surgical management.

Open radical retropubic prostatectomy is still the oncologic reference standard, but other well-established surgical procedures include transurethral resection of the prostate as well as use of minimally invasive techniques. A new surgical technique for prostate cancer management, now in more widespread use, is robot-assisted prostatectomy.

Other options for treatment of prostate cancer include radiation therapy, brachytherapy, high-intensity focused ultrasound, and cryotherapy. The review authors also note that not all men with prostate cancer necessarily need to be treated and that watchful waiting or active surveillance may be appropriate in some patients.

"Various protocols exist for identifying low-risk CaP, and some such patients may be offered active surveillance," the review authors write. "To mitigate the danger of under-grading, these patients typically undergo repeated prostate biopsies at predetermined intervals, and PSA levels and DRE [digital rectal examination] findings are monitored. If progression of disease (increased PSA, PSAV [PSA velocity], or discovery of higher grade or bulkier cancer on biopsy) occurs, definitive therapy is offered."
Int J Clin Pract. 2010;64:1740-1745.

Clinical Context

BPH and prostate cancer are common conditions among older men. More than half of men older than 50 years have BPH, which usually presents clinically with symptoms of lower urinary tract obstruction. Prostate cancer can have a similar presentation, and it is the most common nonskin cancer diagnosed among men in the United States.
The current review highlights the diagnosis and management of BPH and prostate cancer, with a focus on emerging therapeutic options.

Study Highlights

Malignant disease of the prostate can usually be differentiated from BPH with a digital rectal examination, PSA level testing, and a urinalysis. A PSA value of 1.5 ng/mL correlates with a prostate volume of 30 g, which is regarded as enlarged.
Examination of a postvoid residual volume is not necessary among men with suspected BPH, but an elevated postvoid residual volume suggests a worse prognosis of BPH.

The primary management of most cases of BPH is medical therapy. Alpha-blockers can reduce symptoms within days, whereas 5-alpha-reductase inhibitors can take 6 to 12 months to relieve urinary symptoms.
Of these 2 classes of medications, only 5-alpha-reductase inhibitors reduce the risk for urinary retention and the need for prostate surgery.


A recent trial suggested a synergistic effect when alpha-blockers were used with 5-alpha-reductase inhibitors in the treatment of BPH.
Laser technology is being used in the surgical treatment of BPH, particularly among patients at risk for hemorrhage. However, the long-term effectiveness of laser treatment modalities has yet to be established.

All prostate ablative techniques carry risks of retrograde ejaculation in 50% to 90% of patients and urinary incontinence in 1% of patients.
However, erectile function appears to be less significantly affected by current surgeries.

PSA is commonly used to diagnose prostate cancer, but assigning cutoff values for abnormal PSA is difficult. In 1 large study, 15% of men with a PSA level of less than 4.0 ng/mL had prostate cancer.
PSA levels vary by age and race, and the clinician should be aware of these variations.
A reduction in levels of unbound, or free, PSA is associated with a higher risk for prostate cancer.
5-alpha-reductase inhibitors can reduce PSA levels by approximately 50% after 12 months of treatment.

Recent large trials of screening men for prostate cancer have yielded mixed results regarding the efficacy of this intervention in reducing the risk for prostate cancer-specific mortality. The US Preventive Services Task Force finds insufficient evidence to recommend for or against screening for prostate cancer among men younger than 75 years, but it recommends against screening among men 75 years or older.

Active surveillance, which includes close follow-up of PSA values with subsequent biopsies, is considered a first-line option in the management of prostate cancer in the United Kingdom.

Localized prostate cancer can be treated with surgery or radiation therapy. Radical retropubic prostatectomy has excellent oncologic outcomes and results in urinary incontinence in less than 10% of patients. Approximately half of men have preserved erectile function after this procedure.
Radiation therapy is more widely used now in the treatment of prostate cancer, but it can promote adverse events such as hematochezia, hematuria, and irritative lower urinary tract symptoms.

High-intensity focused ultrasound has received attention as a less invasive treatment of prostate cancer, although 1 series demonstrated a biochemical complete response rate of 92% for this procedure. Cryotherapy is another emerging treatment, although it may promote higher rates of erectile dysfunction vs surgery or brachytherapy.
Luteinizing hormone-releasing hormone analogues are the principal treatment of metastatic prostate cancer.


Clinical Implications
Evaluation of older men with lower urinary tract symptoms should include a digital rectal examination, urinalysis, and PSA level testing. A postvoid residual volume is not usually necessary before the initiation of treatment of BPH.
Localized prostate cancer can be treated with surgery or radiation therapy, both of which are associated with particular adverse events. High-intensity focused ultrasound may not yield adequate rates of biochemical complete response, and cryotherapy may promote higher rates of erectile dysfunction vs surgery or brachytherapy.
MedscapeCME Clinical Briefs © 2010

Early PSA Predicts Prostate Cancer Risk -- But Then What?

Hello, I'm Dr. Gerald Chodak for Medscape. In October 2010, an article was published in Cancer Online [1] that looked at more than 20,000 men from Sweden who had their blood drawn and stored when they were between the ages of 33 and 50. Over time, through 2006, more than 1400 of the men were diagnosed with prostate cancer. The investigators went back and tested the blood samples of these men to see what their PSA [prostate-specific antigen] levels were up to 30 years before the diagnosis of prostate cancer was made.
Medscape Urology © 2010

They found that men whose PSA was > 0.63 ng/mL had a significantly higher chance of developing cancer, or advanced cancer, many years in the future. This raises a question: What would you do with this information? The investigators suggest that it could help stratify men according to their risk.
If they have a PSA level less than that cutoff, then they could be followed less often. However, if their PSA was above that value, then more careful follow-up would be needed.

Of course, the problem with the article is that it doesn't address the implications of testing in terms of the long-term outcomes. Although these men were diagnosed with cancer, it's unclear whether this testing process would reduce their chances of dying from the disease. The study does not address the implications for predicting which men will die of prostate cancer.

So, it is another piece of information that might be used to help separate men into low- and high-risk groups, but it doesn't address the issues about screening and about changing the natural history of the disease.
It simply says that if your PSA is higher than 0.63 ng/mL, then you have a greater chance of being diagnosed with prostate cancer some time in the future.
Does that mean everyone should get a baseline test, and use that to make further decisions? I'm not sure that we can answer that at the present time.

That would need a different type of study. However, we come back to the latest meta-analysis that has raised questions about the overall impact of testing and treating men with this disease. The bottom line is that this is interesting information and warrants further evaluation to see what would be the best approach, but it doesn't find all men who have aggressive cancer, and it could turn out that men who have life-threatening disease might not have been detected 30 years earlier by using this PSA cutoff. So, I'm not sure that it should be adopted at this time, but clearly it warrants further evaluation.


25-Year Results in Early Breast Cancer Surgery

December 11, 2010 (San Antonio, Texas) — A cohort of women who underwent either mastectomy or breast conservation therapy (BCT) an average of 25 years ago now have "equivalent" overall survival, according to the authors of a new study from a long-term National Cancer Institute trial.

The study of 237 women with stage 1 or 2 breast cancer was presented as a poster here at the 33rd Annual San Antonio Breast Cancer Symposium.
There was no statistically significant difference in overall survival between the 2 groups, with 45.7% of patients alive in the mastectomy group and 38.0% alive in the BCT group (P = .43), according to the authors, led by N.L. Simonen, MD, from the National Cancer Institute in Bethesda, Maryland.
However, a breast cancer surgeon from the Mayo Clinic said that the difference in survival — despite its lack of statistical importance — was concerning.

"You have to wonder whether this difference would become significant with a larger patient group," said Judy Boughey, MD, who is an associate professor of surgery at the Mayo Clinic's Rochester, Minnesota, campus.
Dr. Boughey found some comfort in other findings that compare the 2 surgical approaches. "The lack of a statistically significant difference in overall survival is in keeping with multiple previous studies," Dr. Boughey told Medscape Medical News. She attended the meeting and was asked to comment on the poster.

The findings on local recurrence did indicate an important difference between mastectomy and BCT.
Disease-free survival was significantly worse in patients randomly assigned to receive BCT compared with mastectomy (57% vs 82%; P < .001).
The additional treatment failures in the BCT group were primarily isolated ipsilateral breast tumor recurrences, the authors point out. They also note that these recurrences were salvaged by mastectomy. In all, 22.3% of BCT patients experienced such a recurrence. However, "those patients had no significant decrease in overall survival," say the authors.

Talk About Local Recurrence Rate, Especially With Young Women

What is missing from this study is the rate of local recurrence for the mastectomy patients. "We know that it is not zero," said Dr. Boughey, "because some breast tissue remains after mastectomy."

The new study is a reminder of the importance and challenges of counseling women with early breast cancer.
"A lot of women struggle with the choice of surgery," said Dr. Boughey. She reminds women that the overall survival is roughly the same, but the local recurrence rate is significantly higher if they keep the breast. "The risk of local failure exists and needs to be discussed."

For most women, Dr. Boughey recites a set of figures in her local recurrence talk. "I say to patients, if you get a lumpectomy, the risk for local recurrence at 10 years is 8% to 10%, and if you get a mastectomy, the risk is 2% to 4%."
However, when counseling young women — that is, women younger than 40 years — the discussion about local recurrence is a bit different, and is especially important, Dr. Boughey said.


Dr. Boughey presented a poster at the symposium that indicated the risks for recurrence by decade of life. In the new retrospective study, 6.9% of 3075 patients who underwent breast-conserving surgery at the Mayo Clinic had a local recurrence at a median of 3.4 years.

The frequency of local recurrence by age group was:

Younger than 40 years: 11.9%
Aged 40 to 49 years: 5.9%
Aged 50 to 59 years: 5.9%
Aged 60 to 69 years: 7.6%
Aged 70 years or older: 6.4%

The fact that the youngest women had the highest rate of recurrence is important in part because they tend to have more aggressive tumors, said Dr. Boughey.
Young women need to know that their risk for local recurrence is higher than that of other age groups, she said.
The authors have disclosed no relevant financial relationships.
33rd Annual San Antonio Breast Cancer Symposium: Abstracts P4-10-01 and P4-10-02. Presented December 11, 2010.

Community and Expert Perspectives: 2010 Update on Chronic Myeloid Leukemia
MedscapeCME Oncology © 2010 MedscapeCME

The tyrosine kinase inhibitor (TKI) imatinib, which targets the enzyme produced by the BCR-ABL fusion gene and is associated with an overall survival rate of about 89% at 5 years, revolutionized the treatment of chronic myeloid leukemia (CML) during the past decade.[1,2] Since then, newer-generation TKIs, including nilotinib and dasatinib, have been developed for the treatment of CML.[3,4]

Major studies were recently presented at the 2010 meetings of the American Society of Clinical Oncology and European Hematology Association regarding the emergence of these agents, which are both approved for imatinib-intolerant or -resistant disease, as first-line therapy.

Indeed, the results of the 2 phase 3 studies of nilotinib and dasatinib were published in June 2010,[8,9] and nilotinib subsequently received approval from the US Food and Drug Administration for use in the first-line treatment of adult patients with newly diagnosed Philadelphia chromosome-positive CML in chronic phase. At the same time, other agents are in early- and late-stage clinical development, including some that may be effective against the acquired T315I mutation in the ABL kinase domain.


In the wake of these exciting findings, Emma Hitt, PhD, spoke on behalf of MedscapeCME with Richard A. Larson, MD, professor of medicine at the University of Chicago, to discuss the evolving landscape of CML treatment. In addition, Dr. Leon Dragon, Medical Director of the Kellogg Cancer Center, NorthShore University HealthSystem, Highland Park, Illinois, weighed in (available in the accompanying downloadable PDF) on the impact of these findings for the practicing community oncologist.

Medscape: What are some of the issues involved in diagnosing CML?

Dr. Larson: Confirming the diagnosis is the critical first step in managing patients with CML. If the BCR-ABL fusion gene is present, then the behavior of the disease and its response to treatment is more predictable. The stage of the disease is also critical, in terms of both the initial response and the long-term response: patients in chronic phase disease have better outcomes with all of the TKIs than patients with accelerated phase or blast crisis disease.

The diagnosis of CML typically requires a bone marrow exam to confirm the stage of the disease, followed by cytogenetic and molecular analysis to confirm not only the presence of the 9;22 translocation that gives rise to the Philadelphia chromosome but also to assess for other clonal cytogenetic abnormalities that may be present at baseline. These are the prerequisite steps to confirm the diagnosis. Because of the presence of the Bcr-Abl transcripts, quantitative PCR can be used later to monitor levels of residual disease over the course of treatment.

About one third of patients with CML will have no symptoms at the time of initial diagnosis. Others may have symptoms related to splenomegaly or, sometimes, anemia. Some patients may be hyperuricemic because of the hypermetabolic activity of the disease. Consequently, initiation of supportive care, which may include treatment with allopurinol and an attempt to normalize renal function, is important before the treatment of CML-related hyperleukocytosis begins.

Medscape: Please discuss the standard treatment for a patient with CML.

Dr. Larson: The standard initial treatment for the last 10 years has been to use imatinib at a dose of 400 mg/day.[10] Data from recent randomized trials indicate that the response to imatinib at 800 mg/day (ie, 400 mg twice a day) may produce a faster initial response,[11] but over the longer term of 9-12 months, the higher dose does not appear to be more effective than the 400 mg/day dose. This may be, in part, because a dose of 800 mg/day can produce more side effects, and many patients are not able to tolerate the double dose of imatinib as initial therapy.

Clinicians treating patients with CML need to be familiar with management guidelines. The first were published by the European LeukemiaNet in Blood in 2006 and then revised more recently and published in the Journal of Clinical Oncology in 2009. In addition, the European Society for Medical Oncology's Clinical Practice Guidelines for the diagnosis, treatment, and follow-up of CML were issued in 2010.[14]

Both of these guidelines establish a series of mileposts, based on the experience to date with standard doses of imatinib, that should be reached by patients to indicate an optimal response. These include an early hematologic response, followed by partial and complete cytogenetic responses, and eventually major molecular response. These terms have now been clearly defined, and it is suggested that if patients are not achieving those mileposts on a standard dose of imatinib, then treatment should be switched. A switch should occur before patients show evidence of treatment failure with disease progression or lack of response.

The likely long-term outcome for an individual patient with chronic phase CML will depend on whether he or she has achieved certain levels of response by 3, 6, 12, and 18 months after starting on imatinib. Previously, the only options available for patients not responding to imatinib included increasing the dose of imatinib from 400 mg/day to 600 or 800 mg/day or, in some cases, considering an allogeneic hematopoietic cell transplant.

More options now exist, following the approval over the past couple of years of the second-generation TKIs (ie, dasatinib and nilotinib) for patients who have either not tolerated imatinib or who have failed to achieve or maintain a response to imatinib. These second-generation TKIs are entering the treatment paradigm earlier, and additional TKIs are in development.

Medscape: Some of the major news out of the American Society of Clinical Oncology (ASCO) 2010 meeting centered on nilotinib and dasatinib. Can you describe those trials and their implications?
Dr. Larson: There were 2 phase 3 trials conducted in first-line chronic phase CML -- ENESTnd, which compared nilotinib with imatinib, and DASISION, which compared dasatinib with imatinib -- that were reported on at ASCO[5,6] and then published in the New England Journal of Medicine in June 2010.[8,9]


In the ENESTnd trial,[8] Saglio and colleagues evaluated the efficacy and safety of nilotinib vs imatinib in 846 patients randomized to receive nilotinib, either at 300 mg twice daily or 400 mg twice daily, or imatinib at a dose of 400 mg once daily.

The dose of imatinib could be escalated to 800 mg/day according to protocol guidelines for treatment response. All of the patients were older than 18 years, and they had good performance status and adequate organ function. Importantly, a baseline electrocardiogram (ECG) was required, and patients were eligible only if their corrected QT interval on ECG was less than 450 milliseconds. The reason for the ECG is that these TKIs appear to have the potential to prolong the QT interval. Consequently, patients had to have a QTc interval that was well within the safe limit in order to enroll in the trial.

This study was a rigorous assessment of the efficacy and tolerability of these drugs. All of the major analyses were performed according to the principle of intention-to-treat. A major molecular response, the primary endpoint, was defined as a BCR-ABL transcript level of less than 0.1% on the International Scale.

Secondary endpoints included the complete cytogenetic response rate. This assessment required at least 20 metaphase cells to be evaluable to confirm a complete cytogenetic response; unavailable or insufficient samples were considered a lack of response. Because this was an international trial, and fluorescence in situ hybridization (FISH) using probes for BCR-ABL was not always available, FISH analyses were not allowed. Thus, only cytogenetic analyses on metaphase cells from a bone marrow aspirate were used to assess the cytogenetic response rate.

The major molecular response at 12 months for nilotinib was 44% for the 300 mg twice daily dose and 43% for the 400 mg twice daily dose vs 22% for imatinib (P < .001 for both comparisons). Likewise, complete cytogenetic response at 12 months was also higher with nilotinib: 80% for the 300 mg twice daily dose and 78% for the 400 mg twice daily dose compared with 65% for imatinib (P < .001 for both comparisons).

Nilotinib also significantly reduced the incidence of progression to accelerated phase or blast crisis compared with imatinib. Nilotinib was associated with more headaches and dermatologic events compared with imatinib, whereas patients receiving imatinib had more gastrointestinal toxicities and fluid retention. Sequential monitoring of ECGs and echocardiograms showed no clinically significant cardiac side effects.

Of note, a companion pharmacokinetics study of the ENESTnd cohort[15] found relatively little advantage, in terms of plasma drug levels, between 300 mg and 400 mg of nilotinib each given twice daily.
The patients on this trial are continuing on treatment and will be followed for at least 5 years.[7] Some of those patients have now been on the study beyond 24 months, and the cumulative incidence of a major molecular response is currently estimated to be 66% on the nilotinib 300 mg twice daily arm and 62% on the nilotinib 400 mg twice daily arm, compared with 40% on the imatinib arm.

As yet, there is no evidence that the response rate on the imatinib arm is catching up to the response rate on the nilotinib arms, and a highly significant difference in favor of the nilotinib arms remains for this endpoint.

In the phase 3 DASISION trial with dasatinib, Kantarjian and colleagues compared dasatinib at a dose of 100 mg once daily vs imatinib at a dose of 400 mg once daily in 519 patients with previously untreated chronic-phase CML. Patients were enrolled at 108 centers in 26 countries. The eligibility criteria for this trial were very similar to those of the nilotinib ENESTnd study. However, because patients were stratified by the Hasford risk score, whereas in the ENESTnd trial they were stratified by the Sokal score, it is difficult to compare data from these trials directly.

Another difference between the trials was that the primary endpoint in the dasatinib trial was confirmed complete cytogenetic response (CCyR) rather than a major molecular response (which was a secondary endpoint). Cytogenetic response is clearly a valid surrogate for clinical benefit in CML, and this trial required that the complete cytogenetic response be confirmed by 2 consecutive assessments taken more than a month apart.


After a minimum 12 months of follow-up, the rate of CCyR was 77% with dasatinib, compared with 66% for imatinib (P = .007). The rate of CCyR observed on at least one assessment was 83% vs 72% with each agent, respectively (P = .001). Major molecular response was also significantly higher with dasatinib than with imatinib (46% vs 28%, P < .0001), and was achieved more rapidly with dasatinib, results that are similar to those observed in the trial with nilotinib. There were few grade 3 or 4 toxicities on either arm of the trial; gastrointestinal side effects, which can be troublesome with imatinib, occurred less frequently on the dasatinib arm, as did myalgias and other musculoskeletal pains.

The investigators also assessed changes from baseline in the QTc interval on the 2 arms of the trial. There appeared to be no clinically important differences between the dasatinib and the imatinib arms. Although there were a few cases of QTc prolongation, they occurred with the same frequency on both arms.
Both of these studies, therefore, suggest that nilotinib and dasatinib represent appropriate alternatives to imatinib in the front-line treatment of chronic-phase CML.

Medscape: Now that the options for first-line therapy have expanded, how do you make a selection about which agent to use?
Dr. Larson: All 3 drugs, imatinib, nilotinib, and dasatinib, are highly effective against newly diagnosed chronic phase CML. Indeed, the second-generation drugs are more potent and seem to induce more rapid responses and, perhaps most importantly, fewer early progressions to accelerated or blast phase disease.

On the other hand, imatinib has more than a 10-year track record. So what do you do?

One place to start might be by assessing the risk score for individual patients with CML. There are 2 systems for stratifying chronic phase CML patients. One is the Sokal risk score, and the other is the Hasford or Euro score. These scores take into account a patient's age, spleen size, the percentage of blasts in the blood, and platelet count at the time of diagnosis and sort them into low-, intermediate-, or high-risk groups.

Although these risk stratification schemes were developed prior to the emergence of TKIs, they both seem to predict the response to both first- and second-generation TKIs. So, one approach might be, based on the risk assessment, to start a patient with low-risk CML with imatinib, whereas those with higher-risk chronic phase disease could be started on a more potent second-generation TKI. I should caution, however, that this strategy has not been validated in clinical trials.

It is more likely at this time that physicians will make decisions based on the side-effect profiles of the agents. The second-generation TKIs appear to have different side-effect profiles than imatinib in both of the front-line randomized trials. For example, patients beginning on imatinib often have problems with fluid retention, peripheral edema, or periorbital edema, and also sometimes rashes and myalgias, gastritis, and diarrhea.

Although these side effects may not be severe, they can be troublesome and chronic, lasting for weeks or months. Although the incidence of rashes and myalgias appears to be lower with the second-generation drugs in the front-line setting, nilotinib and dasatinib have their own set of side effects. Nilotinib has been associated with hyperbilirubinemia and an increase in liver transaminases and lipase and amylase elevations, although clinical pancreatitis is rare.

Hyperglycemia also occurs. Most of the laboratory side effects are mild and can usually be managed without discontinuing treatment. Patients on dasatinib also show fluid retention, particularly in terms of pleural or pericardial effusions, but unlike the situation in more advanced disease, most cases appear to be grade 1 or 2. Overall, the side effects that were seen when the second-generation drugs were used as second-line therapy have also been observed in the front-line trials, but in each case they appear to be less severe.

For newly diagnosed patients, one of the decision points has to be the long-term safety of these drugs, given that the track record for the front-line use of imatinib now extends beyond 10 years. The follow-up is clearly shorter for the second-generation drugs, so it will be important that both of the recently reported randomized phase 3 trials continue to report updated data at regular intervals regarding tolerability and any late complications with these agents.


An additional concern about using the more potent drugs in the front-line setting is that there is no backup available if these drugs fail, whereas if imatinib were used initially, then for those patients who either cannot tolerate it or who develop resistant disease, nilotinib or dasatinib could be used.

Medscape: Another second-generation TKI, bosutinib, has generated interest recently. Can you discuss it?
Dr. Larson: Data were presented at the recent ASCO and EHA meetings on another second-generation TKI called bosutinib, which is a selective BCR-ABL kinase and SRC kinase inhibitor.[16] This agent lacks some of the collateral side effects observed with the other CML TKIs, in that it does not inhibit platelet-derived growth factor receptor (PDGFR) kinase or KIT kinase.

Data from a randomized trial of bosutinib vs imatinib in newly diagnosed chronic phase CML patients have not yet been reported but may be available at a later ASH meeting. Several phase 1 and phase 2 studies with bosutinib reported at ASCO suggest that the drug may be potent for patients with CML with imatinib resistance or imatinib intolerance in the chronic, accelerated, and blast phases of CML. One troublesome side effect with bosutinib appears to be gastrointestinal toxicity, ie, diarrhea, in the majority of patients. So, it will be interesting to see if that drug also proves beneficial in the front-line setting.

Medscape: How do you manage patients who do not respond to TKIs, such as those with the T315I mutation?
Dr. Larson: The one acquired ABL kinase mutation that does not respond well to any of the drugs we have discussed is the T315I mutation. However, several drugs in development, including AP24534 (ponatinib) and omacetaxine, once called homoharringtonine, appear to be effective against CML with that mutation.

The development of those drugs will be of interest. There are also other agents in even earlier phases of development. Because currently available treatment options for patients with CML with a T315I mutation are still quite limited, these patients should be evaluated for an allogeneic hematopoietic cell transplant, ideally before their disease progresses to an advanced stage.

Medscape: What is the take-home message for practicing clinicians?

Dr. Larson: I think the most important part of treating patients with CML is the need for continuous monitoring of response. Clearly, with these very potent medicines, patients need to be monitored to ensure that they achieve the important clinical milestones on schedule.

If patients' responses are deviating from the optimal scenario, there are good alternative therapies that can be used to provide them with the best possible long-term outcome. Similarly, patients who are having unpleasant side effects or toxicities can now switch therapy in order to manage those side effects.

Finally, there is the important point of adherence to treatment. That is, these drugs only work if patients take them, and it is clear that if patients are not rigorous in adhering to their regimen, that affects the drug's ability to achieve an antileukemic response and therefore the patient's long-term outcome. We know from experience with antihypertensive drugs and other medicines that long-term adherence to oral prescription medicines is problematic. Yet the consequences of not taking CML drugs include a rapid development of resistance and a worse long-term outcome.

Mortality results from the Göteborg randomised population-based prostate-cancer screening trial
The Lancet Oncology, Early Online Publication, 1 July 2010


Summary
Background
Prostate cancer is one of the leading causes of death from malignant disease among men in the developed world. One strategy to decrease the risk of death from this disease is screening with prostate-specific antigen (PSA); however, the extent of benefit and harm with such screening is under continuous debate.

Methods

In December, 1994, 20 000 men born between 1930 and 1944, randomly sampled from the population register, were randomised by computer in a 1:1 ratio to either a screening group invited for PSA testing every 2 years (n=10 000) or to a control group not invited (n=10 000). Men in the screening group were invited up to the upper age limit (median 69, range 67—71 years) and only men with raised PSA concentrations were offered additional tests such as digital rectal examination and prostate biopsies.

The primary endpoint was prostate-cancer specific mortality, analysed according to the intention-to-screen principle. The study is ongoing, with men who have not reached the upper age limit invited for PSA testing. This is the first planned report on cumulative prostate-cancer incidence and mortality calculated up to Dec 31, 2008. This study is registered as an International Standard Randomised Controlled Trial, ISRCTN54449243.

Findings

In each group, 48 men were excluded from the analysis because of death or emigration before the randomisation date, or prevalent prostate cancer. In men randomised to screening, 7578 (76%) of 9952 attended at least once. During a median follow-up of 14 years, 1138 men in the screening group and 718 in the control group were diagnosed with prostate cancer, resulting in a cumulative prostate-cancer incidence of 12·7% in the screening group and 8·2% in the control group (hazard ratio 1·64; 95% CI 1·50—1·80; p<0·0001).

The absolute cumulative risk reduction of death from prostate cancer at 14 years was 0·40% (95% CI 0·17—0·64), from 0·90% in the control group to 0·50% in the screening group. The rate ratio for death from prostate cancer was 0·56 (95% CI 0·39—0·82; p=0·002) in the screening compared with the control group. The rate ratio of death from prostate cancer for attendees compared with the control group was 0·44 (95% CI 0·28—0·68; p=0·0002). Overall, 293 (95% CI 177—799) men needed to be invited for screening and 12 to be diagnosed to prevent one prostate cancer death.
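A minimal back-of-the-envelope check of these figures, assuming the usual definitions (number needed to invite ≈ 1/absolute risk reduction; additional diagnoses per death prevented ≈ excess cumulative incidence divided by the absolute mortality reduction); the published values (293 and 12) are calculated from unrounded risks, so the rounded estimates below only approximate them:

\[
\mathrm{NNI} \approx \frac{1}{\mathrm{ARR}} = \frac{1}{0.0090 - 0.0050} = \frac{1}{0.0040} = 250
\]
\[
\text{additional diagnoses per death prevented} \approx \frac{0.127 - 0.082}{0.0040} \approx 11
\]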

Interpretation

This study shows that prostate cancer mortality was reduced almost by half over 14 years. However, the risk of over-diagnosis is substantial and the number needed to treat is at least as high as in breast-cancer screening programmes. The benefit of prostate-cancer screening compares favourably to other cancer screening programs.

Magnetic resonance mammography

Routine use in newly diagnosed breast cancer patients is unsupported
Magnetic resonance imaging enables high definition scanning of tissue without the use of ionising radiation. In the past decade it has become widely used in breast imaging and is a sensitive method of visualising the breast parenchyma and highlighting areas of pathology.
BMJ 2010;341


Magnetic resonance mammography is now the optimum imaging modality when combined with mammography and ultrasound for screening women at high risk as a result of genetic abnormalities that predispose them to breast cancer.

The technique can detect occult carcinoma not seen with conventional imaging; it is a useful imaging tool for patients who present with metastatic axillary lymphadenopathy and an occult primary tumour in the breast; and it is useful when assessing response to neoadjuvant chemotherapy. However, its routine use in the management of patients with early stage breast cancer may be unwarranted—we have no evidence to support a clear benefit in this setting.



The Comparative Effectiveness of Magnetic Resonance Imaging in Breast Cancer (COMICE) trial was a multicenter randomised controlled trial conducted in the United Kingdom to assess the impact of magnetic resonance mammography in patients with breast cancer who were thought to be amenable to breast conserving treatment after standard triple assessment. Outcomes were the incidence of reoperation and mastectomy.2

The study found no difference in reoperation rates with or without magnetic resonance mammography (19% (153/816) v 19% (156/807); odds ratio 0·96, 95% confidence interval 0·75 to 1·24), although the rate of mastectomy was higher in the magnetic resonance mammography group (7% (58/816) v 1% (10/807)).2 This study is currently the only randomised controlled trial that has examined the role of preoperative magnetic resonance mammography in the management of early stage breast cancer.

Results from the Mayo Clinic mirror those seen in the COMICE trial; from 1997 to 2003, rates of mastectomy steadily declined from 45% to a low of 31%, after which this trend reversed. By 2006 the rate had increased to 43%, and this increase directly correlated with an increase in the use of preoperative magnetic resonance mammography.

This Mayo Clinic study examined patients treated at that centre and used a multiple logistic regression model to assess the effect of magnetic resonance imaging on surgery type, while adjusting for potential confounding variables. Magnetic resonance mammography identifies occult disease in the breast that may not be visible on other imaging modalities, and this may lead to inappropriate treatment decisions.

Invasive lobular carcinoma classically infiltrates the breast in a diffuse manner and may have a multicentric or a multifocal growth pattern. For this reason lobular carcinoma is commonly regarded as a good indication to perform magnetic resonance mammography. One single centre audit series found that 46% of patients with lobular breast cancer had their surgical management altered as a result of undergoing magnetic resonance mammography, although the study did not examine oncological outcome.

Lobular carcinoma may show subtle clinical and mammographic changes, and in this setting magnetic resonance mammography is probably useful when planning treatment, although high quality evidence showing that the routine use of this technique improves patient outcomes is lacking.


Advocates of magnetic resonance mammography suggest that identifying otherwise occult disease in the breast may improve oncological outcome, an early marker of which is disease recurrence in the treated breast. Combined analysis of patients with early stage disease from the National Surgical Adjuvant Breast and Bowel Project (NSABP), before the use of magnetic resonance mammography, has shown low rates of recurrence—5% at 10 years in patients receiving optimum multimodal treatment. With such low rates, recurrence in the treated breast is unlikely to be improved greatly by the use of this new technique.

Furthermore, recent data suggest that the presence of specific diseases with a poor prognosis (HER2 positive disease or triple negative disease) has more effect on local control than occult disease burden; specific molecular features of the breast cancer cells are more important than the size of the tumour for the risk of recurrence. Patients with HER2 positive disease or triple negative disease (negative for the oestrogen receptor, progesterone receptor, and HER2) have dramatically higher rates of local recurrence after surgery for breast cancer.

Radiotherapy reduces local recurrence by 66%; this effect is due to the clearance of occult low volume disease—tiny residual areas of invasive or in situ disease that without radiotherapy may evolve into recurrence. There is no evidence that magnetic resonance mammography will improve local control after breast conserving treatment.

Magnetic resonance mammography can also detect occult disease in the contralateral breast. Undiagnosed cancer is found in the contralateral breast in 3.1% of patients when magnetic resonance mammography is performed within one year of diagnosis.8 However, rates of contralateral disease are the same (6%) at eight years regardless of whether or not magnetic resonance mammography was performed at diagnosis.9

Occult contralateral disease detected by magnetic resonance mammography at the time of diagnosis may therefore be irrelevant. Adjuvant treatment, tamoxifen, and aromatase inhibitors all improve local and systemic control and also reduce contralateral disease.

Again, the notion that detecting occult disease with magnetic resonance mammography would benefit patients was not borne out after longer patient follow-up.
Magnetic resonance mammography can detect occult disease in the breast and surpasses conventional imaging in this aim. Currently, however, there is no compelling evidence that this technique should be used routinely in patients with newly diagnosed breast cancer.

Head and neck cancer—Part 1: Epidemiology, presentation, and prevention

BMJ 2010;341

Summary points

The incidence of head and neck cancer is relatively low in developed countries and highest in South East Asia
The main risk factors are smoking and heavy alcohol consumption
Incidence of human papillomavirus related oropharyngeal carcinoma is rising rapidly in developed countries and is easily missed; it has a different presentation and better prognosis than other head and neck cancers
Patients with head and neck cancer often present with hoarseness, throat pain, tongue ulcers, or a painless neck lump; symptoms lasting longer than three weeks should prompt urgent referral
No strong evidence supports visual examination or other screening methods in the general population

Head and neck cancers include cancers of the upper aerodigestive tract (including the oral cavity, nasopharynx, oropharynx, hypopharynx, and larynx), the paranasal sinuses, and the salivary glands. Cancers at different sites have different courses and variable histopathological types, although squamous cell carcinoma is by far the most common.


The anatomical sites affected are important for functions such as speech, swallowing, taste, and smell, so the cancers and their treatments may have considerable functional sequelae with subsequent impairment of quality of life. Decisions about treatment are usually complex, and they must balance efficacy of treatment and likelihood of survival, with potential functional and quality of life outcomes. Patients and their carers need considerable support during and after treatment.

In this first part of a two article series, we review the common presentations of head and neck cancer. We also discuss common investigations and new diagnostic techniques, as well as briefly touching on screening and prevention. In this review, we have used evidence from national guidelines, randomised trials, and level II-III studies. We have limited our discussions to squamous cell carcinoma of the head and neck, which constitutes more than 85% of head and neck cancers.

How common is head and neck cancer and who gets it?

Cancer of the mouth and oropharynx is the 10th most common cancer worldwide, but it is the seventh most common cause of cancer induced mortality.
In 2002, the World Health Organization estimated that there were 600 000 new cases of head and neck cancer and 300 000 deaths each year worldwide, with the most common sites being the oral cavity (389 000 cases a year), the larynx (160 000), and the pharynx (65 000).

The male to female ratio reported by large scale epidemiological studies and national cancer registries varies from 2:1 to 15:1 depending on the site of disease.w1 The incidence of cancers of the head and neck increases with age. In Europe, 98% and 50% of patients diagnosed are over 40 and 60 years of age, respectively.w2

What regions have the highest incidence?

A high incidence of head and neck cancer is seen in the Indian subcontinent, Australia, France, Brazil, and Southern Africa (table). Nasopharyngeal cancer is largely restricted to southern China. The incidence of oral, laryngeal, and other smoking related cancers is declining in North America and western Europe, primarily because of decreased exposure to carcinogens, especially tobacco.2 In contrast, because of the 40 year temporal gap between changes in population tobacco use and its epidemiological effects, the worst of the tobacco epidemic has yet to materialise in developing countries.

WHO projections estimate worldwide mortality figures from mouth and oropharyngeal cancer in 2008 to be 371 000. This is projected to rise to 595 000 in 2030 because of a predicted rise in mortality in South East Asia (182 000 in 2008 to 324 000 in 2030). Modest rises are predicted in Africa, the Americas, and the Middle East, whereas mortality in Europe is expected to remain stable.

Data for head and neck cancers in 2004 in WHO regions*

Several retrospective analyses of samples collected from patients recruited in randomised trials, as well as retrospective patient series, have shown recent changes in epidemiology and pathogenesis of head and neck cancers related to the human papillomavirus (HPV), especially oropharyngeal carcinoma.

A rapid rise in HPV related oropharyngeal cancers in particular has been shown in epidemiological studies from the developed world.4 For example, the United Kingdom has seen a doubling in the incidence of oropharyngeal cancer (from 1/100 000 population to 2.3/100 000) in just over a decade.

A recent retrospective study showed a progressive proportional increase in the detection of HPV in oropharyngeal squamous cell carcinomas in Stockholm over the past three decades: 23% in the 1970s, 29% in the 1980s, 57% in the 1990s, 68% between 2000 and 2002, 77% between 2003 and 2005, and 93% between 2006 and 2007. Other prospective studies, such as ones from the United States, have also reported proportional increases.


What are the risk factors for head and neck cancer?
Tobacco and alcohol
The major risk factors are tobacco (smoking and smokeless products such as betel quid) and alcohol. They account for about 75% of cases, and their effects are multiplicative when combined. Smoking is more strongly associated with laryngeal cancer and alcohol consumption with cancers of the pharynx and oral cavity.w4 Pooled analyses of 15 case-control studies showed that non-smokers who have three or more alcoholic drinks (beer or spirits) a day have double the risk of developing the disease compared with non-drinkers.
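To illustrate what a multiplicative joint effect means here, a minimal numerical sketch with hypothetical relative risks (these numbers are for arithmetic only and are not taken from the pooled analyses):

\[
\mathrm{RR}_{\text{smoking + alcohol}} \approx \mathrm{RR}_{\text{smoking}} \times \mathrm{RR}_{\text{alcohol}},
\qquad \text{e.g. } 3 \times 2 = 6,
\]

rather than the roughly 4 expected if the two exposures merely added their excess risks.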

Genetic factors

Most people who smoke and drink do not develop head and neck cancer, however, and a genetic predisposition has been shown to be important. The International Head and Neck Cancer Epidemiology Consortium (INHANCE) carried out pooled analyses of epidemiological studies that examined risks associated with the disease. This work confirmed the role of genetic predisposition that had been suggested by small studies. A family history of head and neck cancer in a first degree relative is associated with a 1.7-fold (1.2 to 2.3) increased risk of developing the disease.

Genetic polymorphisms in genes encoding enzymes involved in the metabolism of tobacco and alcohol have been linked with an increased risk of the disease. For example, a meta-analysis of 30 studies showed that a polymorphism in GSTM1, which encodes a protein involved in the metabolism of xenobiotics (glutathione S transferase), was associated with a 1.23-fold (1.06 to 1.42) increased risk of developing head and neck cancer.w5

Viral infection

Viral infection is a recognised risk factor for cancer of the head and neck. The association between Epstein-Barr virus infection and the development of nasopharyngeal cancer was first recognised in 1966.w6 More recently, HPV has attracted attention. Recent observational studies have found that this virus is a strong risk factor for the development of head and neck cancer, especially oropharyngeal cancer (odds ratio 63, 14 to 480), and they suggest that HPV infection—especially infection with HPV subtype 16 (HPV-16)—is an aetiological factor. The infection is thought to be sexually transmitted.

A pooled analysis of eight multinational observational studies that compared 5642 cases of head and neck cancer with 6069 controls found that the risk of developing oropharyngeal carcinoma was associated with a history of six or more lifetime sexual partners, four or more lifetime oral sex partners, and—for men—an earlier age at first sexual intercourse.

HPV related oropharyngeal carcinoma is a distinct disease entity. Patients are younger (usually 40-50 years old), often do not report the usual risk factors of smoking or high alcohol intake, and often present with a small primary tumour and large neck nodes. This may lead to delayed diagnosis.

Other risk factors

Other risk factors identified from pooled analyses of case-control studies include sex (men are more likely to have head and neck cancer than women),7 a long duration of passive smoking (odds ratio for >15 years at home: 1.60, 1.12 to 2.28), low body mass index (odds ratio for body mass index 18: 2.13, 1.75 to 2.58),w8 and sexual behaviour (for example, an increased odds of cancer of the base of tongue in men with a history of same sex sexual contact). Some evidence also points to a role of occupational exposure, poor dental hygiene, and dietary factors, such as low fruit and vegetable intake.w9

How does head and neck cancer present?

Patients with head and neck cancers present with a variety of symptoms, depending on the function of the site where they originate. Laryngeal cancers commonly present with hoarseness, whereas pharyngeal cancers often present late with dysphagia or sore throat. Many patients present with a painless neck node. However, patients with head and neck cancer can also present with non-specific symptoms, or with symptoms commonly associated with benign conditions, such as sore throat or ear pain.


Box 1 lists “red flag symptoms” that practice guidelines consider to warrant urgent referral and consultation with a specialist head and neck clinician. UK guidelines specify that urgent referral should mean that patients are seen within two weeks. Head and neck centres often run a dedicated clinic—a neck lump clinic—to diagnose such patients. Box 2 describes unusual clinical scenarios in which clinicians may miss a diagnosis of cancer.

Box 1 “Red flag” symptoms and signs of head and neck cancer

Any of the following lasting for more than three weeks.
Symptoms
Sore throat
Hoarseness
Stridor
Difficulty in swallowing
Lump in neck
Unilateral ear pain

Signs

Red or white patch in the mouth
Oral ulceration, swelling, or loose tooth
Lateral neck mass
Rapidly growing thyroid mass
Cranial nerve palsy
Orbital mass
Unilateral ear effusion


Box 2 Presentations where cancer might easily be missed
Persistently enlarged neck nodes in younger patients (30-50 years)
These are often human papillomavirus related tumours. Patients often do not have the usual risk factors for head and neck cancer—they are often non-smokers and do not drink alcohol heavily. Tumours are often small or occult within normal looking tonsils. Because of the patient's young age, absence of risk factors, and unusual presentation, the problem might be confused with benign reactive nodal enlargement and the diagnosis delayed.

Persistent unilateral otalgia with no signs of ear infection in patients over 30

Patients with this problem should also be considered for early referral to a head and neck surgeon who can do a full examination of the upper aerodigestive tract with a flexible nasolaryngoscope to exclude pharyngeal and postnasal space tumours.

Recent onset wheeze in a patient over 40, usually a heavy smoker

The “wheeze” is in fact a mild biphasic stridor, and the patient, who may also have breathlessness, may be misdiagnosed as having sequelae of chronic obstructive airway disease or late onset asthma and treated as such. This presentation, however, may be that of a slow growing laryngeal carcinoma.
Clinicians should consider this diagnosis when late onset wheeze or asthma does not respond to drugs in patients who smoke or give a history of heavy alcohol intake.

How are suspicious lesions investigated?

A recent BMJ clinical review discussed the investigation of oral lesions in detail. Examination of any lesion of the head or neck should include palpation of the entire neck for lymph nodes, and examination of the scalp and the whole oral cavity, including tongue, floor of mouth, buccal mucosa, and tonsils. Dentures should be removed before examination. The nose and ears should also be examined, especially if no other abnormalities are found. Flexible nasolaryngoscopy allows proper examination of the nasal cavities, postnasal space, base of the tongue, larynx, and hypopharynx. Box 3 summarises the investigations that are performed in specialist care.

Box 3 Investigations used for head and neck cancer

Imaging
Computed tomography scanning from the skull base to the diaphragm is the first line investigation to assess nodal metastasis and identify the primary tumour site and tumour size
Magnetic resonance imaging is indicated for:
Oral cavity and oropharyngeal tumours; it provides better information than computed tomography because of the absence of interference from dental amalgam and the better delineation of soft tissue extension
Cases where extension through the laryngeal cartilage is suspected but cannot be conclusively determined on computed tomography


Ultrasound guided fine needle aspiration performed by experienced practitioners is highly accurate and used by some centres to diagnose nodal metastasis and determine its distribution

Positron emission tomography-computed tomography scanning is used to investigate occult primary and distant metastases in some cases
Sentinel node biopsy is used to detect nodal metastases in cases with a high risk of occult metastasis. A radioactive tracer with blue dye is injected into the lesion, often a mouth cancer, and under general anaesthesia a Geiger counter locates the node with highest radioactivity, which is removed. If the node contains tumour, the patient undergoes neck dissection

Histological confirmation of diagnosis

Examination under anaesthetic and biopsy allows assessment of the size and extent of the primary tumour
Fine needle aspiration or core biopsy, often under ultrasound guidance, can provide cytological evidence of nodal metastasis

Diagnosis of the cancer is confirmed on histology of the biopsy from the primary site. The new technique of fusion positron emission tomography-computed tomography has become one of the most important diagnostic tools for head and neck cancers. It combines normal computed tomography scanning with functional imaging using 18F-fluorodeoxyglucose (18F-FDG), which is taken up preferentially by cells with high metabolic activity, especially cancer cells (fig).

This technique can therefore help identify occult primary tumours, which are relatively common and not detected by examination and conventional imaging. The technique may also have a role in the assessment of persistent nodal disease after treatment, and in the monitoring and follow-up of patients with head and neck cancer in the longer term, but sufficient evidence to support this is not yet available.



Positron emission tomography-computed tomography scans showing: left, recurrence in neck (arrow) in a patient who had previously undergone a neck dissection and chemoradiotherapy; and right, 53 year old man with adenoid cystic carcinoma of the parotid gland showing spinal metastases in the fourth lumbar vertebra

To prove that a tumour is caused by HPV, virus specific DNA must be identified within the tumour and it must be shown that it has undergone transcription. HPV DNA can be demonstrated by polymerase chain reaction or in situ hybridisation. Transcription can be demonstrated by using immunohistochemistry to identify expression of p16, a downstream product of HPV DNA transcription.

Can we screen for head and neck cancer?

Data are available for oral cancer screening only. It is unclear whether treating premalignant lesions can prevent the occurrence of invasive cancer. A Cochrane review of randomised controlled trials of screening for oral cancer or precursor oral lesions found no strong evidence to support visual examination or other methods of screening for oral cancer in the general population.


The sensitivity of visual examination of the mouth for detecting oral precancerous and cancerous lesions varies from 58% to 94% and the specificity from 76% to 98%. These figures may be even lower for areas affected by HPV related oropharyngeal cancer, such as the tonsil and base of tongue, which are less accessible.

Randomised studies in areas of high incidence have suggested that opportunistic visual screening of high risk groups may reduce mortality. In special groups such as patients with Fanconi’s anaemia, who have a higher lifetime risk of developing head and neck cancer, it is recommended that everyone over the age of 10 is screened every four months.

Can head and neck cancer be prevented?

Prevention of head and neck cancer is closely linked to the success of tobacco control programmes. After pooling more than 50 000 sets of individual level data from case-control studies, INHANCE estimated that quitting tobacco smoking for one to four years reduces the risk of developing head and neck cancer compared with current smoking, with further risk reduction at 20 years or more, at which time risk is similar to that of never smokers.
For alcohol use, a beneficial effect was seen only after 20 years or more of quitting (0.60, 0.40 to 0.89 compared with current drinking).

HPV related oropharyngeal cancer may theoretically be prevented by vaccination against HPV-16, although no strong evidence is available to support this. Currently, most national HPV vaccination programmes include only girls, because several health economics assessments did not support the cost effectiveness of including boys. However, the rapid increase in HPV related oropharyngeal cancer has led some health professionals to call for a reassessment of the cost effectiveness of including boys in such programmes.

Tips for non-specialists

Refer any patient with hoarseness, stridor, swallowing problems, unilateral ear pain, a lump in the neck, a red or white patch or ulceration in the mouth, cranial nerve palsy, or an orbital mass urgently to a specialist
Patterns of presentation of head and neck cancer are changing, possibly because of a rise in human papillomavirus related cancers. Patients may present at a younger age (30-50 years) with an isolated neck lump and may not give a history of smoking or heavy alcohol consumption
BMJ 2010;341

• Erythropoiesis-Stimulating Agents in Patients Undergoing Hemodialysis

• These agents might help those who are severely anemic but could cause harm in others.
• Patients with chronic renal failure and end-stage renal disease (ESRD) develop anemia, which usually responds to erythropoiesis-stimulating agents (ESAs). However, use of ESAs remains controversial because of safety issues that have arisen when these agents are used to treat patients with mild (rather than severe) anemia or when such treatment elevates hemoglobin levels too quickly or to within normal limits.

JW Oncol Hematol Dec 11 2006

To address issues associated with anemia correction in patients with ESRD, investigators from academia and industry analyzed data from nearly 270,000 patients who received treatment at 4500 hemodialysis units. Estimated dosing of ESAs and intravenous iron was stratified within four hematocrit categories: <30%, 30% to 32.9%, 33% to 35.9%, and ≥36%. From 1999 to 2006, 22.6% of patients died.

Mortality was highest during the months after patients' hematocrit levels were <30% and lowest when hematocrit levels were ≥36% (mortality rates, 2.1% and 0.7%, respectively). Among patients with hematocrit levels <30%, mortality was lowest in those who received the highest doses of ESAs (hazard ratio of the highest quintile of ESA doses vs. the lowest quintile, 0.94; 95% confidence interval, 0.90–0.97).

Mortality rose with increasing ESA doses in patients with hematocrit levels ≥33% and was highest among patients who had hematocrit levels ≥36% and had received the highest ESA doses (highest vs. lowest quintile of ESA doses: HR, 1.11; 95% CI, 1.07–1.15). Findings for iron administration were similar: treatment was associated with lower mortality in those with hematocrit levels <30%, but more-frequent iron dosing was associated with higher mortality in those with hematocrit levels ≥36%.

Comment: Findings from this large observational study show that use of ESAs and intravenous iron in patients who have ESRD and hematocrit levels < 30% is associated with lower mortality; conversely, higher ESA doses and more-frequent administration of iron are associated with higher mortality in those with hematocrit levels > 36%.

These results are supported by several recent studies (N Engl J Med 2010; 362:189). Raising hemoglobin and hematocrit levels in severely anemic patients by administering ESAs and iron can reduce the need for transfusions and might help prevent cardiovascular complications and death. The reasons that these agents are deleterious in patients with normal or nearly normal hematocrit levels are not fully understood.
Journal Watch Oncology and Hematology March 4, 2010

March 4, 2010 — Contralateral prophylactic mastectomy (CPM) is associated with a small survival benefit in a subgroup of women with breast cancer. This association was primarily observed in women younger than 50 years with early-stage estrogen-receptor (ER)-negative breast cancer, according to a report published online February 25 in the Journal of the National Cancer Institute.
Contralateral Prophylactic Mastectomy May Benefit Small Subgroup


"We did identify a small group in which a benefit for CPM could be demonstrated," explained author George Chang, MD, MS, assistant professor in the Department of Surgical Oncology at the University of Texas M.D. Anderson Cancer Center in Houston.
For the subgroup showing a benefit, the 5-year adjusted breast cancer survival rate was 4.8 percentage points higher in those who underwent CPM.

The authors note that a combination of factors appears to create "optimal conditions" in which to consider CPM. These include a high absolute lifetime risk for contralateral breast cancer, a lack of available chemoprevention options, and a low risk for death from the index tumor.

On the basis of these results, clinicians are now able to provide patients with more information about their options, Dr. Chang said.
"Physicians can now give patients further data, such as if they fall within the population that is mostly likely to benefit from CPM," he told Medscape Oncology. "There is an absolute benefit of almost 5% at 5 years for some women, and the benefit may even be higher at 10 years."

Rising Rates, Benefit Unclear

The rates of CPM have been increasing, even though most patients with unilateral tumors will not develop contralateral breast cancer during their lifetimes. Nevertheless, the overall rate of CPM more than doubled from 1998 to 2003 (J Clin Oncol. 2007;25:5203-5209). As previously reported by Medscape Oncology, the number of women with unilateral ductal carcinoma in situ who undergo CPM in the United States also markedly increased from 1998 to 2005.
The rising rates are both patient and provider driven, Dr. Chang told Medscape Oncology.

Factors Influencing Benefit of CPM

The authors note that their observation that lower breast-cancer-specific mortality is associated with CPM in younger women might be due, at least in part, to the larger absolute lifetime risk for metachronous contralateral breast cancer combined with a low probability of competing causes of death. "For older women, such as [those older than] 60 years, there is a greater chance of comorbidities," said Dr. Chang. "The risk of dying from a different comorbidity may outweigh the risk of dying from another breast cancer."

Among patients with advanced disease, the data suggest that the risk for death from a potential contralateral tumor is outweighed by the mortality risk from their initial breast cancer. "That isn't to say that none of these patients will benefit from CPM," he said, "but we did not see a survival benefit for this group."

The findings are also consistent with the established role of antiestrogen therapy in reducing the risk for contralateral breast cancer. Although the authors note that their analysis could not directly incorporate the use of antiestrogen therapies, the finding that ER-positive women had a 50% reduction in the rate of subsequent contralateral breast cancer, compared with ER-negative patients, "is consistent with the known clinical benefits of tamoxifen therapy."

Improved Survival Limited to Subgroup

Dr. Chang and colleagues used the Surveillance, Epidemiology, and End Results database to identify 107,106 women with breast cancer who had undergone mastectomy between 1998 and 2003. A subset of 8902 women who underwent CPM during this same time period was also identified. The researchers then estimated the association of CPM with breast-cancer-specific survival, with further analyses by age, disease stage, and ER status.


In a univariate analysis, CPM was associated with improved disease-specific survival for women with stages I to III breast cancer (hazard ratio [HR] for death, 0.63; 95% confidence interval [CI], 0.57 - 0.69; P < .001). Risk-stratified analysis showed that this association was due to a reduction in breast-cancer-specific mortality among patients between the ages of 18 and 49 years with stages I or II ER-negative cancer (HR for death, 0.68; 95% CI, 0.53 - 0.88; P = .004).

The 5-year adjusted breast cancer survival for women in this subgroup improved with CPM, compared with those who did not undergo the procedure (88.5% vs 83.7%). Conversely, the authors did not find a reduction in breast-cancer-related death associated with CPM in any of the subgroups of women older than 60 years.
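Translating these subgroup survival figures into a per-patient quantity is straightforward arithmetic; the sketch below is our back-of-the-envelope calculation, not the authors', and derives the absolute risk reduction and the implied number needed to treat at 5 years.

```python
# Back-of-the-envelope: absolute risk reduction (ARR) and number needed to
# treat (NNT) from the 5-year adjusted survival figures quoted above for the
# young, early-stage, ER-negative subgroup.

survival_with_cpm = 0.885     # 88.5% 5-year breast-cancer survival with CPM
survival_without_cpm = 0.837  # 83.7% without CPM

arr = survival_with_cpm - survival_without_cpm  # 0.048, i.e. 4.8 percentage points
nnt = 1 / arr                                   # ~21 women per breast-cancer death averted
print(f"ARR = {arr:.1%}, NNT ≈ {nnt:.0f}")
```

In other words, taking the point estimates at face value, roughly 21 women in this subgroup would need to undergo CPM to avert one breast-cancer death at 5 years.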

Among women 50 to 59 years of age, CPM was associated with improved breast-cancer-specific survival in those with early-stage ER-negative disease (HR for death, 0.66; 95% CI, 0.45 - 0.97; P = .04) and with later-stage ER-positive disease (HR for death, 0.54; 95% CI, 0.32 - 0.92; P = .02). These findings, the authors note, most likely "reflect the mixed effects of a true association with CPM and unexplained model variance that are caused by differences in the health status among women in this group."

Dr. Chang emphasized that because this was an observational study and not a randomized trial, "a causal relationship between survival and CPM cannot be proved, so we cannot say that there was a benefit or that there was no benefit with CPM."
It is highly unlikely that a randomized trial will be conducted, he pointed out. "But in a large observational study such as this one, we can still show associations with CPM and control for confounders."

American Geriatrics Society (AGS) 2010 Annual Scientific Meeting: PSA Screening Performed Too Frequently in Frail, Elderly Men

May 14, 2010 (Orlando, Florida) — More than half of the elderly men in the United States who should not be screened for prostate-specific antigen (PSA) are, in fact, being evaluated, according to researchers reporting here at the American Geriatrics Society 2010 Annual Scientific Meeting.
After analyzing the frequency of PSA screening in elderly men across the country, Cynthia So, a third-year medical student, and her mentor, Louise Walter, MD, associate professor, Department of Medicine, Division of Geriatrics, at the University of California at San Francisco, determined that there are multiple independent predictors of excessive screening.

"PSA is not recommended for very elderly men who are ill," Dr. Walter told Medscape Internal Medicine. "We started this study to determine if and why it was happening at such a substantial rate and to identify ways to intervene in the process."
The team performed a cross-sectional study of 104 Veterans Affairs (VA) hospitals, which resulted in a population of 622,262 men older than 70 years of age who were eligible for PSA screening. Facilities were categorized by "region, institutional incentives for primary care performance measures, and location in a hospital referral region [HRR] with a high PSA screening rate," Ms. So told meeting attendees.

Of the facilities, 25% were in the western United States, 41% had primary care performance incentives, and 11% were in a high screening HRR. The lowest screening rate was 33% and the highest was 87% (median, 52%). "Even among patients over 80 years of age with a Charlson score of 4 or more, screening rates ranged between 17% and 72%," said Ms. So.

Furthermore, she noted, more screenings were done in VA hospitals with primary care incentives (46%) than in institutions without incentives (27%; P = .05). The team used multivariate analyses to determine that patient characteristics, region, and location outside of a high-screening HRR were all independent predictors of lower PSA screening rates.


The volume of screenings is of particular concern, Dr. Walter and Ms. So said during an interview with Medscape Internal Medicine at the meeting, because PSA screenings in elderly, frail men can do more harm than good.
"In older men, you have a very high percentage with benign prostatic hypertrophy, which can cause high PSA, or you can find cancers that are not going to affect the patient during their lifetime," explained Ms. So. She pointed out that the stress and subsequent treatments can actually shorten the life expectancy of elderly, frail men.

"If you have people whose life expectancy is nowhere near 10 years, then you have no benefit and all the harm," added Dr. Walter. She noted that the highest screening rates existed in hospitals that tended to perform PSA tests as part of routine testing, rather than on an individualized basis, and that these hospitals also had quality indicators and performance assessments that responded positively to increased testing volumes.

"There is tremendous concern about the overutilization of testing in elderly people," said Daniel Berlowitz, MD, MPH, professor of health policy and management at the Boston University School of Public Health, in Massachusetts, and director of the Center for Health Quality, Outcomes and Economic Research. This situation "is a prime example of elderly men getting tested for a condition for which there is no evidence of benefit from future interventions."

Ms. So cautioned that age alone should not be the determining factor in the decision to perform PSA testing. "We have to take into consideration how healthy they are and their life expectancy," she said, noting that age and comorbidities were both evaluated in her analysis.

"This is not just a VA problem," Dr. Walter emphasized. "We have tested both inside and outside the VA, and it is a universal issue."
All 3 researchers recommended that in the elderly and frail male population — a "major component" of the population being screened — special care should be taken to analyze patients' situations individually and to make sure that "patient preferences [for PSA screening] be informed preferences."
American Geriatrics Society (AGS) 2010 Annual Scientific Meeting. Presented May 13, 2010.

Risk for False-Positive Lung Cancer Screening Is Substantial

Nine percent of patients with false-positives on computed tomography required invasive testing.
Despite lack of proven efficacy, lung cancer screening has been widely promoted. In the ongoing National Lung Screening Trial (NLST), about 50,000 current or former smokers have been randomized to screening by either chest x-ray or computed tomography (CT).
In a feasibility study conducted in preparation for the NLST, 3318 current or past smokers (age range, 55–74 years) were randomly assigned to receive annual chest radiography or annual low-dose CT.

A false-positive was defined as a positive test result followed by a negative completed work-up or no diagnosis of lung cancer within 12 months. After two screening tests 1 year apart, risk for a false-positive result was 15% for participants in the chest radiography group and 33% for those in the CT group. Four percent of those with false-positive chest radiographs and 7% of those with false-positive CT scans underwent at least one invasive procedure as a result. Overall, 1% to 2% of participants had true-positive test results.
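The way per-screen false-positive rates compound over repeated rounds can be sketched as follows. The calculation assumes, purely for illustration, that false positives in successive rounds are independent, and it back-calculates per-screen rates from the two-screen cumulative figures reported above; neither assumption comes from the study itself.

```python
# Illustrative sketch: cumulative false-positive (FP) risk over repeated
# screening rounds, assuming independence between rounds (an assumption
# made here for illustration, not reported by the feasibility study).

def cumulative_fp(per_screen_fp: float, n_screens: int) -> float:
    return 1 - (1 - per_screen_fp) ** n_screens

# Per-screen FP rates back-calculated from the reported two-screen cumulative
# risks of 15% (chest radiography) and 33% (low-dose CT):
fp_cxr = 1 - (1 - 0.15) ** 0.5  # about 8% per screen
fp_ct = 1 - (1 - 0.33) ** 0.5   # about 18% per screen

for n in (2, 5, 10):
    print(f"{n} screens: CXR {cumulative_fp(fp_cxr, n):.0%}, "
          f"CT {cumulative_fp(fp_ct, n):.0%}")
```

Under these assumptions, a decade of annual CT screening would leave most participants with at least one false-positive result.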

Comment: Lung cancer screening with CT scan (or chest radiography) has theoretical but unproven benefits and substantial proven harms. If screening is eventually proven to reduce lung cancer–specific mortality, we will need to understand better the ramifications of false-positive results.
Journal Watch General Medicine May 25, 2010


Screening for early detection of lung cancer

I give an annual lecture to first year medical students on what makes a good screening test. One of the hardest points to get across is that early detection does not necessarily lead to improved outcomes. Why isn't it the case, they ask, that finding cancer early is always better than finding it later? It's so counterintuitive.

To answer, I cite the randomised controlled trials (RCTs) done in the 1970s that tested chest x ray pictures and sputum cytology as screening tests for lung cancer. Although the screening tests found many asymptomatic lung cancers, none of the trials showed decreased mortality rates among the smokers who were screened, in comparison with controls. We would never have known this without RCTs, given that early detection nicely increased "survival times" (but really just the length of time that patients knew they had the disease).
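A toy example (ours, with entirely hypothetical dates) makes the lead-time point concrete: if screening only moves the date of diagnosis earlier while the date of death stays the same, measured survival from diagnosis improves even though the patient gains nothing.

```python
# Toy illustration of lead-time bias (hypothetical years, not trial data).
# The tumour is screen-detectable at year 0, causes symptoms at year 3,
# and the patient dies at year 5 whether or not screening happens.

death_year = 5
diagnosis_with_screening = 0      # detected early by the screening test
diagnosis_without_screening = 3   # detected only once symptoms appear

survival_screened = death_year - diagnosis_with_screening       # 5 years
survival_unscreened = death_year - diagnosis_without_screening  # 2 years

# "Survival from diagnosis" more than doubles, yet mortality is unchanged:
# the patient simply knew about the disease for three extra years.
print(survival_screened, survival_unscreened)
```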

RCTs have always been seen as the gold standard in evaluating screening tests because they eliminate many of the biases that taint uncontrolled observational studies. As a result of the RCTs of lung cancer screening, it was with some confidence that the US Preventive Services Task Force and other evidence based authorities recommended against using chest radiography to screen for lung cancer.

As opposed to many other preventive services, here we had direct evidence on whether a screening test worked or not. A more recent analysis of a huge screening trial for prostate, lung, and colorectal cancer has confirmed that screening chest radiography does not effectively reduce lung cancer mortality.

But lung cancer remains a critical problem. It is the leading cause of cancer deaths among men and (since they started smoking more) women. It is estimated that more than 157 000 Americans will die from lung cancer this year. It kills more people each year than do cancers of the breast, prostate, and colon combined.
Unlike survival from most other cancers, lung cancer survival has seen no significant improvement over the past 30 years. Up to 85% of patients with lung cancer die from their disease.

The good news is that this may be about to change.

On 4 November the US National Cancer Institute announced that it had terminated its National Lung Screening Trial (NLST) early because of positive results. The number of deaths from lung cancer was 20% lower among heavy smokers who received three annual screens with low dose helical computed tomography (CT) than among those screened with conventional chest radiography. This RCT was huge news in the United States, making the front pages of national newspapers despite being released just two days after our elections.


Hints about the effectiveness of CT screening for lung cancer have been appearing over the past 10 years, during which several single arm CT screening studies were published. Although their study designs did not permit a reliable assessment of the effect of the screening on cancer mortality, it was clearly a promising technology.

CT’s cross sectional views reduce the problem of overlying structures obscuring details, which plagues regular chest radiographs. Their improved contrast allows more subtle abnormalities to be identified. Treatments may have improved in recent years as well, although patients in the NLST trial did not get specialised care; they were referred for routine treatment once their cancers were diagnosed.

To their credit, everyone connected with the press release was careful to add caveats to the big news. This was just a preliminary press report, they said. The final analyses had not yet been done, let alone published in peer reviewed journals. The study applied only to heavy smokers (30 or more pack years) aged 55 or older. And it seemed to be required to add that smoking cessation is still the most effective and proved way to prevent lung cancer. How refreshing!
There are indeed plenty of questions left to ask. Around a quarter of the patients enrolled in the trial had a positive scan result, the vast majority of which were false positives.

Given that heavy smokers will have all kinds of non-cancerous changes in their lungs that will be picked up by CT, the costs of screening in terms of worry, follow-up testing, and side effects will be high. At up to $300 (£190; €220) a scan, the dollar costs will be high as well. Then there is the radiation exposure. Although a low dose CT scan delivers only about 25% of the radiation of a diagnostic CT scan, it still involves much more ionising radiation than a chest x ray. What is the cumulative radiation risk of seven, or 10, or 20 annual scans?
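As a rough sense of scale for the cumulative-exposure question, the arithmetic below uses assumed, commonly quoted ballpark effective doses; they are not values from the NLST and vary with scanner and protocol.

```python
# Rough arithmetic on cumulative effective dose from repeated low-dose CT.
# Both dose figures are assumed ballpark values for illustration only.

DOSE_LOW_DOSE_CT_MSV = 1.5  # assumed effective dose per low-dose chest CT
DOSE_CHEST_XRAY_MSV = 0.1   # assumed effective dose per chest radiograph

for n_scans in (7, 10, 20):
    total_ct = n_scans * DOSE_LOW_DOSE_CT_MSV
    total_xr = n_scans * DOSE_CHEST_XRAY_MSV
    print(f"{n_scans} annual scans: about {total_ct:.1f} mSv with low-dose CT "
          f"vs about {total_xr:.1f} mSv with chest radiography")
```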

On the other hand, the efficacy of screening CT for lung cancer may be even greater than the 20% announced last week. The study was stopped early. Longer follow-up would likely have resulted in more deaths in the control group. Also, the study provided only three annual scans. What would have happened if more scans were done? And then there is the matter of all cause mortality, which was reduced by 7% in the CT group.

Many of these questions will be answered soon when the formal analyses of the trial are published. Others will have to await longer follow-up. Still others will remain unanswered. But it is great news indeed to have even a preliminary report of a large, well conducted RCT of a screening test for the leading cause of cancer deaths that led to a significant reduction in mortality.

• Colorectal cancer screening programmes have now been introduced or are about to be launched in many European countries, Australia, and New Zealand. In the United States, screening for colorectal cancer has long been actively promoted by the medical community and patient organisations.

Which tool is best for colorectal cancer screening?

Evidence from randomised controlled trials has been an absolute requirement for introducing new techniques in many fields of medicine, and screening is no exception. Until recently, the only screening tool that had been proved to be effective in reducing mortality from colorectal cancer was faecal occult blood testing.

The results of the UK Flexible Sigmoidoscopy Screening Trial were published recently. This multicentre randomised controlled trial investigated the effect of one-off sigmoidoscopy screening on the incidence of and mortality from colorectal cancer. More than 170 000 people aged 55-64 years were randomised to flexible sigmoidoscopy screening or no screening.


The screening took place at 14 centres throughout the United Kingdom between 1994 and 1999, and 71% of people who were invited attended the screening. In the intention to treat analysis, after 11 years of follow-up people invited to screening had a significantly reduced incidence of colorectal cancer (absolute difference 35 cases; hazard ratio 0.77, 95% confidence interval 0.70 to 0.84) and mortality from colorectal cancer (absolute difference 14 deaths; 0.69, 0.59 to 0.82).

In people who actually attended the screening (per protocol analysis) the incidence was reduced further (absolute difference 49 cases of colorectal cancer; 0.67, 0.60 to 0.76), as was mortality (absolute difference 19 deaths from colorectal cancer; 0.57, 0.45 to 0.72). The effect was apparent in both men and women.

The UK trial illustrates the value of long term publicly funded medical research. The study was designed in the early 1990s, and the main results are available almost 20 years later. Many people argue that medicine is developing so rapidly that a trial of this duration would be outdated by the time the results are available. This landmark study shows that this is a false assumption. It is important that large funding organisations like the NHS, the European Union, and others support long term clinical trials that tackle important health problems beyond the often short term scope of industry funded medical research.

Some countries already recommended endoscopic screening when the UK trial was in its recruitment phase. Endoscopic screening has been adopted for many years in the US despite the lack of randomised trials. During the past couple of years, such screening has been viewed more critically because of the lack of evidence for efficacy and effectiveness in randomised trials. This clearly indicates that high quality randomised trials must underpin any cancer screening programme.

The Norwegian equivalent of the UK trial (NORCCAP, Norwegian Colorectal Cancer Prevention), published in the BMJ in 2009, found no significant effect on the incidence of colorectal cancer. Mortality from colorectal cancer was not significantly different in the intention to screen analysis but was significantly reduced in people who attended screening. The most likely reason for the lack of effect on incidence is the shorter follow-up of seven years, compared with 11 years in the UK trial. Reassuringly, the data on mortality in the two trials are comparable.

Yearly hazard rates are useful for understanding the dynamic process that occurs over time after a screening intervention. The yearly hazard for the incidence of distal colorectal cancer in the UK trial shows a sustained reduction in incidence after screening over the 11 year follow-up. Colorectal cancer screening guidelines usually recommend flexible sigmoidoscopy with a five year screening interval. In light of the UK trial, longer screening intervals should be recommended.

Further follow-up of the trial would provide insight into the incidence beyond 11 years. However, this may be challenging because of the introduction of the NHS bowel cancer screening programme, which may lead to gradual contamination of the control group.
The observed effect on colorectal cancer incidence and mortality in the UK trial resulted from an effect on cancers in the distal colon. Incidence and mortality for proximal cancers were not reduced.

This can be explained by the UK trial's high threshold for referring patients with distal screen detected lesions for follow-up colonoscopy. The three other large scale randomised trials still in progress (in Italy, Norway, and the US) have lower thresholds for colonoscopy, so an effect of screening on the proximal colon may also be expected in these trials.

Of the three most commonly used colorectal cancer screening tests, flexible sigmoidoscopy is more effective in reducing colorectal cancer mortality than faecal occult blood testing and has a profound effect on incidence, which faecal occult blood testing does not. Another advantage of flexible sigmoidoscopy is the long interval between screens. An obvious disadvantage is its invasiveness and the need for bowel cleansing. No data from randomised trials on colonoscopy screening are available yet, but two randomised trials are in progress.

The UK trial provides valid and robust evidence for the efficacy of flexible sigmoidoscopy screening. The effectiveness of such screening in the general population is still uncertain, however, because the UK trial excluded people who did not explicitly express their wish to be randomised. The NORCCAP trial is the only study of flexible sigmoidoscopy screening that is truly population based and will provide an estimate for effectiveness after 10 years of follow-up in 2013.


However, compliance with screening and preferences for different screening tests differ between populations. Therefore, flexible sigmoidoscopy should be introduced into existing screening programmes in a randomised fashion, enabling head to head comparison with the standard screening test used. This is the only way to obtain valid effectiveness data in particular populations.

ABSTRACT

Background Nilotinib has been shown to be a more potent inhibitor of BCR-ABL than imatinib. We evaluated the efficacy and safety of nilotinib, as compared with imatinib, in patients with newly diagnosed Philadelphia chromosome–positive chronic myeloid leukemia (CML) in the chronic phase.
Nilotinib versus Imatinib for Newly Diagnosed Chronic Myeloid Leukemia
June 17, 2010, NEJM

Methods In this phase 3, randomized, open-label, multicenter study, we assigned 846 patients with chronic-phase Philadelphia chromosome–positive CML in a 1:1:1 ratio to receive nilotinib (at a dose of either 300 mg or 400 mg twice daily) or imatinib (at a dose of 400 mg once daily). The primary end point was the rate of major molecular response at 12 months.

Results At 12 months, the rates of major molecular response for nilotinib (44% for the 300-mg dose and 43% for the 400-mg dose) were nearly twice that for imatinib (22%) (P<0.001 for both comparisons). The rates of complete cytogenetic response by 12 months were significantly higher for nilotinib (80% for the 300-mg dose and 78% for the 400-mg dose) than for imatinib (65%) (P<0.001 for both comparisons). Patients receiving either the 300-mg dose or the 400-mg dose of nilotinib twice daily had a significant improvement in the time to progression to the accelerated phase or blast crisis, as compared with those receiving imatinib (P=0.01 and P=0.004, respectively).

No patient with progression to the accelerated phase or blast crisis had a major molecular response. Gastrointestinal and fluid-retention events were more frequent among patients receiving imatinib, whereas dermatologic events and headache were more frequent in those receiving nilotinib. Discontinuations due to aminotransferase and bilirubin elevations were low in all three study groups.
Conclusions Nilotinib at a dose of either 300 mg or 400 mg twice daily was superior to imatinib in patients with newly diagnosed chronic-phase Philadelphia chromosome–positive CML.

Predicting Cure in Patients with Hodgkin Lymphoma

The number of infiltrating tissue macrophages correlated strongly with treatment outcomes in classic Hodgkin lymphoma.
Most patients with early- and advanced-stage classic Hodgkin lymphomas (HLs) are cured by present-day combination chemotherapy regimens or chemotherapy plus involved-field radiation.

Finding reliable markers that help predict likelihood of response to therapy might allow clinicians to individualize patients' treatments. With this goal, investigators utilized gene expression profiling to identify biomarker signatures in lymph node samples from 130 HL patients and then retrospectively correlated those signatures with patients' outcomes.

The presence of gene expression signatures characteristic of tumor-associated macrophages correlated highly with treatment failure. This finding was validated in a second, independent cohort of 166 HL patients. In this second analysis, investigators assayed diagnostic biopsy samples with an anti-CD68 immunohistochemical stain to identify and semiquantitatively score tissue macrophages.


They found a strong direct correlation between higher numbers of infiltrating macrophages and increased risk for both primary and second-line treatment failures, including autologous stem-cell transplantation. Disease-specific survival at 10 years was 100% among patients who had early-stage HLs and low numbers of infiltrating macrophages.

Comment: Recent clinical research of HL has focused largely on shortening treatment duration and lowering intensity with the goal of minimizing short- and long-term toxicities while preserving high cure rates. If the current findings are confirmed in prospective studies, the prognostic utility of staining for CD68 — a marker already widely available — will enhance our ability to risk-stratify individual patients and to adapt therapy accordingly. This study also provides clues to potentially important interactions between malignant Reed-Sternberg cells (the distinguishing cells in HL) and the tumor microenvironment, which promise to improve the understanding of HL pathogenesis and treatment resistance.
Journal Watch Oncology and Hematology March 10, 2010

ABSTRACT

Background Treatment with dasatinib, a highly potent BCR-ABL kinase inhibitor, has resulted in high rates of complete cytogenetic response and progression-free survival among patients with chronic myeloid leukemia (CML) in the chronic phase, after failure of imatinib treatment. We assessed the efficacy and safety of dasatinib, as compared with imatinib, for the first-line treatment of chronic-phase CML.
Dasatinib versus Imatinib in Newly Diagnosed Chronic-Phase Chronic Myeloid Leukemia

NEJM June 17, 2010

Methods In a multinational study, 519 patients with newly diagnosed chronic-phase CML were randomly assigned to receive dasatinib at a dose of 100 mg once daily (259 patients) or imatinib at a dose of 400 mg once daily (260 patients). The primary end point was complete cytogenetic response by 12 months, confirmed on two consecutive assessments at least 28 days apart. Secondary end points, including major molecular response, were tested at a significance level of 0.0001 to adjust for multiple comparisons.

Results After a minimum follow-up of 12 months, the rate of confirmed complete cytogenetic response was higher with dasatinib than with imatinib (77% vs. 66%, P=0.007), as was the rate of complete cytogenetic response observed on at least one assessment (83% vs. 72%, P=0.001). The rate of major molecular response was higher with dasatinib than with imatinib (46% vs. 28%, P<0.0001), and responses were achieved in a shorter time with dasatinib (P<0.0001).

Progression to the accelerated or blastic phase of CML occurred in 5 patients who were receiving dasatinib (1.9%) and in 9 patients who were receiving imatinib (3.5%). The safety profiles of the two treatments were similar.

Conclusions Dasatinib, administered once daily, as compared with imatinib, administered once daily, induced significantly higher and faster rates of complete cytogenetic response and major molecular response. Since achieving complete cytogenetic response within 12 months has been associated with better long-term, progression-free survival, dasatinib may improve the long-term outcomes among patients with newly diagnosed chronic-phase CML.

Paraneoplastic Neurological Syndromes: Prospective Case Series

A large database study provides new understanding of these conditions.
To gain insight into the clinical and immunological associations seen in paraneoplastic neurological syndromes (PNSs), researchers reviewed a database with prospectively obtained information on 979 patients with PNSs from 20 centers, of whom 968 had a definite PNS according to previously defined criteria (J Neurol Neurosurg Psychiatry 2004; 75:1135).


Most patients (90.4%) had a unifocal PNS; cerebellar degeneration and sensory neuronopathy were the most common. Only 9.6% of patients had a multifocal PNS. Excluding patients with anti-Hu–associated encephalomyelitis, the most frequent multifocal presentation was limbic encephalitis associated with another syndrome. In 65% of patients, diagnosis of the neurological syndrome preceded the cancer diagnosis.

Several previously identified features of PNSs were also apparent in this series. Solid tumors were more common than hematologic disorders; small cell lung cancer, cancers of the breast and ovary, and non–small cell lung cancer were the cancers most frequently associated with PNSs; and patients with PNSs often had limited or local neoplastic disease when cancer was diagnosed. Well-described onconeural antibodies were found in almost 70% of patients; the Hu antibody was the most frequent.

Onconeural antibodies were not found in 18.3% of the patients, reinforcing the concept that the absence of antibodies does not rule out the diagnosis of a PNS. The authors note that information about the recently described antibodies to cell-surface antigens was not captured and would likely account for some of the antibody-negative cases. Of the 403 patients who died, 150 died of tumor progression and 109 of the PNS. Dysautonomia had the worst prognosis of all the PNSs.

Comment: Paraneoplastic neurological syndromes are increasingly recognized but remain incompletely understood. Many case series are small and retrospective and do not include comprehensive information. This collection of standardized clinical data of paraneoplastic cases is an important resource and demonstrates the value of such collaborative efforts. For example, the finding that more patients with PNSs die from neoplastic causes than from the PNS, and the poor prognosis associated with dysautonomia, would not likely have been found in smaller series.
Journal Watch Neurology June 8, 2010

Positron Emission Tomography/Computed Tomography in Paraneoplastic Neurological Disorders

This imaging modality should be the initial diagnostic test for patients with these disorders, but follow-up biopsy is mandatory.
Positron emission tomography/computed tomography (PET/CT) has revolutionized both diagnosis and treatment of cancer (J Nucl Med 2010; 51:401 and Lancet Oncol 2010; 11:92). The technique combines CT examination of lesion size with imaging of glucose uptake on PET.

In patients with paraneoplastic neurological disorders, PET alone is known to be superior to CT alone for identifying occult cancers (JW Neurol Jan 10 2002). Now, researchers have examined the value of PET/CT in 56 patients with suspected paraneoplastic syndromes who were retrospectively identified as having undergone such imaging after initial test results (including CT alone) were negative.

PET/CT results were positive in 22 of the patients. Of these, 10 patients had biopsy-confirmed cancer: 7 of 13 patients with autoantibodies strongly suggestive of cancer and 3 of 26 patients with autoantibodies sometimes associated with cancer. (No patient without autoantibodies had biopsy-confirmed cancer.) How many of the patients with initially negative results will subsequently prove to have cancer is unknown.

Comment: PET/CT, when available, should be the initial diagnostic test in a patient with a suspected paraneoplastic syndrome. However, false positives and false negatives can occur. If PET/CT findings are positive, biopsy is mandatory, because noncancerous inflammatory (e.g., sarcoid) lesions may increase glucose uptake. A negative PET/CT finding does not rule out an underlying cancer; a search using other modalities (e.g., magnetic resonance imaging or ultrasound) and repeat PET/CT after several months may both be necessary.

One additional caveat applies: Finding a cancer, particularly one not generally associated with the identified antibody, does not unequivocally identify that cancer as the cause of the paraneoplastic disorder. A careful search may identify another cancer (J Neurooncol 2006; 78:49). Probing the lesion for the presence of the antigen recognized by the antibody is the only way to unequivocally identify a cancer as causal.


A third of all cancers in the UK are potentially preventable, finds review
Jacqui Wise
London
A third (more than 100 000 cases) of all cancers in the United Kingdom are caused by just four risk factors and are potentially preventable, concludes a comprehensive review of the evidence.
The researchers estimated that 106 845 cancers in the UK in 2010 were associated with smoking, poor diet, alcohol, and excess weight. When all 14 lifestyle and environmental risk factors were included, this figure rose to about 134 000 (43% of the total).
6 December 2011

The review, published as a supplement in the British Journal of Cancer, found that 45% of all cancers in men and 40% in women could be prevented. The review looked at all the available evidence together with the latest (2010) estimates of cancer incidence.
The study’s lead author, Max Parkin, a Cancer Research UK epidemiologist who is based at Queen Mary college, University of London, said, “Leading a healthy lifestyle won’t guarantee you won’t get cancer, but doing so can greatly stack up the odds in your favour.” He added: “Nine out of 10 lung cancer cases can be prevented. Half of all colorectal cancers are due to the main four risk factors.”

The most important lifestyle risk factor for both men and women is smoking, which causes 23% of cancers in men and 15.6% in women. Harpal Kumar, chief executive of Cancer Research UK, said, "Smoking is still the biggest priority to tackle in terms of cancer prevention. The rates did come down substantially but have now plateaued, so we need to do much more. We need to get the message across that smoking is not just a risk factor for lung cancer but other cancers too."

For women, being overweight was shown to have a greater effect than drinking alcohol. The percentage of cancers in women linked to overweight and obesity was 6.9%, double the 3.3% for alcohol. "Being overweight is a clear risk factor for breast cancer, and because breast cancer is so common, that makes it higher in the ranking," said Professor Parkin.
Infections such as human papillomavirus were linked to 3.7% of cancers in women, excessive sun exposure and sunbeds to 3.6%, and lack of fruit and vegetables to 3.4%.

For men, the next biggest risk factor after smoking was a lack of fruit and vegetables, at 6.1%. Occupational risks, such as exposure to asbestos, were linked to 4.9% of cancers in men, alcohol to 4.6%, and being overweight or obese to 4.1%.


"Like most healthcare systems in the world we focus a disproportionate amount on treatment rather than prevention," said Dr Kumar. "If we could prevent 134 000 cases of cancer a year that would be an enormous saving for the NHS."

In a foreword to the report, Richard Peto, a leading expert on deaths attributable to tobacco, writes, "Each of these four main strategies for cancer control would also substantially reduce the burden of other non-communicable diseases, particularly cardiovascular, diabetic, renal, and hepatic disease." Professor Peto concludes: "This supplement will help focus the attention of researchers, individuals, and policy makers on the relative importance of the currently known causes of cancer."
Cite this as: BMJ 2011;343:d7999
© BMJ Publishing Group Ltd 2011


