What are some ways to limit lipid peroxidation in the body?


Answer: Certain foods can either support or hinder the body's antioxidant defenses. Fruits, vegetables, and other plant foods such as beans and seeds contain polyphenols (antioxidants) known to reduce oxidative stress and lipid peroxidation. The best way to ensure a healthy antioxidant environment is to boost consumption of whole plant foods and go easy on added oils. Red and processed meats, by contrast, are known to disrupt the mechanisms that quench free-radical damage.

 

Nocturnal blood pressure and cardiovascular disease: a review of recent advances.


The accurate measurement, prediction and treatment of high blood pressure (BP) are essential issues in the management of hypertension. Ambulatory blood pressure monitoring (ABPM) has been shown to be superior to clinic BP measurements as ABPM can provide the following important information: (i) the mean BP levels, (ii) the diurnal variation in BP and (iii) the short-term BP variability. Among these parameters, there is increasing evidence that the mean nocturnal BP level is the most sensitive predictor of cardiovascular morbidity and mortality. Furthermore, several studies have shown that less nocturnal BP dipping, defined as less nocturnal BP decline relative to daytime BP, or a high night–day BP ratio was associated with poor prognosis irrespective of the 24-hour BP levels. These findings can be interpreted in at least two ways: namely, high nocturnal BP or less nocturnal BP dipping might be not only a potent risk factor for cardiovascular disease (CVD), but also a marker of pre-existing or concurrent diseases that can lead to nocturnal BP elevation. In this review, we consider the clinical utility of ABPM and in particular focus on the nocturnal BP levels or nocturnal BP dipping as a potent risk factor for CVD. In addition, the clinical management of high nocturnal BP and blunted nocturnal BP dipping with antihypertensive medications is discussed.

Source: Hypertension Research/Nature.


 

Losartan/hydrochlorothiazide combination vs. high-dose losartan in patients with morning hypertension—a prospective, randomized, open-labeled, parallel-group, multicenter trial.


The treatment of morning hypertension has not been established. We compared the efficacy and safety of a losartan/hydrochlorothiazide (HCTZ) combination and high-dose losartan in patients with morning hypertension. A prospective, randomized, open-labeled, parallel-group, multicenter trial enrolled 216 treated outpatients with morning hypertension evaluated by home blood pressure (BP) self-measurement. Patients were randomly assigned to receive a combination therapy of 50 mg losartan and 12.5 mg HCTZ (n=109) or a high-dose therapy with 100 mg losartan (n=107), each of which was administered once every morning. Primary efficacy end points were morning systolic BP (SBP) level and target BP achievement rate after 3 months of treatment. At baseline, BP levels were similar between the two therapy groups. Morning SBP was reduced from 150.3±10.1 to 131.5±11.5 mm Hg by combination therapy (P<0.001) and from 151.0±9.3 to 142.5±13.6 mm Hg by high-dose therapy (P<0.001). The morning SBP reduction was greater in the combination therapy group than in the high-dose therapy group (P<0.001). Combination therapy decreased evening SBP from 141.6±13.3 to 125.3±13.1 mm Hg (P<0.001), and high-dose therapy decreased evening SBP from 138.9±9.9 to 131.4±13.2 mm Hg (P<0.01). Although both therapies improved target BP achievement rates in the morning and evening (P<0.001 for both), combination therapy increased the achievement rates more than high-dose therapy (P<0.001 and P<0.05, respectively). In clinic measurements, combination therapy was superior to high-dose therapy in reducing SBP and improving the achievement rate (P<0.001 and P<0.01, respectively). Combination therapy decreased urine albumin excretion (P<0.05), whereas high-dose therapy reduced serum uric acid. Both therapies were associated with good adherence and few adverse effects (P<0.001). In conclusion, losartan/HCTZ combination therapy was more effective for controlling morning hypertension and reducing urine albumin than high-dose losartan.

Source: Hypertension Research/Nature.

 

 

The optimal timing of antihypertensive medication administration for morning hypertension in patients with cerebral infarction.


Morning hypertension is an independent risk factor for cardiovascular diseases, particularly stroke. However, the optimal time at which to take antihypertensive medication to treat morning hypertension remains unclear. We prospectively enrolled elderly patients (over 65 years old) with morning hypertension who had suffered an ischemic stroke (or strokes). Additional treatments (one of six arms) were randomly administered for 10 weeks in the morning, in the evening or at bedtime (n=15 for each time point/medication). The patients measured their blood pressure and heart rate at home for 14 days prior to the intervention and for the final 14 days, and recorded the data in a blood pressure diary. The patients’ urinary albumin/creatinine ratios were evaluated before and after the 10-week intervention. A total of 270 patients were enrolled in this study (mean age: 75.6±5.8 years; female/male ratio: 125/145). Their morning and evening systolic blood pressures were significantly decreased after following any of the study medication dosing schedules (P<0.001). However, the reductions in the differences between the morning and evening systolic blood pressures were significant only when the medication was taken in the evening or at bedtime (P<0.001 with repeated measures analysis of variance). Furthermore, the recovery rate from morning hypertension was also higher when the medication was taken in the evening (40.0%) or at bedtime (45.6%), rather than in the morning (22.2%; P=0.003 with the χ2-test). Antihypertensive medication taken in the evening or at bedtime is the most effective in treating morning hypertension when the patient adheres to the medication regimen.

Source: Hypertension Research/Nature.

 

 

 

Silent brain infarct is independently associated with arterial stiffness indicated by cardio-ankle vascular index (CAVI).


It is still unclear whether silent brain infarct (SBI) and white-matter hyperintensities (WMHs) on magnetic resonance imaging (MRI) scans are associated with cardio-ankle vascular index (CAVI), a novel parameter of arterial stiffness. We studied 220 consecutive patients (mean age, 69 years) without a history of stroke or transient ischemic attack. Patients were assessed for the presence of SBI, WMHs and risk factors. Arterial stiffness was evaluated using CAVI. Patients were categorized into one of two groups according to the presence or absence of SBI and WMHs, and clinical characteristics were compared between the two groups. CAVI was significantly higher in patients with SBI or in patients with WMHs than in those without those respective findings. The CAVI cutoff values for detection of SBI and WMHs were 9.2 and 8.9, respectively. On multivariable analyses, CAVI was independently associated with SBI (per one-point increase in CAVI: odds ratio (OR), 1.25; 95% confidence interval (CI), 1.01–1.56; CAVI ≥9.2: OR, 2.34; 95% CI, 1.16–5.02); however, CAVI was not independently associated with WMHs. Patients with CAVI ≥9.2 had a higher OR for the presence of both SBI and WMHs (OR, 2.57; 95% CI, 1.15–5.98) than patients with CAVI <9.2 after adjustment for age and sex. SBI is independently associated with arterial stiffness indicated by CAVI.

Source: Hypertension Research/Nature.

 

 

Screening (and Some Rescreening) for Aortic Aneurysms Found Cost Effective.


A Danish study published in BMJ confirms the cost effectiveness of screening for abdominal aortic aneurysms in older men and explores the benefits of rescreening in those with increased aortic diameters. (The U.S. Preventive Services Task Force recommends one-time screening for men aged 65 to 75 who have ever smoked.)

Researchers used Danish national registries to characterize a theoretical cohort of 100,000 men at age 65. They then modeled screening versus no screening — and then, among those who were initially screened, a single repeat screening at 5 years versus screening every 5 years for life.

Screening was more cost effective than no screening. Using a cost-effectiveness threshold of roughly $30,000 for each quality-adjusted life-year gained, a single rescreening after 5 years seemed optimal in those with aortic diameters of 25 to 29 mm on the initial screen.
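
For context, cost-effectiveness analyses of this kind compare each strategy's incremental cost-effectiveness ratio (ICER) with the willingness-to-pay threshold; the formula below is a general illustration of that decision rule, not a calculation reported in the study:

\[
\text{ICER} = \frac{\Delta \text{Cost}}{\Delta \text{QALYs gained}}
\]

A strategy is judged cost effective when its ICER falls below the threshold, here roughly $30,000 per quality-adjusted life-year gained.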

However, the authors cite “substantial uncertainty” and call for further research on the rates of aneurysm growth and rupture.

Source: BMJ

 

First Confirmed Case of Congenital Chagas Disease in U.S.


The CDC details the first confirmed case of congenital Chagas disease in the United States in this week’s MMWR. CDC officials say that although the disease is not endemic to the U.S., there are an estimated 300,000 people here with chronic Chagas disease and as many as 638 congenital transmissions annually.

The CDC reminds clinicians of the following:

  • Pregnant women who have emigrated from Mexico, Central America, or South America and who may have been infected with Trypanosoma cruzi should be identified and screened.
  • If a woman has chronic Chagas disease, the infant should be tested and, if infected, promptly treated.
  • If a mother is seropositive, all her children should be tested and treated as needed.
  • Mothers who test positive should be treated with an antitrypanosomal drug after they finish breast-feeding.

Source: MMWR

 

Pathological Complete Response and Accelerated Drug Approval in Early Breast Cancer.


New drugs for breast cancer have historically been approved first for patients with metastatic disease who have few remaining options for systemic treatment. Approval for an adjuvant indication occurs years later, after large, randomized trials with prolonged follow-up have been conducted in patients with early-stage disease. Recently, neoadjuvant trials have introduced new drugs preoperatively in patients with localized breast cancer. Such treatment aims to render locally advanced cancers operable, facilitate breast-conserving surgery, and ultimately improve survival. The rate of pathological complete response — absence of residual invasive cancer on pathological evaluation of resected breast specimens and lymph nodes after preoperative therapy — has been used as the primary end point in many neoadjuvant trials.

 

Promising investigational drugs should be incorporated into standard treatment for early-stage breast cancer as rapidly as possible to provide the greatest benefit to the most patients. But this goal must be weighed against the limited safety data available for new drugs when they are used in patients with curable cancer, and against uncertainty about whether an improvement in pathological complete response will predict improvements in long-term disease-free or overall survival.

 

The uncertainties regarding the risks and benefits of new neoadjuvant drugs may be managed by enrolling patients who have the greatest risk of recurrence with existing therapies and are likely to benefit the most. Although modern cytotoxic regimens have reduced 10-year breast-cancer–related mortality by approximately one third, certain patients with early-stage breast cancer, particularly those with high-grade tumors that are negative for estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) (i.e., triple negative), remain at substantial risk for distant metastatic disease and death.

 

Randomized neoadjuvant trials suggest that a pathological complete response may predict disease-free or overall survival among patients with early-stage breast cancer who are treated with preoperative systemic therapy. A Cochrane meta-analysis of 5500 patients enrolled in 14 randomized trials comparing preoperative with postoperative chemotherapy showed that the risk of death among patients who had a pathological complete response was about half that of patients with residual tumor at the time of surgery.1

 

In the United States, regular approval of a new drug requires adequate, well-controlled trials demonstrating clinical benefit, which is generally defined in early-stage breast cancer as an improvement in disease-free or overall survival. Alternatively, the Food and Drug Administration (FDA) may grant accelerated approval on the basis of a surrogate end point that is “reasonably likely to predict clinical benefit.” For neoadjuvant breast-cancer treatment, we propose that the rate of pathological complete response be used as this surrogate.2 After accelerated approval, demonstration of an improvement in disease-free or overall survival would be required; the indication may be withdrawn from product labeling if confirmatory trials have not shown clinical benefit.

 

For regulatory purposes, neoadjuvant trials evaluating a new drug with limited safety data should enroll patients with high-risk features and exclude those with ER- or PR-positive tumors lacking these characteristics. Patients may be classified as having a high risk of recurrence on the basis of conventional histologic features or appropriately validated genomic measures. The highest rates of pathological complete response have generally been observed among patients with high-grade ER- and PR-negative tumors and those with HER2-positive tumors.3 Although patients with triple-negative breast cancer have an increased risk of recurrence, if a pathological complete response is achieved, the likelihood of survival may be similar to that among patients with more prognostically favorable subtypes.3,4 Patients with ER- and PR-positive tumors are less likely to have a pathological complete response to neoadjuvant therapy and more likely to live longer with available therapy; pathological complete response is thus unlikely to predict clinical benefit in this subgroup.3 We discourage enrollment of patients with low-grade ER- and PR-positive tumors into neoadjuvant trials conducted with regulatory intent.

 

A large, randomized trial of neoadjuvant breast-cancer treatment using an add-on design — studying a standard adjuvant regimen with or without the new drug, all delivered preoperatively — could be used for an accelerated-approval submission. This single randomized trial, if adequately powered, could both support accelerated approval on the basis of substantial improvement in the pathological complete response rate and, with further follow-up, provide data on potential improvements in disease-free and overall survival to establish clinical benefit. Use of postoperative systemic therapy should be avoided but if needed (e.g., for completion of a year of adjuvant trastuzumab for HER2-positive breast cancer) should be consistent in both treatment groups to avoid confounding interpretations of disease-free and overall survival. Demonstration, with mature data, of a clinically and statistically significant improvement in disease-free or overall survival would fulfill the requirements for regular approval and permit continued marketing of the drug for neoadjuvant use in breast cancer.

 

At the time of accelerated approval, characterization of long-term toxic effects may be incomplete, and uncommon adverse events may not be recognized or fully described. A comprehensive safety assessment is critical in evaluating the benefits of neoadjuvant therapy for patients with early-stage breast cancer, among whom long-term survival is common and may result solely from local therapy. There is a risk that a drug approved in this way could be marketed for a prolonged period, exposing many patients with curable disease and potentially normal longevity to the risks posed by an ultimately ineffective therapy. To mitigate this risk, randomized neoadjuvant trials conducted with marketing intent should be limited to subpopulations at high risk for recurrence despite optimal local and systemic therapies, and confirmatory trials should be ongoing at the time of accelerated approval.

 

The trial design described above would isolate the new drug’s effect and provide a larger body of safety data at the time of accelerated approval. Continued follow-up in the same trial would provide essential information on late or cumulative toxic effects, as well as mature efficacy outcome data, far more quickly than a subsequent adjuvant trial could do and would hasten clarification of the relationship between the pathological complete response rate and survival. For drugs with more extensive prior use in breast cancer or evidence of unprecedented efficacy, or for drugs being studied in ongoing randomized adjuvant trials, alternative approaches may be acceptable and should be discussed with the FDA.

 

The proposed magnitude of the difference between treatment groups in the pathological complete response rate should be prespecified and have a high likelihood of translating into a meaningful improvement in disease-free or overall survival; the sample size required to demonstrate a significant difference in survival may be substantially larger than that needed to demonstrate a significant difference in pathological complete response rates. Statistical analyses should use the full intention-to-treat population. Since distant metastatic disease will develop in some patients with a pathological complete response, small absolute improvements probably will not have a meaningful effect on long-term clinical benefit; substantial improvements may be needed to improve disease-free or overall survival. For example, in a neoadjuvant trial of chemotherapy with or without trastuzumab, the group that received trastuzumab had a near doubling of the pathological complete response rate (39% vs. 20%) and a 3-year disease-free survival rate of 71%, as compared with 56% in the other treatment group.5 Similarly, adjuvant trials of chemotherapy with or without trastuzumab have demonstrated an approximately 50% relative (12% absolute) improvement in disease-free survival when trastuzumab was added.6
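
The relationship between relative and absolute improvements can be illustrated with a generic calculation (not data from the cited trials): if the control-group event rate is 24%, a 50% relative reduction lowers it to

\[
24\% \times (1 - 0.50) = 12\%,
\]

an absolute improvement of 12 percentage points. The same 50% relative effect applied to a baseline event rate of 10% yields only a 5-point absolute gain, which is why a modest absolute difference in pathological complete response rates may not translate into a meaningful survival benefit.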

 

Despite the promise of pathological complete response as an end point for accelerated approval, unresolved issues remain, including the definition of such a response that optimally predicts long-term clinical outcomes, the intrinsic breast-cancer subtypes most likely to show such a response, and the magnitude of improvement needed to produce meaningful improvements in disease-free and overall survival. In collaboration with international investigators, the FDA is conducting a meta-analysis using primary-source data from more than 12,000 patients enrolled in randomized neoadjuvant trials, aiming to identify those in whom a pathological complete response is most likely to predict clinical benefit by correlating this end point with disease-free and overall survival in intrinsic breast-cancer subtypes.

 

The FDA has released a draft Guidance to Industry,2 outlining a pathway to accelerated approval for neoadjuvant breast-cancer therapies and seeking public comments on the use of pathological complete response as an end point for accelerated approval.

Source: NEJM

 

 

Delays and Difficulties in Assessing Metal-on-Metal Hip Implants.


More than 500,000 U.S. patients have received metal-on-metal hip prostheses, most of which were implanted between 2003 and 2010. These prostheses entered the market through the 510(k) pathway at the Food and Drug Administration (FDA), whereby manufacturers need only demonstrate substantial equivalence to a device already on the market to gain approval. Unfortunately, there is now compelling evidence that these implants fail at a higher rate than hip prostheses made of other materials1; indeed, one type of metal-on-metal hip has a failure rate of nearly 50% at 6 years.2 Moreover, a number of unresolved questions related to these devices remain, including what the relationship is between serum metal ion levels and the occurrence of local and systemic adverse events, what relationship exists between serum metal levels and the need for revision surgery, how much product-level variation there is in revision rates and adverse events, and which patient- and clinician-level variables are associated with those rates and events.

 

On May 6, 2011, in response to these public health concerns, the FDA ordered manufacturers of metal-on-metal hip implants to conduct postmarket surveillance studies on their products. (The FDA can require such studies under the Food, Drug, and Cosmetic Act [FDCA] if failure would be reasonably likely to have serious adverse health consequences, if the device is expected to have considerable use in pediatric populations, if it’s intended to be implanted for more than 1 year, or if it’s intended to be a life-sustaining or life-supporting device used outside a device user facility.) The FDA directed the implant manufacturers to examine both adverse events and patients’ pre- and postimplantation levels of chromium and cobalt.3 The agency recommended a cross-sectional study design to capture data on patients from the time of initial implantation to 8 years later. It also recommended that manufacturers conduct a failure analysis to evaluate the devices that had been explanted from patients who had participated in clinical studies and all other reasonably available explants.

 

Unfortunately, data from these studies, which are funded by the manufacturers, will not be available to protect the public health anytime soon. As of June 18, 2012, the FDA and manufacturers had reached agreements on study protocols for less than one quarter of the devices, most of the study plans had not been finalized, and it was unclear whether any studies had begun. Moreover, methodologic issues will limit the usefulness of the information that emerges from these studies. These problems illustrate some of the challenges associated with postmarketing surveillance of medical devices in the United States — and are some of the fraught issues for the FDA to consider when it convenes the Orthopaedic and Rehabilitation Devices Panel of the Medical Devices Advisory Committee to discuss metal-on-metal hip implants on June 27 and 28, 2012.

 

According to the FDA’s website, which is updated monthly, just 24 of the 104 metal-on-metal hip products (23%) for which manufacturers face an active order to complete a study were categorized as having a “Study Pending,” which indicates that the FDA has approved the study plan. The studies for the remaining 80 products were listed as having either a “Plan Pending” or a “Plan Overdue.” (We exclude products with a study status of “Other” or “Terminated,” which indicate that there is not an active study for these products — because they were never marketed, for example.)

 

Several factors may contribute to delays in study initiation: the development of a scientifically sound study protocol is time-consuming and resource-intensive; manufacturers lack incentives to conduct studies that may reveal adverse information about their products; and delays at the FDA may slow down the process of finalizing study protocols. Strategies for reducing delays must address at least one of these factors.

 

Unfortunately, even when the studies proceed, limitations may constrain the amount of useful information that emerges. One significant shortcoming is that each manufacturer is permitted to conduct its own independent study on its product or products. The resulting lack of harmonization among studies will lead to challenges in pooling the data and making cross-product comparisons. For example, variations in definitions of outcomes, collection of patient- and provider-level data, and patient follow-up could undermine attempts to understand product-level differences. Similarly, companies may measure chromium and cobalt levels differently, using varied assays, laboratories, and protocols and introducing uncertainty into attempts to pool results. Although identical protocols would not necessarily be appropriate for all products, this lack of harmonization limits the public health benefit of the studies.

 

The FDCA grants the FDA the authority to ensure that “the [study] plan will result in the collection of useful data that can reveal unforeseen adverse events or other information necessary to protect the public health”; in certain instances, a requirement of study harmonization may fall within that authority. Given that many protocols for metal-on-metal–hip studies are not yet finalized, the FDA may still have an opportunity to maximize the amount of information that will be provided.

 

Another weakness is that current law prevents the FDA from requiring studies such as these to last more than 3 years for most devices. Given that hip implants are anticipated to last for 15 years, a 3-year study will neither capture all adverse events nor obtain the 8 years’ worth of data that the FDA requested for a device that only recently entered the marketplace.

 

To improve this type of study, we recommend a few key approaches. First, the FDA and manufacturers must collaborate to ensure that protocols are finalized and studies initiated as quickly as possible. If the manufacturers are causing the delay, the FDA should use its available enforcement tools — warning letters, fines, and removal from the marketplace — to demonstrate its commitment to postmarketing surveillance and provide incentives for manufacturers to complete the studies. In addition, given recent increases in the number of these studies ordered by the FDA, the agency must be sure it has sufficient numbers of staff members and the expertise to ensure that the studies use appropriate methods and are conducted promptly.

 

Second, the study methods should be harmonized to the extent possible. The FDA should explore ways of improving the coordination of studies conducted by multiple manufacturers to ensure that data can be pooled and cross-product comparisons can be made where appropriate. Coordination could be improved, in part, through creative engagement with external stakeholders. Possible models include the FDA advisory committees and efforts like the FDA-initiated International Consortium of Orthopedic Registries.

 

Third, the infrastructure needs improvement. Among other factors, the lack of comprehensive national medical-device registries, the absence of unique device identifiers, the inability of data systems to “talk” to one another easily, and inadequate funding hamper efforts to conduct postmarketing surveillance studies in the United States. Indeed, the first concerns regarding high revision rates for metal-on-metal hip implants arose out of the Australian joint registry in 2007, and additional data emerged from the National Joint Registry of England and Wales in 2010. These studies could be launched more quickly if there were an established U.S.-based infrastructure to support them. Infrastructure investments would benefit all stakeholders: manufacturers could conduct less costly postmarketing trials and use the feedback for iterative device improvements; patients and clinicians would gain information on the risk–benefit profiles of devices; and the FDA would have more confidence that adverse health effects would be quickly identified.

 

While the FDA and other stakeholders struggle with these systemic issues, the problems with metal-on-metal hip implants will continue to occupy the agency, clinicians, manufacturers, and thousands of affected patients for the foreseeable future. The upcoming advisory committee meeting would be an opportune time for the FDA to address the slow start to these studies — and to signal that substantial penalties may be assessed against any manufacturer that is responsible for delays in finalizing protocols.

Source: NEJM

 

Central-Airway Necrosis after Stereotactic Body-Radiation Therapy.


Stereotactic body-radiation therapy (SBRT) delivers large doses of radiation with millimeter accuracy.1 With SBRT, control rates for stage I non–small-cell lung cancer are 90% or greater, and this effectiveness has led to its worldwide adoption in treating patients with inoperable disease.1,2 Despite technological advances that permit the precision required for SBRT, normal tissues near the tumor receive higher biologic doses of radiation than with standard treatment. Consequently, patients with tumors adjacent to radiation-sensitive structures, such as the large airways, great vessels, heart, phrenic nerves, and spinal cord, may be at an increased risk for severe radiation injury.3 Documenting the extent of the toxic effects on these central structures represents a challenge given the competing risk of death in patients with lung cancer and the extended time required for toxicity to develop.

 

In a seminal study, the risk of severe toxicity among patients with central tumors treated with a full-dose regimen of 60 to 66 Gy of radiation administered in three fractions was 11 times as high as the risk among patients with peripheral tumors.3 Consequently, an SBRT “danger zone” was defined, and subsequent multi-institutional trials have excluded patients with tumors in this area. A more protracted and presumably safer fractionation scheme (in which 50 Gy of radiation was administered in five fractions) has been widely adopted for the treatment of centrally located tumors and is the starting point for a dose-determination trial.4,5 Below we describe the clinicopathological features of central-airway necrosis in a patient who had received SBRT, with 50 Gy administered in five fractions, 8 months earlier.
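
The rationale for regarding five smaller fractions as gentler than three larger ones can be made explicit with the standard linear-quadratic biologically effective dose (BED) formula; the numbers below are an illustrative calculation assuming \(\alpha/\beta = 10\) Gy, not values reported in the cited studies:

\[
\text{BED} = n\,d\left(1 + \frac{d}{\alpha/\beta}\right)
\]

\[
3 \times 20~\text{Gy}: \; 60\left(1 + \tfrac{20}{10}\right) = 180~\text{Gy}_{10}, \qquad
5 \times 10~\text{Gy}: \; 50\left(1 + \tfrac{10}{10}\right) = 100~\text{Gy}_{10}
\]

By this measure, the three-fraction regimen delivers a substantially higher biologically effective dose to tissues receiving the full prescription, consistent with the greater toxicity observed for central tumors.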

 

A 61-year-old woman with a smoking history of 52 pack-years presented with two primary non–small-cell lung cancers: a central tumor measuring 1.4 cm in diameter (Figure 1A) and a peripheral tumor measuring 2.4 cm in diameter (Figure 1B). (Figure 1: Initial Tumors and Post-SBRT Necrotic Tissue in a Patient with Non–Small-Cell Lung Cancer.) Biopsies of the tumors confirmed that both were adenocarcinomas. Staging studies revealed no metastatic disease. Poor pulmonary function precluded the performance of surgery.

 

The patient was treated with SBRT in accordance with a protocol for a registration study that allows for long-term surveillance of adverse events; the protocol was approved by an institutional review board. Dose, fractionation, technique, and constraints were established and applied in accordance with published standards.5 Acute toxicity was not observed, and the patient had an excellent radiographic response.

 

A surveillance scan obtained with the use of positron-emission tomography–computed tomography 8 months after treatment showed new mediastinal metastases, both of which were confirmed on the examination of biopsy specimens as recurrent adenocarcinomas. Incidental findings included an extensive area of necrosis in the proximal right airway (Figure 1C, 1D, and 1E) in the tissue within the irradiated area. (A three-dimensional video reconstruction of the larynx, trachea, and proximal main bronchi that shows the area of necrosis is available with the full text of this letter at NEJM.org.)

 

The patient received one cycle of treatment with pemetrexed and cisplatin before plans for salvage chemoradiotherapy were abandoned. Several weeks later hemoptysis developed, necessitating intubation. Bronchoscopy confirmed that the bleeding originated from the right proximal airway. With the consent of the family, care was transitioned to comfort-only measures, and the patient died 11 months after her original presentation.

 

This report of fatal central-airway necrosis in a patient treated with SBRT underscores the importance of long-term follow-up of patients with central tumors and the necessity of protocol-based treatment. Furthermore, it may be prudent to consider post-treatment bronchoscopic surveillance of patients with central tumors to determine the true frequency of tracheobronchial injury.

 

SBRT is an effective treatment for patients with peripheral stage I non–small-cell lung cancer that is inoperable. However, the long-term effects of this treatment, especially on central lesions, should be carefully documented and reported.

Source: NEJM