Tuberculosis, Drug Resistance, and the History of Modern Medicine


Tuberculosis is a treatable airborne infectious disease that kills almost 2 million people every year. Multidrug-resistant (MDR) tuberculosis — by convention, a disease caused by strains of Mycobacterium tuberculosis that are resistant to isoniazid and rifampin, the backbone of first-line antituberculosis treatment — afflicts an estimated 500,000 new patients annually. Resistance to antituberculosis agents has been studied since the 1940s; blueprints for containing MDR tuberculosis were laid out in the clinical literature and in practice, in several settings, more than 20 years ago.1,2 Yet today, barely 0.5% of persons with newly diagnosed MDR tuberculosis worldwide receive treatment that is considered the standard of care in the United States.3 Those who have not received appropriate treatment continue to fuel a global pandemic that now includes strains resistant to most — and by some accounts all — classes of drugs tested.4,5 Despite the enormity of the threat, investments to contain the epidemic and to cure infected patients have been halting and meager when compared, for example, with those made to address the acquired immunodeficiency syndrome (AIDS) pandemic. In this essay we seek to elucidate the reasons for the anemic response to drug-resistant tuberculosis by examining the recent history of tuberculosis policy.

Research in Tuberculosis — Midwife of Modern Biomedicine

On the evening of March 24, 1882, when Robert Koch completed his presentation on the infectious cause of tuberculosis, silence enveloped the crowded room at the Berlin Physiological Society.6 A means of combating tuberculosis — a disease that in the 19th century caused, by some accounts, about 25% of all deaths in Massachusetts and New York and claimed the lives of one fourth of Europe’s population — was now within reach.7 Koch summarized the importance of his findings, for which he received the 1905 Nobel Prize, in a manuscript published in the Berliner Klinische Wochenschrift shortly after his announcement: “In the future the fight against this terrible plague of mankind will deal no longer with an undetermined something, but with a tangible parasite, whose living conditions are for the most part known and can be investigated further.”8

But therapy lagged. It was not until more than 60 years later, in 1943, that the first effective antituberculosis agent, streptomycin, was isolated in the laboratory of Selman Waksman at Rutgers University (see timeline, available with the full text of this article at NEJM.org). In November 1944, a patient with tuberculosis received streptomycin and was declared cured of the disease.6 Other cases of successful treatment soon followed.9,10 The British Medical Research Council conducted the first large-scale clinical trial of streptomycin, the results of which were published in 1948.11 This study, said to be the world’s first published drug trial that involved the randomization of participants, set the methodologic standard for modern randomized, controlled trials. Although many patients were cured, a substantial proportion had a relapse; mycobacterial isolates cultured from the latter patients showed resistance to streptomycin.12 That same year, two new antituberculosis agents, thiacetazone and para-aminosalicylic acid, came on the market. When either of these agents was administered with streptomycin, cure rates rose and acquired antibiotic resistance declined.13 In 1951, isonicotinic acid hydrazide (isoniazid) was tested at Sea View Hospital in New York; it dramatically improved clinical outcomes and was soon introduced for wider use.14 Isoniazid was followed by the development of pyrazinamide (1952), cycloserine (1952), ethionamide (1956), rifampin (1957), and ethambutol (1962).

With its high level of efficacy and ease of administration, rifampin revolutionized the treatment of tuberculosis.15-17 But the advent of every new drug led to the selection of mutations conferring resistance to it. Resistance to rifampin was observed soon after it was first administered.18 Laboratory data from trials revealed the rapid onset of isoniazid resistance among patients receiving monotherapy and the suppression of resistance when isoniazid was given in combination with streptomycin or para-aminosalicylic acid.19 These observations led to the use of multidrug treatment regimens — a strategy widely used today to treat a variety of infectious diseases and cancers. Ultimately, through a series of multicountry clinical trials led by the British Medical Research Council, a four-drug regimen was recommended for use in patients with newly diagnosed tuberculosis. The backbone of such empirical regimens was the combination of isoniazid and rifampin, the most effective and reasonably well-tolerated oral agents, given for 6 to 8 months. Thus, short-course chemotherapy was born.19

Drug resistance, however, has remained a challenge. The early hypothesis that resistance always conferred a loss of bacterial fitness, and hence led to lower case fatality rates and decreased transmission of such strains, had been disproved by the 1950s.19 The first national drug-resistance survey in the world, which involved 974 clinical isolates cultured from newly diagnosed cases of tuberculosis in Britain (1955–1956), showed strains that were resistant to streptomycin (2.5%), para-aminosalicylic acid (2.6%), and isoniazid (1.3%).20 Similarly, data from the United States showed that isoniazid resistance increased from 6.3% (between 1961 and 1964) to 9.7% (between 1965 and 1968) among patients with newly diagnosed tuberculosis.21 Between 1970 and 1990, there were numerous outbreaks of drug-resistant tuberculosis involving strains resistant to two or more drugs.17,22,23 As early as 1970, an outbreak in New York City of highly virulent tuberculosis that was resistant to multiple drugs proved to be a grim reminder that resistance did not necessarily reduce a microbe’s fitness: the index patient died; 23 of 28 close contacts had evidence of new infection, and active, drug-resistant disease developed in 6 of these 23 contacts, 5 of whom were children.21

Tuberculosis, whether caused by drug-susceptible or drug-resistant strains, rarely made even medical headlines, in part because its importance as a cause of death continued to decline in areas in which headlines are written. In such settings, where many of the social determinants of tuberculosis — extreme poverty, severe malnutrition, and overcrowded living conditions — became the exception rather than the norm, some public health experts declared that “virtual elimination of the disease as a public health problem” was in sight.24 In the United States, federal funding for tuberculosis research was cut; consequently, drug discovery, development of diagnostics, and vaccine research ground almost to a halt.17

The Great Divergence in Tuberculosis Policy

Optimism that tuberculosis would soon be eliminated was not restricted to wealthy countries. At the 1978 International Conference on Primary Health Care in Alma-Ata (now called Almaty), Kazakhstan, delegates from around the world endorsed the goal of “health for all by the year 2000.” The world’s last known case of naturally occurring smallpox had been recorded the previous year, and the future of international public health looked promising to many who were gathered there.

But it was not to be. By the mid-20th century, tuberculosis outcomes had already diverged along the fault lines of the global economy: while tuberculosis became rare in countries where income was high, epidemics of the disease raged on in low-income settings. In 1982, the Mexican government defaulted on many of its loan payments, triggering a debt crisis in many countries with weak economies. Increasing numbers of international health donors and policymakers, slow to contribute resources toward the ambitious Alma-Ata agenda, embraced the idea of selective primary health care: discrete, targeted, and inexpensive interventions.25,26 Bilateral assistance withered, and poor countries became increasingly reliant on loans from international financial institutions such as the World Bank, which based its health agenda on the principles of “cost-effectiveness” and “affordable health for all” — the latter concept a nod to the Alma-Ata Declaration.27

Selective primary health care offered clear targets, measurable outcomes, and a high return on health investments, all of which appealed to donors worried about investing in countries that were on the brink of default.28,29 But several leading causes of disability and death, including tuberculosis, were deemed too costly and complex to address in resource-poor settings and were largely excluded from the emerging, constricted agenda for effective health investments. “Leprosy and tuberculosis require years of drug therapy and even longer follow-up periods to ensure cure,” wrote two of the architects of selective primary health care in 1979. “Instead of attempting immediate, large-scale treatment programs for these infections, the most efficient approach may be to invest in research and development of less costly and more efficacious means of prevention and therapy.”25

But tuberculosis, which persisted in settings of poverty, could not be hidden away for long. In 1993, the World Bank began to use disability-adjusted life-years — a means of measuring the “cost-effectiveness” of a given health intervention that took into account morbidity, mortality, and age — to determine which health interventions to support.30 As a result of this new economic calculus, short-course chemotherapy for tuberculosis was declared a highly “cost-effective” intervention and gained momentum.31 Seizing the opportunity, the World Health Organization (WHO) shaped and promoted the DOTS (directly observed therapy, short-course) strategy, an approach that conformed to the selective primary health care agenda: simple to implement, algorithmic, and requiring no expensive inputs. According to this strategy, the diagnosis was to be made with the use of smear microscopy alone — despite this technique’s limited sensitivity and its inability to detect drug resistance — and the treatment approach was to be based on the empirical use of first-line antituberculosis agents only.32 Facility-based infection control was not part of the DOTS strategy. Despite these exclusions, DOTS was an important development in global tuberculosis policy. Increasingly, poor countries began implementing the DOTS approach; many lives were saved and many new cases averted. However, for children with tuberculosis, people with tuberculosis and advanced disease from the human immunodeficiency virus (HIV), and the increasing proportion of patients infected with strains of M. tuberculosis that were already drug-resistant, the DOTS strategy provided limited options for prompt diagnosis and cure.
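In brief, and setting aside the age weighting and discounting used in the World Bank’s original 1993 calculations, the disability-adjusted life-year (DALY) can be sketched as the sum of years of life lost to premature death and years lived with disability:

DALY = YLL + YLD
YLL (years of life lost) = number of deaths × standard life expectancy at age of death
YLD (years lived with disability) = number of incident cases × disability weight × average duration of illness

Because tuberculosis kills mainly young adults and short-course chemotherapy is inexpensive, treatment averts many DALYs per dollar spent, which is the arithmetic behind its designation as a highly “cost-effective” intervention.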

The Emergence of MDR Tuberculosis Globally

These shifts in tuberculosis policy — linked to the reconceptualization of this leading infectious killer of young adults and children from a disease deemed to be costly and difficult to treat to a disease deemed to be “cost-effective” to treat and slated for eradication — convey precisely what is meant by the “social construction of disease.”33 M. tuberculosis did not conform to the regnant disease-control strategy, and resistant strains continued to emerge and to be transmitted because empirical treatment with first-line antituberculosis drugs was ineffective for those sick with strains resistant to these drugs. HIV infection fanned epidemics of tuberculosis. In the late 1980s and early 1990s, outbreaks of MDR tuberculosis were again reported in the United States.17 Genetic analysis of drug-resistant strains showed that airborne transmission of undetected and untreated strains played a major role in these outbreaks, disabusing practitioners of the notion that resistance stemmed solely from “sporadic pill taking.”17,34 Public health officials developed a national action plan to combat drug-resistant tuberculosis and to increase funding for relevant research.17,35-37 The experience in New York City offered a blueprint that was quite different from the DOTS strategy; it consisted of diagnosis with the use of mycobacterial culture and fast-track drug-susceptibility testing, access to second-line antituberculosis medications, proper infection control, and delivery of medications under direct observation.1

Outbreaks of MDR tuberculosis in the United States were a harbinger of the coming global pandemic. By the early-to-mid-1990s, MDR tuberculosis had been found wherever the diagnostic capacity existed to reveal it. But in contrast to the U.S. strategy, the WHO — the principal standard-setting body for many countries — continued to advocate the use of sputum-smear microscopy and first-line antituberculosis treatment alone for combating epidemics in resource-poor settings. Some international policymakers thought that treating MDR tuberculosis would be too expensive and complex — claims similar to those made about treating drug-susceptible tuberculosis before this approach was found to be “cost-effective” — and would distract attention from the newly branded (and often successful) DOTS strategy.38 Contemporaneous experience in the United States and in several countries in the former Soviet Union suggested, however, that short-course chemotherapy was ineffective against strains shown to be resistant to precisely those drugs on which such therapy was based.1,17,39,40

The Limits of Short-Course Chemotherapy

The failure of short-course chemotherapy against MDR tuberculosis, though unsurprising clinically, was difficult politically. In Peru, for example, a campaign to promote the DOTS strategy had been so successful in making short-course chemotherapy available that the country’s leaders elevated it as a point of national pride. Peru emerged as a crucible for debates about the treatment and management of MDR tuberculosis in poor countries.2 In 1995, an outbreak in a shantytown in the northern reaches of Lima was identified.41 Many patients were infected with strains found to have broad-spectrum resistance to first-line drugs. Nongovernmental organizations worked with the Peruvian Health Ministry to apply the standard-of-care treatment used in New York City and elsewhere in the United States. The strategy was modified to provide community-based care, with good results.42 After arguing that the DOTS strategy alone could rein in the mutant bacteria, the WHO and other international public health authorities advised the Peruvian government to adopt a low-cost, standardized regimen for the treatment of MDR tuberculosis rather than protocols based on the results of drug-susceptibility testing. In the absence of tailored therapy, many hundreds of deaths occurred among some of Lima’s poorest people.43 As expected, amplification of drug resistance was documented.44,45

By the end of the 1990s, facing mounting evidence that MDR tuberculosis could be treated effectively in resource-poor settings,46,47 a multi-institutional mechanism — the Green Light Committee — was created to encourage and learn from pilot projects for treating MDR tuberculosis.2,17,48 This coincided with a grant from the Bill and Melinda Gates Foundation to scale up treatment of MDR tuberculosis in Peru and elsewhere and to change global policy.

Tuberculosis Policy and Global Health Equity

Drug resistance is well established as an inevitable outcome of antibiotic use; the fault lines of the MDR tuberculosis pandemic are largely man-made. The contours of global efforts against tuberculosis have always been mediated by both biologic and social determinants, and the reasons for the divergence in the rates of tuberculosis and drug resistance between rich and poor countries are biosocial.49 As case rates dropped in wealthy countries, funding for research and implementation programs dried up, even though tuberculosis remained the world’s leading infectious killer of young adults throughout the 20th century. Tuberculosis “control” in the 1990s was defined by the legacy of selective primary health care: targeted, “cost-effective” interventions packaged together, in the case of tuberculosis, as the DOTS strategy. Such protocols helped standardize tuberculosis treatment around the world — a process that was sorely needed — but they hamstrung practitioners wishing to address diagnostic and therapeutic complexities that could not be resolved by sputum-smear microscopy, short-course chemotherapy, or other one-size-fits-all approaches. These complexities, which now range from pan-resistant tuberculosis to undiagnosed pediatric disease, account for more than a trivial fraction of the 9 million new cases of tuberculosis and the almost 2 million deaths from this disease that occur around the globe each year.

The history of divergent policies for combating drug-resistant tuberculosis shows that decades of clinical research and effective programs in high-income settings did not lead to the deployment of similar approaches in settings of poverty. Achieving that goal demands a commitment to equity and to health care delivery.50 The U.S. response to the outbreaks of MDR tuberculosis in New York City and elsewhere was bold and comprehensive; it was designed to halt the epidemic.1,17 A similar response has not yet been attempted in low- and middle-income countries. Instead, selective primary health care and “cost-effectiveness” have shaped an anemic response to the ongoing global pandemic.

New diagnostics and therapeutics are urgently needed; most of the methods used currently were developed decades ago. Today, we have rapid nucleic acid–based tests for drug-resistant tuberculosis, sound models for laboratory expansion and for treatment delivery, and several drug candidates in the pipeline. To tackle tuberculosis, we also need an equity plan that takes seriously the biosocial complexity of a lethal airborne infection that has stalked us for centuries. The global AIDS effort of the past decade has shown how much can be accomplished in global health when effective diagnosis and care are matched with funding and political will. Stinting on investments or on bold action against tuberculosis — in all its forms — will ensure that it remains a leading killer of people living in poverty in this decade and the next.

Source Information

From the Program in Infectious Disease and Social Change, Department of Global Health and Social Medicine, Harvard Medical School; the Division of Global Health Equity, Brigham and Women’s Hospital; and Partners in Health — all in Boston.

Source: NEJM